00:00:00.000 Started by upstream project "autotest-per-patch" build number 132843 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.070 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.070 The recommended git tool is: git 00:00:00.070 using credential 00000000-0000-0000-0000-000000000002 00:00:00.072 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.085 Fetching changes from the remote Git repository 00:00:00.087 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.105 Using shallow fetch with depth 1 00:00:00.105 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.105 > git --version # timeout=10 00:00:00.122 > git --version # 'git version 2.39.2' 00:00:00.122 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.139 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.139 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.295 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.306 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.318 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:05.318 > git config core.sparsecheckout # timeout=10 00:00:05.328 > git read-tree -mu HEAD # timeout=10 00:00:05.345 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:05.367 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:05.367 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:05.454 [Pipeline] Start of Pipeline 00:00:05.467 [Pipeline] library 00:00:05.469 Loading library shm_lib@master 00:00:05.469 Library shm_lib@master is cached. Copying from home. 00:00:05.485 [Pipeline] node 00:00:05.498 Running on WFP3 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:05.500 [Pipeline] { 00:00:05.506 [Pipeline] catchError 00:00:05.508 [Pipeline] { 00:00:05.519 [Pipeline] wrap 00:00:05.528 [Pipeline] { 00:00:05.535 [Pipeline] stage 00:00:05.537 [Pipeline] { (Prologue) 00:00:05.874 [Pipeline] sh 00:00:06.157 + logger -p user.info -t JENKINS-CI 00:00:06.173 [Pipeline] echo 00:00:06.175 Node: WFP3 00:00:06.181 [Pipeline] sh 00:00:06.475 [Pipeline] setCustomBuildProperty 00:00:06.486 [Pipeline] echo 00:00:06.487 Cleanup processes 00:00:06.491 [Pipeline] sh 00:00:06.772 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.772 3982537 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.782 [Pipeline] sh 00:00:07.061 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.061 ++ grep -v 'sudo pgrep' 00:00:07.061 ++ awk '{print $1}' 00:00:07.061 + sudo kill -9 00:00:07.061 + true 00:00:07.076 [Pipeline] cleanWs 00:00:07.085 [WS-CLEANUP] Deleting project workspace... 00:00:07.085 [WS-CLEANUP] Deferred wipeout is used... 
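The "Cleanup processes" step above is a standard pre-build idiom: list anything still running out of this workspace, filter out the pgrep invocation itself, and kill the rest without failing the stage when nothing matches. A minimal standalone sketch of that pipeline in Bash (the workspace path is the one this job uses; the guard and trailing || true mirror the "+ true" in the trace, since kill -9 with an empty PID list would exit non-zero):

#!/usr/bin/env bash
# Kill SPDK processes left over from a previous run of this workspace.
ws=/var/jenkins/workspace/nvmf-tcp-phy-autotest
# pgrep -af prints "PID full-command-line"; drop the pgrep itself, keep the PIDs.
pids=$(sudo pgrep -af "$ws/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
# Tolerate an empty match so a clean node does not fail the build.
[ -n "$pids" ] && sudo kill -9 $pids || true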
00:00:07.091 [WS-CLEANUP] done 00:00:07.095 [Pipeline] setCustomBuildProperty 00:00:07.108 [Pipeline] sh 00:00:07.388 + sudo git config --global --replace-all safe.directory '*' 00:00:07.490 [Pipeline] httpRequest 00:00:07.802 [Pipeline] echo 00:00:07.803 Sorcerer 10.211.164.20 is alive 00:00:07.811 [Pipeline] retry 00:00:07.812 [Pipeline] { 00:00:07.823 [Pipeline] httpRequest 00:00:07.826 HttpMethod: GET 00:00:07.827 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.828 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.830 Response Code: HTTP/1.1 200 OK 00:00:07.830 Success: Status code 200 is in the accepted range: 200,404 00:00:07.831 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.938 [Pipeline] } 00:00:08.956 [Pipeline] // retry 00:00:08.963 [Pipeline] sh 00:00:09.246 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.262 [Pipeline] httpRequest 00:00:09.576 [Pipeline] echo 00:00:09.578 Sorcerer 10.211.164.20 is alive 00:00:09.588 [Pipeline] retry 00:00:09.590 [Pipeline] { 00:00:09.605 [Pipeline] httpRequest 00:00:09.610 HttpMethod: GET 00:00:09.610 URL: http://10.211.164.20/packages/spdk_7e2e6826353126a1d8560bb520840984d100c970.tar.gz 00:00:09.611 Sending request to url: http://10.211.164.20/packages/spdk_7e2e6826353126a1d8560bb520840984d100c970.tar.gz 00:00:09.631 Response Code: HTTP/1.1 200 OK 00:00:09.632 Success: Status code 200 is in the accepted range: 200,404 00:00:09.632 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_7e2e6826353126a1d8560bb520840984d100c970.tar.gz 00:01:13.718 [Pipeline] } 00:01:13.736 [Pipeline] // retry 00:01:13.743 [Pipeline] sh 00:01:14.026 + tar --no-same-owner -xf spdk_7e2e6826353126a1d8560bb520840984d100c970.tar.gz 00:01:16.593 [Pipeline] sh 00:01:16.877 + git -C spdk log --oneline -n5 00:01:16.877 7e2e68263 nvme/rdma: Fix reinserting qpair in connecting list after stale state 00:01:16.877 2104eacf0 test/check_so_deps: use VERSION to look for prior tags 00:01:16.877 66289a6db build: use VERSION file for storing version 00:01:16.877 626389917 nvme/rdma: Don't limit max_sge if UMR is used 00:01:16.877 cec5ba284 nvme/rdma: Register UMR per IO request 00:01:16.888 [Pipeline] } 00:01:16.901 [Pipeline] // stage 00:01:16.910 [Pipeline] stage 00:01:16.912 [Pipeline] { (Prepare) 00:01:16.927 [Pipeline] writeFile 00:01:16.942 [Pipeline] sh 00:01:17.225 + logger -p user.info -t JENKINS-CI 00:01:17.237 [Pipeline] sh 00:01:17.519 + logger -p user.info -t JENKINS-CI 00:01:17.531 [Pipeline] sh 00:01:17.814 + cat autorun-spdk.conf 00:01:17.814 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:17.814 SPDK_TEST_NVMF=1 00:01:17.814 SPDK_TEST_NVME_CLI=1 00:01:17.814 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:17.814 SPDK_TEST_NVMF_NICS=e810 00:01:17.814 SPDK_TEST_VFIOUSER=1 00:01:17.814 SPDK_RUN_UBSAN=1 00:01:17.814 NET_TYPE=phy 00:01:17.821 RUN_NIGHTLY=0 00:01:17.826 [Pipeline] readFile 00:01:17.849 [Pipeline] withEnv 00:01:17.851 [Pipeline] { 00:01:17.864 [Pipeline] sh 00:01:18.148 + set -ex 00:01:18.148 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:18.148 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:18.148 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:18.148 ++ SPDK_TEST_NVMF=1 00:01:18.148 ++ SPDK_TEST_NVME_CLI=1 00:01:18.148 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:18.148 ++ 
SPDK_TEST_NVMF_NICS=e810
00:01:18.148 ++ SPDK_TEST_VFIOUSER=1
00:01:18.148 ++ SPDK_RUN_UBSAN=1
00:01:18.148 ++ NET_TYPE=phy
00:01:18.148 ++ RUN_NIGHTLY=0
00:01:18.148 + case $SPDK_TEST_NVMF_NICS in
00:01:18.148 + DRIVERS=ice
00:01:18.148 + [[ tcp == \r\d\m\a ]]
00:01:18.148 + [[ -n ice ]]
00:01:18.148 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:18.148 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:18.148 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:18.148 rmmod: ERROR: Module i40iw is not currently loaded
00:01:18.148 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:18.148 + true
00:01:18.148 + for D in $DRIVERS
00:01:18.148 + sudo modprobe ice
00:01:18.148 + exit 0
00:01:18.158 [Pipeline] }
00:01:18.172 [Pipeline] // withEnv
00:01:18.177 [Pipeline] }
00:01:18.192 [Pipeline] // stage
00:01:18.201 [Pipeline] catchError
00:01:18.203 [Pipeline] {
00:01:18.216 [Pipeline] timeout
00:01:18.216 Timeout set to expire in 1 hr 0 min
00:01:18.217 [Pipeline] {
00:01:18.231 [Pipeline] stage
00:01:18.233 [Pipeline] { (Tests)
00:01:18.249 [Pipeline] sh
00:01:18.539 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:18.539 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:18.539 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:18.539 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:18.539 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:18.539 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:18.539 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:18.539 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:18.539 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:18.539 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:18.539 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:18.539 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:18.539 + source /etc/os-release
00:01:18.539 ++ NAME='Fedora Linux'
00:01:18.539 ++ VERSION='39 (Cloud Edition)'
00:01:18.539 ++ ID=fedora
00:01:18.539 ++ VERSION_ID=39
00:01:18.539 ++ VERSION_CODENAME=
00:01:18.539 ++ PLATFORM_ID=platform:f39
00:01:18.539 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:18.539 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:18.539 ++ LOGO=fedora-logo-icon
00:01:18.539 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:18.539 ++ HOME_URL=https://fedoraproject.org/
00:01:18.539 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:18.539 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:18.539 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:18.539 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:18.539 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:18.539 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:18.539 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:18.539 ++ SUPPORT_END=2024-11-12
00:01:18.539 ++ VARIANT='Cloud Edition'
00:01:18.539 ++ VARIANT_ID=cloud
00:01:18.539 + uname -a
00:01:18.539 Linux spdk-wfp-03 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 05:41:37 UTC 2024 x86_64 GNU/Linux
00:01:18.539 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:21.832 Hugepages
00:01:21.832 node hugesize free / total
00:01:21.832 node0 1048576kB 0 / 0
00:01:21.832 node0 2048kB 0 / 0
00:01:21.832 node1 1048576kB 0 / 0
00:01:21.832 node1 2048kB 0 / 0
00:01:21.832
00:01:21.832 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:21.832 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:01:21.832 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:01:21.832 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:01:21.832 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:01:21.832 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:01:21.832 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:01:21.832 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:01:21.832 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:01:21.832 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:01:21.832 NVMe 0000:5f:00.0 1b96 2600 0 nvme nvme1 nvme1n1 nvme1n2
00:01:21.832 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:01:21.832 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:01:21.832 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:01:21.832 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:01:21.832 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:01:21.832 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:01:21.832 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:01:21.832 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:01:21.832 + rm -f /tmp/spdk-ld-path
00:01:21.832 + source autorun-spdk.conf
00:01:21.832 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:21.832 ++ SPDK_TEST_NVMF=1
00:01:21.832 ++ SPDK_TEST_NVME_CLI=1
00:01:21.832 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:21.832 ++ SPDK_TEST_NVMF_NICS=e810
00:01:21.832 ++ SPDK_TEST_VFIOUSER=1
00:01:21.832 ++ SPDK_RUN_UBSAN=1
00:01:21.832 ++ NET_TYPE=phy
00:01:21.832 ++ RUN_NIGHTLY=0
00:01:21.832 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:21.832 + [[ -n '' ]]
00:01:21.832 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:21.832 + for M in /var/spdk/build-*-manifest.txt
00:01:21.832 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:21.832 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:21.832 + for M in /var/spdk/build-*-manifest.txt
00:01:21.832 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:21.832 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:21.832 + for M in /var/spdk/build-*-manifest.txt
00:01:21.832 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:21.832 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:21.832 ++ uname
00:01:21.832 + [[ Linux == \L\i\n\u\x ]]
00:01:21.832 + sudo dmesg -T
00:01:21.832 + sudo dmesg --clear
00:01:21.832 + dmesg_pid=3984119
00:01:21.832 + [[ Fedora Linux == FreeBSD ]]
00:01:21.832 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:21.832 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:21.832 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:21.832 + [[ -x /usr/src/fio-static/fio ]]
00:01:21.832 + export FIO_BIN=/usr/src/fio-static/fio
00:01:21.832 + FIO_BIN=/usr/src/fio-static/fio
00:01:21.832 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:21.832 + [[ !
-v VFIO_QEMU_BIN ]] 00:01:21.832 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:21.832 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:21.832 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:21.832 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:21.832 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:21.832 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:21.832 + sudo dmesg -Tw 00:01:21.832 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:21.832 09:39:31 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:21.832 09:39:31 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:21.832 09:39:31 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:21.832 09:39:31 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:21.832 09:39:31 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:01:21.832 09:39:31 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:21.832 09:39:31 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:01:21.832 09:39:31 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:01:21.832 09:39:31 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:01:21.832 09:39:31 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:01:21.832 09:39:31 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:01:21.832 09:39:31 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:21.832 09:39:31 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:21.832 09:39:31 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:21.832 09:39:31 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:21.832 09:39:31 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:21.832 09:39:31 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:21.832 09:39:31 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:21.832 09:39:31 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:21.832 09:39:31 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:21.832 09:39:31 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:21.832 09:39:31 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:21.832 09:39:31 -- paths/export.sh@5 -- $ export PATH 00:01:21.833 09:39:31 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:21.833 09:39:31 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:21.833 09:39:31 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:21.833 09:39:31 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733906371.XXXXXX 00:01:21.833 09:39:31 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733906371.NwF9t6 00:01:21.833 09:39:31 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:21.833 09:39:31 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:21.833 09:39:31 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:21.833 09:39:31 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:21.833 09:39:31 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:21.833 09:39:31 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:21.833 09:39:31 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:21.833 09:39:31 -- common/autotest_common.sh@10 -- $ set +x 00:01:22.092 09:39:31 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:22.092 09:39:31 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:22.092 09:39:31 -- pm/common@17 -- $ local monitor 00:01:22.092 09:39:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:22.092 09:39:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:22.092 09:39:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:22.092 09:39:31 -- pm/common@21 -- $ date +%s 00:01:22.092 09:39:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:22.092 09:39:31 -- pm/common@21 -- $ date +%s 00:01:22.092 09:39:31 -- pm/common@25 -- $ sleep 1 00:01:22.092 09:39:31 -- pm/common@21 -- $ date +%s 00:01:22.092 09:39:31 -- pm/common@21 -- $ date +%s 00:01:22.092 09:39:31 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733906371 00:01:22.092 09:39:31 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733906371 00:01:22.092 09:39:31 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733906371 00:01:22.092 09:39:31 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733906371 00:01:22.092 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733906371_collect-cpu-temp.pm.log 00:01:22.092 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733906371_collect-vmstat.pm.log 00:01:22.092 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733906371_collect-cpu-load.pm.log 00:01:22.092 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733906371_collect-bmc-pm.bmc.pm.log 00:01:23.030 09:39:32 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:23.030 09:39:32 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:23.030 09:39:32 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:23.030 09:39:32 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:23.030 09:39:32 -- spdk/autobuild.sh@16 -- $ date -u 00:01:23.030 Wed Dec 11 08:39:32 AM UTC 2024 00:01:23.030 09:39:32 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:23.030 v25.01-pre-332-g7e2e68263 00:01:23.030 09:39:32 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:23.030 09:39:32 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:23.030 09:39:32 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:23.030 09:39:32 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:23.030 09:39:32 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:23.030 09:39:32 -- common/autotest_common.sh@10 -- $ set +x 00:01:23.030 ************************************ 00:01:23.030 START TEST ubsan 00:01:23.030 ************************************ 00:01:23.030 09:39:32 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:23.030 using ubsan 00:01:23.030 00:01:23.030 real 0m0.000s 00:01:23.030 user 0m0.000s 00:01:23.030 sys 0m0.000s 00:01:23.030 09:39:32 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:23.030 09:39:32 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:23.030 ************************************ 00:01:23.030 END TEST ubsan 00:01:23.030 ************************************ 00:01:23.030 09:39:32 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:23.030 09:39:32 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:23.030 09:39:32 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:23.030 09:39:32 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:23.030 09:39:32 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:23.030 09:39:32 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:23.030 09:39:32 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:23.030 09:39:32 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:23.030 
09:39:32 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:23.289 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:23.289 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:23.548 Using 'verbs' RDMA provider 00:01:36.700 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:48.973 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:48.973 Creating mk/config.mk...done. 00:01:48.973 Creating mk/cc.flags.mk...done. 00:01:48.973 Type 'make' to build. 00:01:48.973 09:39:57 -- spdk/autobuild.sh@70 -- $ run_test make make -j96 00:01:48.973 09:39:57 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:48.973 09:39:57 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:48.973 09:39:57 -- common/autotest_common.sh@10 -- $ set +x 00:01:48.973 ************************************ 00:01:48.973 START TEST make 00:01:48.973 ************************************ 00:01:48.973 09:39:57 make -- common/autotest_common.sh@1129 -- $ make -j96 00:01:50.361 The Meson build system 00:01:50.361 Version: 1.5.0 00:01:50.361 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:50.361 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:50.361 Build type: native build 00:01:50.361 Project name: libvfio-user 00:01:50.361 Project version: 0.0.1 00:01:50.361 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:50.361 C linker for the host machine: cc ld.bfd 2.40-14 00:01:50.361 Host machine cpu family: x86_64 00:01:50.361 Host machine cpu: x86_64 00:01:50.361 Run-time dependency threads found: YES 00:01:50.361 Library dl found: YES 00:01:50.361 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:50.361 Run-time dependency json-c found: YES 0.17 00:01:50.361 Run-time dependency cmocka found: YES 1.1.7 00:01:50.361 Program pytest-3 found: NO 00:01:50.361 Program flake8 found: NO 00:01:50.361 Program misspell-fixer found: NO 00:01:50.361 Program restructuredtext-lint found: NO 00:01:50.361 Program valgrind found: YES (/usr/bin/valgrind) 00:01:50.361 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:50.361 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:50.361 Compiler for C supports arguments -Wwrite-strings: YES 00:01:50.361 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:50.361 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:50.361 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:50.361 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:50.361 Build targets in project: 8 00:01:50.361 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:50.361 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:50.361 00:01:50.361 libvfio-user 0.0.1 00:01:50.361 00:01:50.361 User defined options 00:01:50.361 buildtype : debug 00:01:50.361 default_library: shared 00:01:50.361 libdir : /usr/local/lib 00:01:50.361 00:01:50.361 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:51.296 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:51.296 [1/37] Compiling C object samples/null.p/null.c.o 00:01:51.296 [2/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:51.296 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:51.296 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:51.296 [5/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:51.296 [6/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:51.296 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:51.296 [8/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:51.296 [9/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:51.296 [10/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:51.296 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:51.296 [12/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:51.296 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:51.296 [14/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:51.296 [15/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:51.296 [16/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:51.296 [17/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:51.296 [18/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:51.296 [19/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:51.296 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:51.296 [21/37] Compiling C object samples/server.p/server.c.o 00:01:51.296 [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:51.296 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:51.296 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:51.296 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:51.296 [26/37] Compiling C object samples/client.p/client.c.o 00:01:51.296 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:51.296 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:51.296 [29/37] Linking target samples/client 00:01:51.296 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:01:51.296 [31/37] Linking target test/unit_tests 00:01:51.554 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:51.554 [33/37] Linking target samples/null 00:01:51.554 [34/37] Linking target samples/lspci 00:01:51.554 [35/37] Linking target samples/gpio-pci-idio-16 00:01:51.554 [36/37] Linking target samples/shadow_ioeventfd_server 00:01:51.554 [37/37] Linking target samples/server 00:01:51.554 INFO: autodetecting backend as ninja 00:01:51.554 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
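The install command that follows stages the freshly built libvfio-user into the SPDK tree via meson's DESTDIR support rather than installing onto the host. A minimal sketch of the same pattern, with paths shortened; since the meson summary above chose libdir /usr/local/lib, the staged library lands under $DESTDIR/usr/local/lib:

#!/usr/bin/env bash
# Stage an out-of-tree meson build into a scratch root instead of /.
builddir=spdk/build/libvfio-user/build-debug   # Build dir from the meson summary above
stage=$(readlink -f spdk/build/libvfio-user)   # DESTDIR should be an absolute path
# meson install honors the DESTDIR environment variable, like "make install" does.
DESTDIR="$stage" meson install --quiet -C "$builddir"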
00:01:51.554 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:52.120 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:52.120 ninja: no work to do. 00:01:57.387 The Meson build system 00:01:57.387 Version: 1.5.0 00:01:57.387 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:57.387 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:57.387 Build type: native build 00:01:57.387 Program cat found: YES (/usr/bin/cat) 00:01:57.387 Project name: DPDK 00:01:57.387 Project version: 24.03.0 00:01:57.387 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:57.387 C linker for the host machine: cc ld.bfd 2.40-14 00:01:57.387 Host machine cpu family: x86_64 00:01:57.387 Host machine cpu: x86_64 00:01:57.387 Message: ## Building in Developer Mode ## 00:01:57.387 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:57.387 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:57.387 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:57.387 Program python3 found: YES (/usr/bin/python3) 00:01:57.387 Program cat found: YES (/usr/bin/cat) 00:01:57.387 Compiler for C supports arguments -march=native: YES 00:01:57.387 Checking for size of "void *" : 8 00:01:57.387 Checking for size of "void *" : 8 (cached) 00:01:57.387 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:01:57.387 Library m found: YES 00:01:57.387 Library numa found: YES 00:01:57.387 Has header "numaif.h" : YES 00:01:57.387 Library fdt found: NO 00:01:57.387 Library execinfo found: NO 00:01:57.387 Has header "execinfo.h" : YES 00:01:57.387 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:57.387 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:57.387 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:57.387 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:57.387 Run-time dependency openssl found: YES 3.1.1 00:01:57.387 Run-time dependency libpcap found: YES 1.10.4 00:01:57.387 Has header "pcap.h" with dependency libpcap: YES 00:01:57.387 Compiler for C supports arguments -Wcast-qual: YES 00:01:57.387 Compiler for C supports arguments -Wdeprecated: YES 00:01:57.387 Compiler for C supports arguments -Wformat: YES 00:01:57.387 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:57.387 Compiler for C supports arguments -Wformat-security: NO 00:01:57.387 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:57.387 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:57.387 Compiler for C supports arguments -Wnested-externs: YES 00:01:57.387 Compiler for C supports arguments -Wold-style-definition: YES 00:01:57.387 Compiler for C supports arguments -Wpointer-arith: YES 00:01:57.387 Compiler for C supports arguments -Wsign-compare: YES 00:01:57.387 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:57.387 Compiler for C supports arguments -Wundef: YES 00:01:57.387 Compiler for C supports arguments -Wwrite-strings: YES 00:01:57.387 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:57.387 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:01:57.387 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:57.387 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:57.387 Program objdump found: YES (/usr/bin/objdump) 00:01:57.387 Compiler for C supports arguments -mavx512f: YES 00:01:57.387 Checking if "AVX512 checking" compiles: YES 00:01:57.387 Fetching value of define "__SSE4_2__" : 1 00:01:57.388 Fetching value of define "__AES__" : 1 00:01:57.388 Fetching value of define "__AVX__" : 1 00:01:57.388 Fetching value of define "__AVX2__" : 1 00:01:57.388 Fetching value of define "__AVX512BW__" : 1 00:01:57.388 Fetching value of define "__AVX512CD__" : 1 00:01:57.388 Fetching value of define "__AVX512DQ__" : 1 00:01:57.388 Fetching value of define "__AVX512F__" : 1 00:01:57.388 Fetching value of define "__AVX512VL__" : 1 00:01:57.388 Fetching value of define "__PCLMUL__" : 1 00:01:57.388 Fetching value of define "__RDRND__" : 1 00:01:57.388 Fetching value of define "__RDSEED__" : 1 00:01:57.388 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:57.388 Fetching value of define "__znver1__" : (undefined) 00:01:57.388 Fetching value of define "__znver2__" : (undefined) 00:01:57.388 Fetching value of define "__znver3__" : (undefined) 00:01:57.388 Fetching value of define "__znver4__" : (undefined) 00:01:57.388 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:57.388 Message: lib/log: Defining dependency "log" 00:01:57.388 Message: lib/kvargs: Defining dependency "kvargs" 00:01:57.388 Message: lib/telemetry: Defining dependency "telemetry" 00:01:57.388 Checking for function "getentropy" : NO 00:01:57.388 Message: lib/eal: Defining dependency "eal" 00:01:57.388 Message: lib/ring: Defining dependency "ring" 00:01:57.388 Message: lib/rcu: Defining dependency "rcu" 00:01:57.388 Message: lib/mempool: Defining dependency "mempool" 00:01:57.388 Message: lib/mbuf: Defining dependency "mbuf" 00:01:57.388 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:57.388 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:57.388 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:57.388 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:57.388 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:57.388 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:57.388 Compiler for C supports arguments -mpclmul: YES 00:01:57.388 Compiler for C supports arguments -maes: YES 00:01:57.388 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:57.388 Compiler for C supports arguments -mavx512bw: YES 00:01:57.388 Compiler for C supports arguments -mavx512dq: YES 00:01:57.388 Compiler for C supports arguments -mavx512vl: YES 00:01:57.388 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:57.388 Compiler for C supports arguments -mavx2: YES 00:01:57.388 Compiler for C supports arguments -mavx: YES 00:01:57.388 Message: lib/net: Defining dependency "net" 00:01:57.388 Message: lib/meter: Defining dependency "meter" 00:01:57.388 Message: lib/ethdev: Defining dependency "ethdev" 00:01:57.388 Message: lib/pci: Defining dependency "pci" 00:01:57.388 Message: lib/cmdline: Defining dependency "cmdline" 00:01:57.388 Message: lib/hash: Defining dependency "hash" 00:01:57.388 Message: lib/timer: Defining dependency "timer" 00:01:57.388 Message: lib/compressdev: Defining dependency "compressdev" 00:01:57.388 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:57.388 Message: lib/dmadev: Defining dependency 
"dmadev" 00:01:57.388 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:57.388 Message: lib/power: Defining dependency "power" 00:01:57.388 Message: lib/reorder: Defining dependency "reorder" 00:01:57.388 Message: lib/security: Defining dependency "security" 00:01:57.388 Has header "linux/userfaultfd.h" : YES 00:01:57.388 Has header "linux/vduse.h" : YES 00:01:57.388 Message: lib/vhost: Defining dependency "vhost" 00:01:57.388 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:57.388 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:57.388 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:57.388 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:57.388 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:57.388 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:57.388 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:57.388 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:57.388 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:57.388 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:57.388 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:57.388 Configuring doxy-api-html.conf using configuration 00:01:57.388 Configuring doxy-api-man.conf using configuration 00:01:57.388 Program mandb found: YES (/usr/bin/mandb) 00:01:57.388 Program sphinx-build found: NO 00:01:57.388 Configuring rte_build_config.h using configuration 00:01:57.388 Message: 00:01:57.388 ================= 00:01:57.388 Applications Enabled 00:01:57.388 ================= 00:01:57.388 00:01:57.388 apps: 00:01:57.388 00:01:57.388 00:01:57.388 Message: 00:01:57.388 ================= 00:01:57.388 Libraries Enabled 00:01:57.388 ================= 00:01:57.388 00:01:57.388 libs: 00:01:57.388 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:57.388 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:57.388 cryptodev, dmadev, power, reorder, security, vhost, 00:01:57.388 00:01:57.388 Message: 00:01:57.388 =============== 00:01:57.388 Drivers Enabled 00:01:57.388 =============== 00:01:57.388 00:01:57.388 common: 00:01:57.388 00:01:57.388 bus: 00:01:57.388 pci, vdev, 00:01:57.388 mempool: 00:01:57.388 ring, 00:01:57.388 dma: 00:01:57.388 00:01:57.388 net: 00:01:57.388 00:01:57.388 crypto: 00:01:57.388 00:01:57.388 compress: 00:01:57.388 00:01:57.388 vdpa: 00:01:57.388 00:01:57.388 00:01:57.388 Message: 00:01:57.388 ================= 00:01:57.388 Content Skipped 00:01:57.388 ================= 00:01:57.388 00:01:57.388 apps: 00:01:57.388 dumpcap: explicitly disabled via build config 00:01:57.388 graph: explicitly disabled via build config 00:01:57.388 pdump: explicitly disabled via build config 00:01:57.388 proc-info: explicitly disabled via build config 00:01:57.388 test-acl: explicitly disabled via build config 00:01:57.388 test-bbdev: explicitly disabled via build config 00:01:57.388 test-cmdline: explicitly disabled via build config 00:01:57.388 test-compress-perf: explicitly disabled via build config 00:01:57.388 test-crypto-perf: explicitly disabled via build config 00:01:57.389 test-dma-perf: explicitly disabled via build config 00:01:57.389 test-eventdev: explicitly disabled via build config 00:01:57.389 test-fib: explicitly disabled via build config 00:01:57.389 test-flow-perf: explicitly disabled via build config 00:01:57.389 test-gpudev: explicitly 
disabled via build config 00:01:57.389 test-mldev: explicitly disabled via build config 00:01:57.389 test-pipeline: explicitly disabled via build config 00:01:57.389 test-pmd: explicitly disabled via build config 00:01:57.389 test-regex: explicitly disabled via build config 00:01:57.389 test-sad: explicitly disabled via build config 00:01:57.389 test-security-perf: explicitly disabled via build config 00:01:57.389 00:01:57.389 libs: 00:01:57.389 argparse: explicitly disabled via build config 00:01:57.389 metrics: explicitly disabled via build config 00:01:57.389 acl: explicitly disabled via build config 00:01:57.389 bbdev: explicitly disabled via build config 00:01:57.389 bitratestats: explicitly disabled via build config 00:01:57.389 bpf: explicitly disabled via build config 00:01:57.389 cfgfile: explicitly disabled via build config 00:01:57.389 distributor: explicitly disabled via build config 00:01:57.389 efd: explicitly disabled via build config 00:01:57.389 eventdev: explicitly disabled via build config 00:01:57.389 dispatcher: explicitly disabled via build config 00:01:57.389 gpudev: explicitly disabled via build config 00:01:57.389 gro: explicitly disabled via build config 00:01:57.389 gso: explicitly disabled via build config 00:01:57.389 ip_frag: explicitly disabled via build config 00:01:57.389 jobstats: explicitly disabled via build config 00:01:57.389 latencystats: explicitly disabled via build config 00:01:57.389 lpm: explicitly disabled via build config 00:01:57.389 member: explicitly disabled via build config 00:01:57.389 pcapng: explicitly disabled via build config 00:01:57.389 rawdev: explicitly disabled via build config 00:01:57.389 regexdev: explicitly disabled via build config 00:01:57.389 mldev: explicitly disabled via build config 00:01:57.389 rib: explicitly disabled via build config 00:01:57.389 sched: explicitly disabled via build config 00:01:57.389 stack: explicitly disabled via build config 00:01:57.389 ipsec: explicitly disabled via build config 00:01:57.389 pdcp: explicitly disabled via build config 00:01:57.389 fib: explicitly disabled via build config 00:01:57.389 port: explicitly disabled via build config 00:01:57.389 pdump: explicitly disabled via build config 00:01:57.389 table: explicitly disabled via build config 00:01:57.389 pipeline: explicitly disabled via build config 00:01:57.389 graph: explicitly disabled via build config 00:01:57.389 node: explicitly disabled via build config 00:01:57.389 00:01:57.389 drivers: 00:01:57.389 common/cpt: not in enabled drivers build config 00:01:57.389 common/dpaax: not in enabled drivers build config 00:01:57.389 common/iavf: not in enabled drivers build config 00:01:57.389 common/idpf: not in enabled drivers build config 00:01:57.389 common/ionic: not in enabled drivers build config 00:01:57.389 common/mvep: not in enabled drivers build config 00:01:57.389 common/octeontx: not in enabled drivers build config 00:01:57.389 bus/auxiliary: not in enabled drivers build config 00:01:57.389 bus/cdx: not in enabled drivers build config 00:01:57.389 bus/dpaa: not in enabled drivers build config 00:01:57.389 bus/fslmc: not in enabled drivers build config 00:01:57.389 bus/ifpga: not in enabled drivers build config 00:01:57.389 bus/platform: not in enabled drivers build config 00:01:57.389 bus/uacce: not in enabled drivers build config 00:01:57.389 bus/vmbus: not in enabled drivers build config 00:01:57.389 common/cnxk: not in enabled drivers build config 00:01:57.389 common/mlx5: not in enabled drivers build config 
00:01:57.389 common/nfp: not in enabled drivers build config 00:01:57.389 common/nitrox: not in enabled drivers build config 00:01:57.389 common/qat: not in enabled drivers build config 00:01:57.389 common/sfc_efx: not in enabled drivers build config 00:01:57.389 mempool/bucket: not in enabled drivers build config 00:01:57.389 mempool/cnxk: not in enabled drivers build config 00:01:57.389 mempool/dpaa: not in enabled drivers build config 00:01:57.389 mempool/dpaa2: not in enabled drivers build config 00:01:57.389 mempool/octeontx: not in enabled drivers build config 00:01:57.389 mempool/stack: not in enabled drivers build config 00:01:57.389 dma/cnxk: not in enabled drivers build config 00:01:57.389 dma/dpaa: not in enabled drivers build config 00:01:57.389 dma/dpaa2: not in enabled drivers build config 00:01:57.389 dma/hisilicon: not in enabled drivers build config 00:01:57.389 dma/idxd: not in enabled drivers build config 00:01:57.389 dma/ioat: not in enabled drivers build config 00:01:57.389 dma/skeleton: not in enabled drivers build config 00:01:57.389 net/af_packet: not in enabled drivers build config 00:01:57.389 net/af_xdp: not in enabled drivers build config 00:01:57.389 net/ark: not in enabled drivers build config 00:01:57.389 net/atlantic: not in enabled drivers build config 00:01:57.389 net/avp: not in enabled drivers build config 00:01:57.389 net/axgbe: not in enabled drivers build config 00:01:57.389 net/bnx2x: not in enabled drivers build config 00:01:57.389 net/bnxt: not in enabled drivers build config 00:01:57.389 net/bonding: not in enabled drivers build config 00:01:57.389 net/cnxk: not in enabled drivers build config 00:01:57.389 net/cpfl: not in enabled drivers build config 00:01:57.389 net/cxgbe: not in enabled drivers build config 00:01:57.389 net/dpaa: not in enabled drivers build config 00:01:57.389 net/dpaa2: not in enabled drivers build config 00:01:57.389 net/e1000: not in enabled drivers build config 00:01:57.389 net/ena: not in enabled drivers build config 00:01:57.389 net/enetc: not in enabled drivers build config 00:01:57.389 net/enetfec: not in enabled drivers build config 00:01:57.389 net/enic: not in enabled drivers build config 00:01:57.389 net/failsafe: not in enabled drivers build config 00:01:57.389 net/fm10k: not in enabled drivers build config 00:01:57.389 net/gve: not in enabled drivers build config 00:01:57.389 net/hinic: not in enabled drivers build config 00:01:57.389 net/hns3: not in enabled drivers build config 00:01:57.389 net/i40e: not in enabled drivers build config 00:01:57.389 net/iavf: not in enabled drivers build config 00:01:57.389 net/ice: not in enabled drivers build config 00:01:57.389 net/idpf: not in enabled drivers build config 00:01:57.389 net/igc: not in enabled drivers build config 00:01:57.390 net/ionic: not in enabled drivers build config 00:01:57.390 net/ipn3ke: not in enabled drivers build config 00:01:57.390 net/ixgbe: not in enabled drivers build config 00:01:57.390 net/mana: not in enabled drivers build config 00:01:57.390 net/memif: not in enabled drivers build config 00:01:57.390 net/mlx4: not in enabled drivers build config 00:01:57.390 net/mlx5: not in enabled drivers build config 00:01:57.390 net/mvneta: not in enabled drivers build config 00:01:57.390 net/mvpp2: not in enabled drivers build config 00:01:57.390 net/netvsc: not in enabled drivers build config 00:01:57.390 net/nfb: not in enabled drivers build config 00:01:57.390 net/nfp: not in enabled drivers build config 00:01:57.390 net/ngbe: not in enabled 
drivers build config 00:01:57.390 net/null: not in enabled drivers build config 00:01:57.390 net/octeontx: not in enabled drivers build config 00:01:57.390 net/octeon_ep: not in enabled drivers build config 00:01:57.390 net/pcap: not in enabled drivers build config 00:01:57.390 net/pfe: not in enabled drivers build config 00:01:57.390 net/qede: not in enabled drivers build config 00:01:57.390 net/ring: not in enabled drivers build config 00:01:57.390 net/sfc: not in enabled drivers build config 00:01:57.390 net/softnic: not in enabled drivers build config 00:01:57.390 net/tap: not in enabled drivers build config 00:01:57.390 net/thunderx: not in enabled drivers build config 00:01:57.390 net/txgbe: not in enabled drivers build config 00:01:57.390 net/vdev_netvsc: not in enabled drivers build config 00:01:57.390 net/vhost: not in enabled drivers build config 00:01:57.390 net/virtio: not in enabled drivers build config 00:01:57.390 net/vmxnet3: not in enabled drivers build config 00:01:57.390 raw/*: missing internal dependency, "rawdev" 00:01:57.390 crypto/armv8: not in enabled drivers build config 00:01:57.390 crypto/bcmfs: not in enabled drivers build config 00:01:57.390 crypto/caam_jr: not in enabled drivers build config 00:01:57.390 crypto/ccp: not in enabled drivers build config 00:01:57.390 crypto/cnxk: not in enabled drivers build config 00:01:57.390 crypto/dpaa_sec: not in enabled drivers build config 00:01:57.390 crypto/dpaa2_sec: not in enabled drivers build config 00:01:57.390 crypto/ipsec_mb: not in enabled drivers build config 00:01:57.390 crypto/mlx5: not in enabled drivers build config 00:01:57.390 crypto/mvsam: not in enabled drivers build config 00:01:57.390 crypto/nitrox: not in enabled drivers build config 00:01:57.390 crypto/null: not in enabled drivers build config 00:01:57.390 crypto/octeontx: not in enabled drivers build config 00:01:57.390 crypto/openssl: not in enabled drivers build config 00:01:57.390 crypto/scheduler: not in enabled drivers build config 00:01:57.390 crypto/uadk: not in enabled drivers build config 00:01:57.390 crypto/virtio: not in enabled drivers build config 00:01:57.390 compress/isal: not in enabled drivers build config 00:01:57.390 compress/mlx5: not in enabled drivers build config 00:01:57.390 compress/nitrox: not in enabled drivers build config 00:01:57.390 compress/octeontx: not in enabled drivers build config 00:01:57.390 compress/zlib: not in enabled drivers build config 00:01:57.390 regex/*: missing internal dependency, "regexdev" 00:01:57.390 ml/*: missing internal dependency, "mldev" 00:01:57.390 vdpa/ifc: not in enabled drivers build config 00:01:57.390 vdpa/mlx5: not in enabled drivers build config 00:01:57.390 vdpa/nfp: not in enabled drivers build config 00:01:57.390 vdpa/sfc: not in enabled drivers build config 00:01:57.390 event/*: missing internal dependency, "eventdev" 00:01:57.390 baseband/*: missing internal dependency, "bbdev" 00:01:57.390 gpu/*: missing internal dependency, "gpudev" 00:01:57.390 00:01:57.390 00:01:57.390 Build targets in project: 85 00:01:57.390 00:01:57.390 DPDK 24.03.0 00:01:57.390 00:01:57.390 User defined options 00:01:57.390 buildtype : debug 00:01:57.390 default_library : shared 00:01:57.390 libdir : lib 00:01:57.390 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:57.390 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:57.390 c_link_args : 00:01:57.390 cpu_instruction_set: native 00:01:57.390 disable_apps : 
test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:01:57.390 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:01:57.390 enable_docs : false 00:01:57.390 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:01:57.390 enable_kmods : false 00:01:57.390 max_lcores : 128 00:01:57.390 tests : false 00:01:57.390 00:01:57.390 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:57.656 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:57.656 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:57.656 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:57.656 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:57.656 [4/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:57.656 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:57.919 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:57.919 [7/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:57.919 [8/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:57.919 [9/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:57.919 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:57.919 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:57.919 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:57.919 [13/268] Linking static target lib/librte_kvargs.a 00:01:57.919 [14/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:57.919 [15/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:57.919 [16/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:57.919 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:57.919 [18/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:57.919 [19/268] Linking static target lib/librte_log.a 00:01:57.919 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:57.919 [21/268] Linking static target lib/librte_pci.a 00:01:57.919 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:57.919 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:58.242 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:58.242 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:58.242 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:58.242 [27/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:58.242 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:58.242 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:58.242 [30/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:58.242 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:58.242 [32/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:58.242 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:58.242 [34/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:58.242 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:58.242 [36/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:58.242 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:58.242 [38/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:58.242 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:58.242 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:58.242 [41/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:58.242 [42/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:58.242 [43/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:58.242 [44/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:58.242 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:58.242 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:58.242 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:58.242 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:58.242 [49/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:58.242 [50/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:58.242 [51/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:58.242 [52/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:58.242 [53/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:58.242 [54/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:58.242 [55/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:58.242 [56/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:58.242 [57/268] Linking static target lib/librte_meter.a 00:01:58.242 [58/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:58.242 [59/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:58.503 [60/268] Linking static target lib/librte_telemetry.a 00:01:58.503 [61/268] Linking static target lib/librte_ring.a 00:01:58.503 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:58.503 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:58.503 [64/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:58.503 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:58.503 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:58.503 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:58.503 [68/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:58.503 [69/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.503 [70/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:58.503 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:58.503 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:58.503 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:58.503 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:58.503 [75/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:58.503 [76/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:58.503 [77/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:58.503 [78/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:58.503 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:58.503 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:58.503 [81/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:58.503 [82/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:58.503 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:58.503 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:58.503 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:58.503 [86/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:58.503 [87/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:58.503 [88/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:58.503 [89/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:58.503 [90/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:58.503 [91/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:58.503 [92/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:58.503 [93/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:58.503 [94/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:58.503 [95/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:58.503 [96/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:58.503 [97/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:58.503 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:58.503 [99/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:58.503 [100/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:58.503 [101/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:58.503 [102/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:58.503 [103/268] Linking static target lib/librte_rcu.a 00:01:58.503 [104/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:58.503 [105/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:58.503 [106/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:58.503 [107/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:58.503 [108/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:58.503 [109/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:58.504 [110/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:58.504 
[111/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.504 [112/268] Linking static target lib/librte_mempool.a 00:01:58.504 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:58.504 [114/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:58.504 [115/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:58.504 [116/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:58.504 [117/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:58.504 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:58.504 [119/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:58.504 [120/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:58.504 [121/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:58.504 [122/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:58.504 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:58.504 [124/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:58.504 [125/268] Linking static target lib/librte_net.a 00:01:58.504 [126/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:58.762 [127/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.762 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:58.762 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:58.762 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:58.762 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:58.762 [132/268] Linking static target lib/librte_cmdline.a 00:01:58.762 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:58.762 [134/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:58.762 [135/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:58.762 [136/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:58.762 [137/268] Linking static target lib/librte_eal.a 00:01:58.762 [138/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.762 [139/268] Linking static target lib/librte_timer.a 00:01:58.762 [140/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.762 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:58.762 [142/268] Linking static target lib/librte_mbuf.a 00:01:58.762 [143/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:58.762 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:58.762 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:58.762 [146/268] Linking target lib/librte_log.so.24.1 00:01:58.762 [147/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.762 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:58.762 [149/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:58.762 [150/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:58.762 [151/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:58.762 
[152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:58.762 [153/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:58.762 [154/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:58.762 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:58.762 [156/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:58.762 [157/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:58.762 [158/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:58.762 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:58.762 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:59.022 [161/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.022 [162/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:59.022 [163/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:59.022 [164/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.022 [165/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:59.022 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:59.022 [167/268] Linking static target lib/librte_dmadev.a 00:01:59.022 [168/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:59.022 [169/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:59.022 [170/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:59.022 [171/268] Linking target lib/librte_kvargs.so.24.1 00:01:59.022 [172/268] Linking target lib/librte_telemetry.so.24.1 00:01:59.022 [173/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:59.022 [174/268] Linking static target lib/librte_compressdev.a 00:01:59.022 [175/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:59.022 [176/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:59.022 [177/268] Linking static target lib/librte_power.a 00:01:59.022 [178/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:59.022 [179/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:59.022 [180/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:59.022 [181/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:59.022 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:59.022 [183/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:59.022 [184/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:59.022 [185/268] Linking static target lib/librte_security.a 00:01:59.022 [186/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:59.022 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:59.022 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:59.022 [189/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:59.022 [190/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:59.022 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:59.022 
[192/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:59.022 [193/268] Linking static target lib/librte_reorder.a 00:01:59.022 [194/268] Linking static target lib/librte_hash.a 00:01:59.022 [195/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:59.022 [196/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:59.022 [197/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:59.022 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:59.281 [199/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.281 [200/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:59.281 [201/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:59.281 [202/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:59.281 [203/268] Linking static target drivers/librte_bus_vdev.a 00:01:59.281 [204/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:59.281 [205/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:59.281 [206/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:59.281 [207/268] Linking static target drivers/librte_bus_pci.a 00:01:59.281 [208/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.281 [209/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:59.281 [210/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:59.281 [211/268] Linking static target lib/librte_cryptodev.a 00:01:59.281 [212/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:59.281 [213/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:59.281 [214/268] Linking static target drivers/librte_mempool_ring.a 00:01:59.540 [215/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.540 [216/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.540 [217/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.540 [218/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.540 [219/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.799 [220/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.799 [221/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:59.799 [222/268] Linking static target lib/librte_ethdev.a 00:01:59.799 [223/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.799 [224/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:59.799 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.057 [226/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.057 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.992 [228/268] Compiling C object 
lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:00.992 [229/268] Linking static target lib/librte_vhost.a 00:02:01.250 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.153 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.432 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.691 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.950 [234/268] Linking target lib/librte_eal.so.24.1 00:02:08.950 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:08.950 [236/268] Linking target lib/librte_meter.so.24.1 00:02:08.950 [237/268] Linking target lib/librte_ring.so.24.1 00:02:08.950 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:08.950 [239/268] Linking target lib/librte_pci.so.24.1 00:02:08.950 [240/268] Linking target lib/librte_dmadev.so.24.1 00:02:08.950 [241/268] Linking target lib/librte_timer.so.24.1 00:02:09.209 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:09.209 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:09.209 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:09.209 [245/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:09.209 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:09.209 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:09.209 [248/268] Linking target lib/librte_mempool.so.24.1 00:02:09.209 [249/268] Linking target lib/librte_rcu.so.24.1 00:02:09.209 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:09.467 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:09.467 [252/268] Linking target lib/librte_mbuf.so.24.1 00:02:09.467 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:09.467 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:09.467 [255/268] Linking target lib/librte_reorder.so.24.1 00:02:09.467 [256/268] Linking target lib/librte_net.so.24.1 00:02:09.467 [257/268] Linking target lib/librte_compressdev.so.24.1 00:02:09.467 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:09.726 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:09.726 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:09.726 [261/268] Linking target lib/librte_security.so.24.1 00:02:09.726 [262/268] Linking target lib/librte_hash.so.24.1 00:02:09.726 [263/268] Linking target lib/librte_cmdline.so.24.1 00:02:09.726 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:09.985 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:09.985 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:09.985 [267/268] Linking target lib/librte_power.so.24.1 00:02:09.985 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:09.985 INFO: autodetecting backend as ninja 00:02:09.985 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:02:19.965 CC lib/log/log.o 00:02:19.965 CC 
lib/log/log_flags.o 00:02:19.965 CC lib/log/log_deprecated.o 00:02:19.965 CC lib/ut_mock/mock.o 00:02:19.965 CC lib/ut/ut.o 00:02:19.965 LIB libspdk_log.a 00:02:19.965 LIB libspdk_ut.a 00:02:19.965 LIB libspdk_ut_mock.a 00:02:19.965 SO libspdk_ut.so.2.0 00:02:19.965 SO libspdk_log.so.7.1 00:02:19.965 SO libspdk_ut_mock.so.6.0 00:02:19.965 SYMLINK libspdk_ut.so 00:02:19.965 SYMLINK libspdk_ut_mock.so 00:02:19.965 SYMLINK libspdk_log.so 00:02:20.533 CC lib/dma/dma.o 00:02:20.533 CC lib/ioat/ioat.o 00:02:20.533 CC lib/util/base64.o 00:02:20.533 CC lib/util/bit_array.o 00:02:20.533 CC lib/util/cpuset.o 00:02:20.533 CC lib/util/crc16.o 00:02:20.533 CXX lib/trace_parser/trace.o 00:02:20.533 CC lib/util/crc32.o 00:02:20.533 CC lib/util/crc32c.o 00:02:20.533 CC lib/util/crc32_ieee.o 00:02:20.533 CC lib/util/crc64.o 00:02:20.533 CC lib/util/dif.o 00:02:20.533 CC lib/util/fd.o 00:02:20.533 CC lib/util/fd_group.o 00:02:20.533 CC lib/util/file.o 00:02:20.533 CC lib/util/hexlify.o 00:02:20.533 CC lib/util/iov.o 00:02:20.533 CC lib/util/math.o 00:02:20.533 CC lib/util/net.o 00:02:20.533 CC lib/util/pipe.o 00:02:20.533 CC lib/util/strerror_tls.o 00:02:20.533 CC lib/util/string.o 00:02:20.533 CC lib/util/uuid.o 00:02:20.533 CC lib/util/xor.o 00:02:20.533 CC lib/util/zipf.o 00:02:20.533 CC lib/util/md5.o 00:02:20.533 CC lib/vfio_user/host/vfio_user.o 00:02:20.533 CC lib/vfio_user/host/vfio_user_pci.o 00:02:20.792 LIB libspdk_dma.a 00:02:20.792 SO libspdk_dma.so.5.0 00:02:20.792 LIB libspdk_ioat.a 00:02:20.792 SYMLINK libspdk_dma.so 00:02:20.792 SO libspdk_ioat.so.7.0 00:02:20.792 SYMLINK libspdk_ioat.so 00:02:20.792 LIB libspdk_vfio_user.a 00:02:20.792 SO libspdk_vfio_user.so.5.0 00:02:21.051 SYMLINK libspdk_vfio_user.so 00:02:21.051 LIB libspdk_util.a 00:02:21.052 SO libspdk_util.so.10.1 00:02:21.052 SYMLINK libspdk_util.so 00:02:21.052 LIB libspdk_trace_parser.a 00:02:21.311 SO libspdk_trace_parser.so.6.0 00:02:21.311 SYMLINK libspdk_trace_parser.so 00:02:21.569 CC lib/conf/conf.o 00:02:21.569 CC lib/json/json_parse.o 00:02:21.569 CC lib/json/json_util.o 00:02:21.569 CC lib/env_dpdk/env.o 00:02:21.569 CC lib/json/json_write.o 00:02:21.569 CC lib/env_dpdk/memory.o 00:02:21.569 CC lib/vmd/vmd.o 00:02:21.569 CC lib/env_dpdk/pci.o 00:02:21.569 CC lib/rdma_utils/rdma_utils.o 00:02:21.569 CC lib/vmd/led.o 00:02:21.569 CC lib/env_dpdk/init.o 00:02:21.569 CC lib/env_dpdk/threads.o 00:02:21.569 CC lib/env_dpdk/pci_ioat.o 00:02:21.569 CC lib/env_dpdk/pci_virtio.o 00:02:21.569 CC lib/env_dpdk/pci_vmd.o 00:02:21.569 CC lib/env_dpdk/pci_idxd.o 00:02:21.569 CC lib/env_dpdk/pci_event.o 00:02:21.569 CC lib/idxd/idxd.o 00:02:21.569 CC lib/env_dpdk/sigbus_handler.o 00:02:21.569 CC lib/idxd/idxd_user.o 00:02:21.569 CC lib/env_dpdk/pci_dpdk.o 00:02:21.569 CC lib/idxd/idxd_kernel.o 00:02:21.569 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:21.569 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:21.828 LIB libspdk_rdma_utils.a 00:02:21.828 LIB libspdk_conf.a 00:02:21.828 SO libspdk_rdma_utils.so.1.0 00:02:21.828 SO libspdk_conf.so.6.0 00:02:21.828 LIB libspdk_json.a 00:02:21.828 SYMLINK libspdk_conf.so 00:02:21.828 SYMLINK libspdk_rdma_utils.so 00:02:21.828 SO libspdk_json.so.6.0 00:02:21.828 SYMLINK libspdk_json.so 00:02:22.087 LIB libspdk_idxd.a 00:02:22.087 LIB libspdk_vmd.a 00:02:22.087 SO libspdk_idxd.so.12.1 00:02:22.087 SO libspdk_vmd.so.6.0 00:02:22.087 SYMLINK libspdk_idxd.so 00:02:22.087 SYMLINK libspdk_vmd.so 00:02:22.087 CC lib/rdma_provider/common.o 00:02:22.087 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:22.087 CC 
lib/jsonrpc/jsonrpc_server.o 00:02:22.087 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:22.087 CC lib/jsonrpc/jsonrpc_client.o 00:02:22.087 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:22.346 LIB libspdk_rdma_provider.a 00:02:22.346 SO libspdk_rdma_provider.so.7.0 00:02:22.346 LIB libspdk_jsonrpc.a 00:02:22.346 SYMLINK libspdk_rdma_provider.so 00:02:22.346 SO libspdk_jsonrpc.so.6.0 00:02:22.604 SYMLINK libspdk_jsonrpc.so 00:02:22.604 LIB libspdk_env_dpdk.a 00:02:22.604 SO libspdk_env_dpdk.so.15.1 00:02:22.604 SYMLINK libspdk_env_dpdk.so 00:02:22.863 CC lib/rpc/rpc.o 00:02:23.122 LIB libspdk_rpc.a 00:02:23.122 SO libspdk_rpc.so.6.0 00:02:23.122 SYMLINK libspdk_rpc.so 00:02:23.382 CC lib/notify/notify.o 00:02:23.382 CC lib/keyring/keyring.o 00:02:23.382 CC lib/notify/notify_rpc.o 00:02:23.382 CC lib/keyring/keyring_rpc.o 00:02:23.382 CC lib/trace/trace.o 00:02:23.382 CC lib/trace/trace_flags.o 00:02:23.382 CC lib/trace/trace_rpc.o 00:02:23.641 LIB libspdk_notify.a 00:02:23.641 SO libspdk_notify.so.6.0 00:02:23.641 LIB libspdk_keyring.a 00:02:23.641 LIB libspdk_trace.a 00:02:23.641 SO libspdk_keyring.so.2.0 00:02:23.641 SYMLINK libspdk_notify.so 00:02:23.641 SO libspdk_trace.so.11.0 00:02:23.641 SYMLINK libspdk_keyring.so 00:02:23.900 SYMLINK libspdk_trace.so 00:02:24.158 CC lib/thread/thread.o 00:02:24.158 CC lib/thread/iobuf.o 00:02:24.158 CC lib/sock/sock.o 00:02:24.158 CC lib/sock/sock_rpc.o 00:02:24.418 LIB libspdk_sock.a 00:02:24.418 SO libspdk_sock.so.10.0 00:02:24.676 SYMLINK libspdk_sock.so 00:02:24.935 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:24.935 CC lib/nvme/nvme_ctrlr.o 00:02:24.935 CC lib/nvme/nvme_fabric.o 00:02:24.935 CC lib/nvme/nvme_ns_cmd.o 00:02:24.935 CC lib/nvme/nvme_ns.o 00:02:24.935 CC lib/nvme/nvme_pcie_common.o 00:02:24.935 CC lib/nvme/nvme_pcie.o 00:02:24.935 CC lib/nvme/nvme_qpair.o 00:02:24.935 CC lib/nvme/nvme.o 00:02:24.935 CC lib/nvme/nvme_quirks.o 00:02:24.935 CC lib/nvme/nvme_transport.o 00:02:24.935 CC lib/nvme/nvme_discovery.o 00:02:24.935 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:24.935 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:24.935 CC lib/nvme/nvme_tcp.o 00:02:24.935 CC lib/nvme/nvme_opal.o 00:02:24.935 CC lib/nvme/nvme_io_msg.o 00:02:24.935 CC lib/nvme/nvme_poll_group.o 00:02:24.935 CC lib/nvme/nvme_zns.o 00:02:24.935 CC lib/nvme/nvme_stubs.o 00:02:24.935 CC lib/nvme/nvme_auth.o 00:02:24.935 CC lib/nvme/nvme_cuse.o 00:02:24.935 CC lib/nvme/nvme_vfio_user.o 00:02:24.935 CC lib/nvme/nvme_rdma.o 00:02:25.194 LIB libspdk_thread.a 00:02:25.194 SO libspdk_thread.so.11.0 00:02:25.452 SYMLINK libspdk_thread.so 00:02:25.718 CC lib/virtio/virtio.o 00:02:25.718 CC lib/virtio/virtio_vhost_user.o 00:02:25.718 CC lib/virtio/virtio_vfio_user.o 00:02:25.718 CC lib/virtio/virtio_pci.o 00:02:25.718 CC lib/accel/accel.o 00:02:25.718 CC lib/accel/accel_rpc.o 00:02:25.718 CC lib/accel/accel_sw.o 00:02:25.718 CC lib/vfu_tgt/tgt_endpoint.o 00:02:25.718 CC lib/vfu_tgt/tgt_rpc.o 00:02:25.718 CC lib/fsdev/fsdev.o 00:02:25.718 CC lib/fsdev/fsdev_io.o 00:02:25.718 CC lib/init/json_config.o 00:02:25.718 CC lib/fsdev/fsdev_rpc.o 00:02:25.718 CC lib/init/subsystem.o 00:02:25.718 CC lib/init/subsystem_rpc.o 00:02:25.718 CC lib/init/rpc.o 00:02:25.718 CC lib/blob/zeroes.o 00:02:25.718 CC lib/blob/blobstore.o 00:02:25.718 CC lib/blob/request.o 00:02:25.718 CC lib/blob/blob_bs_dev.o 00:02:25.977 LIB libspdk_init.a 00:02:25.977 SO libspdk_init.so.6.0 00:02:25.977 LIB libspdk_virtio.a 00:02:25.977 LIB libspdk_vfu_tgt.a 00:02:25.977 SO libspdk_virtio.so.7.0 00:02:25.977 SO libspdk_vfu_tgt.so.3.0 
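
A note on the recurring LIB/SO/SYMLINK triplets through this stretch of the log: each library is linked as a versioned shared object (the "SO libspdk_log.so.7.1" style lines), after which the unversioned development name is published as a symlink (the "SYMLINK libspdk_log.so" lines) so that a plain -lspdk_log link resolves. The following is an illustrative sketch of that convention only, not SPDK's actual build rules; the stub source and soname choice are made up for the example:

    # Link a versioned shared object, then point the unversioned name at it.
    echo 'int spdk_log_stub(void) { return 0; }' > log_stub.c
    cc -shared -fPIC -Wl,-soname,libspdk_log.so.7 \
       -o libspdk_log.so.7.1 log_stub.c          # analogous to "SO libspdk_log.so.7.1"
    ln -sf libspdk_log.so.7.1 libspdk_log.so     # analogous to "SYMLINK libspdk_log.so"
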
00:02:25.977 SYMLINK libspdk_init.so 00:02:25.977 SYMLINK libspdk_virtio.so 00:02:25.977 SYMLINK libspdk_vfu_tgt.so 00:02:26.236 LIB libspdk_fsdev.a 00:02:26.236 SO libspdk_fsdev.so.2.0 00:02:26.236 SYMLINK libspdk_fsdev.so 00:02:26.494 CC lib/event/app.o 00:02:26.494 CC lib/event/reactor.o 00:02:26.494 CC lib/event/log_rpc.o 00:02:26.494 CC lib/event/app_rpc.o 00:02:26.494 CC lib/event/scheduler_static.o 00:02:26.494 LIB libspdk_accel.a 00:02:26.494 SO libspdk_accel.so.16.0 00:02:26.494 LIB libspdk_nvme.a 00:02:26.494 SYMLINK libspdk_accel.so 00:02:26.753 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:26.753 SO libspdk_nvme.so.15.0 00:02:26.753 LIB libspdk_event.a 00:02:26.753 SO libspdk_event.so.14.0 00:02:26.753 SYMLINK libspdk_event.so 00:02:27.012 SYMLINK libspdk_nvme.so 00:02:27.012 CC lib/bdev/bdev.o 00:02:27.012 CC lib/bdev/bdev_rpc.o 00:02:27.012 CC lib/bdev/bdev_zone.o 00:02:27.012 CC lib/bdev/part.o 00:02:27.012 CC lib/bdev/scsi_nvme.o 00:02:27.271 LIB libspdk_fuse_dispatcher.a 00:02:27.271 SO libspdk_fuse_dispatcher.so.1.0 00:02:27.271 SYMLINK libspdk_fuse_dispatcher.so 00:02:27.839 LIB libspdk_blob.a 00:02:27.839 SO libspdk_blob.so.12.0 00:02:28.098 SYMLINK libspdk_blob.so 00:02:28.357 CC lib/blobfs/blobfs.o 00:02:28.357 CC lib/blobfs/tree.o 00:02:28.357 CC lib/lvol/lvol.o 00:02:28.925 LIB libspdk_bdev.a 00:02:28.925 SO libspdk_bdev.so.17.0 00:02:28.925 LIB libspdk_blobfs.a 00:02:28.925 SO libspdk_blobfs.so.11.0 00:02:28.925 SYMLINK libspdk_bdev.so 00:02:28.925 SYMLINK libspdk_blobfs.so 00:02:28.925 LIB libspdk_lvol.a 00:02:28.925 SO libspdk_lvol.so.11.0 00:02:29.184 SYMLINK libspdk_lvol.so 00:02:29.184 CC lib/scsi/dev.o 00:02:29.184 CC lib/scsi/lun.o 00:02:29.184 CC lib/scsi/port.o 00:02:29.184 CC lib/scsi/scsi.o 00:02:29.184 CC lib/scsi/scsi_bdev.o 00:02:29.184 CC lib/scsi/scsi_pr.o 00:02:29.184 CC lib/ublk/ublk.o 00:02:29.184 CC lib/scsi/scsi_rpc.o 00:02:29.184 CC lib/ublk/ublk_rpc.o 00:02:29.184 CC lib/scsi/task.o 00:02:29.184 CC lib/nbd/nbd.o 00:02:29.184 CC lib/nvmf/ctrlr.o 00:02:29.446 CC lib/nbd/nbd_rpc.o 00:02:29.446 CC lib/nvmf/ctrlr_discovery.o 00:02:29.446 CC lib/nvmf/ctrlr_bdev.o 00:02:29.446 CC lib/ftl/ftl_core.o 00:02:29.446 CC lib/nvmf/subsystem.o 00:02:29.446 CC lib/nvmf/nvmf.o 00:02:29.446 CC lib/ftl/ftl_init.o 00:02:29.446 CC lib/nvmf/nvmf_rpc.o 00:02:29.446 CC lib/ftl/ftl_layout.o 00:02:29.446 CC lib/nvmf/transport.o 00:02:29.446 CC lib/ftl/ftl_debug.o 00:02:29.446 CC lib/nvmf/tcp.o 00:02:29.446 CC lib/ftl/ftl_io.o 00:02:29.446 CC lib/nvmf/stubs.o 00:02:29.446 CC lib/ftl/ftl_sb.o 00:02:29.446 CC lib/nvmf/mdns_server.o 00:02:29.446 CC lib/ftl/ftl_l2p.o 00:02:29.446 CC lib/nvmf/vfio_user.o 00:02:29.446 CC lib/nvmf/rdma.o 00:02:29.446 CC lib/ftl/ftl_l2p_flat.o 00:02:29.446 CC lib/nvmf/auth.o 00:02:29.446 CC lib/ftl/ftl_nv_cache.o 00:02:29.446 CC lib/ftl/ftl_band.o 00:02:29.446 CC lib/ftl/ftl_band_ops.o 00:02:29.446 CC lib/ftl/ftl_writer.o 00:02:29.446 CC lib/ftl/ftl_rq.o 00:02:29.446 CC lib/ftl/ftl_l2p_cache.o 00:02:29.446 CC lib/ftl/ftl_reloc.o 00:02:29.446 CC lib/ftl/ftl_p2l.o 00:02:29.446 CC lib/ftl/ftl_p2l_log.o 00:02:29.446 CC lib/ftl/mngt/ftl_mngt.o 00:02:29.446 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:29.446 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:29.446 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:29.446 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:29.446 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:29.446 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:29.446 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:29.446 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:29.446 CC 
lib/ftl/mngt/ftl_mngt_self_test.o 00:02:29.446 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:29.446 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:29.446 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:29.446 CC lib/ftl/utils/ftl_conf.o 00:02:29.446 CC lib/ftl/utils/ftl_mempool.o 00:02:29.446 CC lib/ftl/utils/ftl_md.o 00:02:29.446 CC lib/ftl/utils/ftl_bitmap.o 00:02:29.446 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:29.446 CC lib/ftl/utils/ftl_property.o 00:02:29.446 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:29.446 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:29.446 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:29.446 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:29.446 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:29.446 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:29.446 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:29.446 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:29.446 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:29.446 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:29.446 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:29.446 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:29.446 CC lib/ftl/base/ftl_base_dev.o 00:02:29.446 CC lib/ftl/ftl_trace.o 00:02:29.446 CC lib/ftl/base/ftl_base_bdev.o 00:02:30.033 LIB libspdk_scsi.a 00:02:30.033 LIB libspdk_nbd.a 00:02:30.033 SO libspdk_scsi.so.9.0 00:02:30.033 SO libspdk_nbd.so.7.0 00:02:30.376 SYMLINK libspdk_nbd.so 00:02:30.376 SYMLINK libspdk_scsi.so 00:02:30.376 LIB libspdk_ublk.a 00:02:30.376 SO libspdk_ublk.so.3.0 00:02:30.376 SYMLINK libspdk_ublk.so 00:02:30.376 LIB libspdk_ftl.a 00:02:30.634 CC lib/iscsi/conn.o 00:02:30.634 CC lib/iscsi/init_grp.o 00:02:30.634 CC lib/iscsi/iscsi.o 00:02:30.634 CC lib/iscsi/param.o 00:02:30.634 CC lib/vhost/vhost.o 00:02:30.634 CC lib/iscsi/portal_grp.o 00:02:30.634 CC lib/vhost/vhost_rpc.o 00:02:30.634 CC lib/iscsi/tgt_node.o 00:02:30.634 CC lib/vhost/vhost_scsi.o 00:02:30.634 CC lib/iscsi/iscsi_subsystem.o 00:02:30.634 CC lib/vhost/vhost_blk.o 00:02:30.634 SO libspdk_ftl.so.9.0 00:02:30.634 CC lib/iscsi/iscsi_rpc.o 00:02:30.635 CC lib/vhost/rte_vhost_user.o 00:02:30.635 CC lib/iscsi/task.o 00:02:30.635 SYMLINK libspdk_ftl.so 00:02:31.202 LIB libspdk_nvmf.a 00:02:31.202 SO libspdk_nvmf.so.20.0 00:02:31.202 SYMLINK libspdk_nvmf.so 00:02:31.461 LIB libspdk_vhost.a 00:02:31.461 SO libspdk_vhost.so.8.0 00:02:31.461 SYMLINK libspdk_vhost.so 00:02:31.461 LIB libspdk_iscsi.a 00:02:31.461 SO libspdk_iscsi.so.8.0 00:02:31.720 SYMLINK libspdk_iscsi.so 00:02:32.287 CC module/env_dpdk/env_dpdk_rpc.o 00:02:32.287 CC module/vfu_device/vfu_virtio.o 00:02:32.287 CC module/vfu_device/vfu_virtio_scsi.o 00:02:32.287 CC module/vfu_device/vfu_virtio_blk.o 00:02:32.287 CC module/vfu_device/vfu_virtio_rpc.o 00:02:32.287 CC module/vfu_device/vfu_virtio_fs.o 00:02:32.287 LIB libspdk_env_dpdk_rpc.a 00:02:32.287 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:32.287 CC module/sock/posix/posix.o 00:02:32.287 CC module/fsdev/aio/fsdev_aio.o 00:02:32.287 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:32.287 CC module/fsdev/aio/linux_aio_mgr.o 00:02:32.287 CC module/accel/iaa/accel_iaa.o 00:02:32.287 CC module/accel/iaa/accel_iaa_rpc.o 00:02:32.287 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:32.287 CC module/keyring/linux/keyring.o 00:02:32.287 CC module/keyring/linux/keyring_rpc.o 00:02:32.287 CC module/accel/ioat/accel_ioat.o 00:02:32.287 CC module/accel/ioat/accel_ioat_rpc.o 00:02:32.287 CC module/accel/dsa/accel_dsa.o 00:02:32.287 SO libspdk_env_dpdk_rpc.so.6.0 00:02:32.287 CC module/accel/dsa/accel_dsa_rpc.o 00:02:32.287 CC module/scheduler/gscheduler/gscheduler.o 00:02:32.287 
CC module/accel/error/accel_error.o 00:02:32.287 CC module/accel/error/accel_error_rpc.o 00:02:32.287 CC module/blob/bdev/blob_bdev.o 00:02:32.287 CC module/keyring/file/keyring.o 00:02:32.287 CC module/keyring/file/keyring_rpc.o 00:02:32.545 SYMLINK libspdk_env_dpdk_rpc.so 00:02:32.545 LIB libspdk_keyring_linux.a 00:02:32.545 LIB libspdk_scheduler_gscheduler.a 00:02:32.545 LIB libspdk_scheduler_dpdk_governor.a 00:02:32.545 LIB libspdk_keyring_file.a 00:02:32.545 LIB libspdk_scheduler_dynamic.a 00:02:32.545 SO libspdk_keyring_linux.so.1.0 00:02:32.545 LIB libspdk_accel_iaa.a 00:02:32.545 SO libspdk_scheduler_dynamic.so.4.0 00:02:32.545 LIB libspdk_accel_error.a 00:02:32.545 LIB libspdk_accel_ioat.a 00:02:32.545 SO libspdk_scheduler_gscheduler.so.4.0 00:02:32.545 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:32.545 SO libspdk_keyring_file.so.2.0 00:02:32.545 SO libspdk_accel_error.so.2.0 00:02:32.545 SO libspdk_accel_iaa.so.3.0 00:02:32.545 SO libspdk_accel_ioat.so.6.0 00:02:32.545 SYMLINK libspdk_keyring_linux.so 00:02:32.804 SYMLINK libspdk_scheduler_dynamic.so 00:02:32.804 SYMLINK libspdk_scheduler_gscheduler.so 00:02:32.804 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:32.804 SYMLINK libspdk_keyring_file.so 00:02:32.804 LIB libspdk_accel_dsa.a 00:02:32.804 SYMLINK libspdk_accel_error.so 00:02:32.804 LIB libspdk_blob_bdev.a 00:02:32.804 SYMLINK libspdk_accel_iaa.so 00:02:32.804 SYMLINK libspdk_accel_ioat.so 00:02:32.804 SO libspdk_accel_dsa.so.5.0 00:02:32.804 SO libspdk_blob_bdev.so.12.0 00:02:32.804 LIB libspdk_vfu_device.a 00:02:32.804 SYMLINK libspdk_blob_bdev.so 00:02:32.804 SYMLINK libspdk_accel_dsa.so 00:02:32.804 SO libspdk_vfu_device.so.3.0 00:02:32.804 SYMLINK libspdk_vfu_device.so 00:02:33.062 LIB libspdk_fsdev_aio.a 00:02:33.062 SO libspdk_fsdev_aio.so.1.0 00:02:33.062 LIB libspdk_sock_posix.a 00:02:33.062 SO libspdk_sock_posix.so.6.0 00:02:33.062 SYMLINK libspdk_fsdev_aio.so 00:02:33.062 SYMLINK libspdk_sock_posix.so 00:02:33.321 CC module/bdev/gpt/vbdev_gpt.o 00:02:33.321 CC module/bdev/gpt/gpt.o 00:02:33.321 CC module/blobfs/bdev/blobfs_bdev.o 00:02:33.321 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:33.321 CC module/bdev/raid/bdev_raid.o 00:02:33.321 CC module/bdev/error/vbdev_error.o 00:02:33.321 CC module/bdev/raid/bdev_raid_sb.o 00:02:33.321 CC module/bdev/error/vbdev_error_rpc.o 00:02:33.321 CC module/bdev/raid/bdev_raid_rpc.o 00:02:33.321 CC module/bdev/raid/raid0.o 00:02:33.321 CC module/bdev/delay/vbdev_delay.o 00:02:33.321 CC module/bdev/raid/raid1.o 00:02:33.321 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:33.321 CC module/bdev/raid/concat.o 00:02:33.321 CC module/bdev/iscsi/bdev_iscsi.o 00:02:33.321 CC module/bdev/malloc/bdev_malloc.o 00:02:33.321 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:33.321 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:33.321 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:33.321 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:33.321 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:33.321 CC module/bdev/passthru/vbdev_passthru.o 00:02:33.321 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:33.321 CC module/bdev/null/bdev_null.o 00:02:33.321 CC module/bdev/split/vbdev_split.o 00:02:33.321 CC module/bdev/null/bdev_null_rpc.o 00:02:33.321 CC module/bdev/split/vbdev_split_rpc.o 00:02:33.321 CC module/bdev/lvol/vbdev_lvol.o 00:02:33.321 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:33.321 CC module/bdev/aio/bdev_aio_rpc.o 00:02:33.321 CC module/bdev/aio/bdev_aio.o 00:02:33.321 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:33.321 CC 
module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:33.321 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:33.321 CC module/bdev/nvme/bdev_nvme.o 00:02:33.321 CC module/bdev/nvme/nvme_rpc.o 00:02:33.321 CC module/bdev/nvme/bdev_mdns_client.o 00:02:33.321 CC module/bdev/nvme/vbdev_opal.o 00:02:33.321 CC module/bdev/ftl/bdev_ftl.o 00:02:33.321 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:33.321 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:33.321 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:33.580 LIB libspdk_blobfs_bdev.a 00:02:33.580 SO libspdk_blobfs_bdev.so.6.0 00:02:33.580 LIB libspdk_bdev_split.a 00:02:33.580 LIB libspdk_bdev_gpt.a 00:02:33.580 SYMLINK libspdk_blobfs_bdev.so 00:02:33.580 LIB libspdk_bdev_error.a 00:02:33.580 SO libspdk_bdev_split.so.6.0 00:02:33.580 SO libspdk_bdev_gpt.so.6.0 00:02:33.580 LIB libspdk_bdev_null.a 00:02:33.580 LIB libspdk_bdev_ftl.a 00:02:33.580 SO libspdk_bdev_error.so.6.0 00:02:33.580 LIB libspdk_bdev_passthru.a 00:02:33.580 SO libspdk_bdev_ftl.so.6.0 00:02:33.580 LIB libspdk_bdev_aio.a 00:02:33.580 SO libspdk_bdev_null.so.6.0 00:02:33.839 LIB libspdk_bdev_delay.a 00:02:33.839 SYMLINK libspdk_bdev_split.so 00:02:33.839 SYMLINK libspdk_bdev_gpt.so 00:02:33.839 LIB libspdk_bdev_zone_block.a 00:02:33.839 LIB libspdk_bdev_malloc.a 00:02:33.839 SO libspdk_bdev_passthru.so.6.0 00:02:33.839 SO libspdk_bdev_aio.so.6.0 00:02:33.839 SYMLINK libspdk_bdev_error.so 00:02:33.839 SO libspdk_bdev_delay.so.6.0 00:02:33.839 LIB libspdk_bdev_iscsi.a 00:02:33.839 SYMLINK libspdk_bdev_ftl.so 00:02:33.839 SO libspdk_bdev_malloc.so.6.0 00:02:33.839 SYMLINK libspdk_bdev_null.so 00:02:33.839 SO libspdk_bdev_zone_block.so.6.0 00:02:33.839 SO libspdk_bdev_iscsi.so.6.0 00:02:33.839 SYMLINK libspdk_bdev_passthru.so 00:02:33.839 SYMLINK libspdk_bdev_aio.so 00:02:33.839 SYMLINK libspdk_bdev_delay.so 00:02:33.839 SYMLINK libspdk_bdev_malloc.so 00:02:33.839 SYMLINK libspdk_bdev_zone_block.so 00:02:33.839 LIB libspdk_bdev_lvol.a 00:02:33.839 SYMLINK libspdk_bdev_iscsi.so 00:02:33.839 SO libspdk_bdev_lvol.so.6.0 00:02:33.839 LIB libspdk_bdev_virtio.a 00:02:33.839 SO libspdk_bdev_virtio.so.6.0 00:02:33.839 SYMLINK libspdk_bdev_lvol.so 00:02:34.098 SYMLINK libspdk_bdev_virtio.so 00:02:34.098 LIB libspdk_bdev_raid.a 00:02:34.356 SO libspdk_bdev_raid.so.6.0 00:02:34.356 SYMLINK libspdk_bdev_raid.so 00:02:35.293 LIB libspdk_bdev_nvme.a 00:02:35.293 SO libspdk_bdev_nvme.so.7.1 00:02:35.293 SYMLINK libspdk_bdev_nvme.so 00:02:36.230 CC module/event/subsystems/vmd/vmd.o 00:02:36.230 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:36.230 CC module/event/subsystems/iobuf/iobuf.o 00:02:36.230 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:36.230 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:36.230 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:36.230 CC module/event/subsystems/fsdev/fsdev.o 00:02:36.230 CC module/event/subsystems/sock/sock.o 00:02:36.230 CC module/event/subsystems/scheduler/scheduler.o 00:02:36.230 CC module/event/subsystems/keyring/keyring.o 00:02:36.230 LIB libspdk_event_vfu_tgt.a 00:02:36.230 LIB libspdk_event_sock.a 00:02:36.230 LIB libspdk_event_iobuf.a 00:02:36.230 LIB libspdk_event_vmd.a 00:02:36.230 LIB libspdk_event_vhost_blk.a 00:02:36.230 LIB libspdk_event_keyring.a 00:02:36.230 SO libspdk_event_vfu_tgt.so.3.0 00:02:36.230 LIB libspdk_event_fsdev.a 00:02:36.230 LIB libspdk_event_scheduler.a 00:02:36.230 SO libspdk_event_sock.so.5.0 00:02:36.230 SO libspdk_event_vhost_blk.so.3.0 00:02:36.230 SO libspdk_event_iobuf.so.3.0 00:02:36.230 SO libspdk_event_vmd.so.6.0 
00:02:36.230 SO libspdk_event_fsdev.so.1.0 00:02:36.230 SO libspdk_event_keyring.so.1.0 00:02:36.230 SO libspdk_event_scheduler.so.4.0 00:02:36.230 SYMLINK libspdk_event_vfu_tgt.so 00:02:36.230 SYMLINK libspdk_event_vhost_blk.so 00:02:36.230 SYMLINK libspdk_event_sock.so 00:02:36.230 SYMLINK libspdk_event_vmd.so 00:02:36.230 SYMLINK libspdk_event_fsdev.so 00:02:36.230 SYMLINK libspdk_event_iobuf.so 00:02:36.230 SYMLINK libspdk_event_keyring.so 00:02:36.230 SYMLINK libspdk_event_scheduler.so 00:02:36.797 CC module/event/subsystems/accel/accel.o 00:02:36.797 LIB libspdk_event_accel.a 00:02:36.797 SO libspdk_event_accel.so.6.0 00:02:36.797 SYMLINK libspdk_event_accel.so 00:02:37.365 CC module/event/subsystems/bdev/bdev.o 00:02:37.365 LIB libspdk_event_bdev.a 00:02:37.365 SO libspdk_event_bdev.so.6.0 00:02:37.624 SYMLINK libspdk_event_bdev.so 00:02:37.882 CC module/event/subsystems/scsi/scsi.o 00:02:37.882 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:37.882 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:37.882 CC module/event/subsystems/ublk/ublk.o 00:02:37.882 CC module/event/subsystems/nbd/nbd.o 00:02:37.882 LIB libspdk_event_scsi.a 00:02:37.882 LIB libspdk_event_ublk.a 00:02:37.882 LIB libspdk_event_nbd.a 00:02:38.141 SO libspdk_event_ublk.so.3.0 00:02:38.141 SO libspdk_event_scsi.so.6.0 00:02:38.141 SO libspdk_event_nbd.so.6.0 00:02:38.141 LIB libspdk_event_nvmf.a 00:02:38.141 SYMLINK libspdk_event_ublk.so 00:02:38.141 SYMLINK libspdk_event_scsi.so 00:02:38.141 SYMLINK libspdk_event_nbd.so 00:02:38.141 SO libspdk_event_nvmf.so.6.0 00:02:38.141 SYMLINK libspdk_event_nvmf.so 00:02:38.400 CC module/event/subsystems/iscsi/iscsi.o 00:02:38.400 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:38.659 LIB libspdk_event_vhost_scsi.a 00:02:38.659 LIB libspdk_event_iscsi.a 00:02:38.659 SO libspdk_event_vhost_scsi.so.3.0 00:02:38.659 SO libspdk_event_iscsi.so.6.0 00:02:38.659 SYMLINK libspdk_event_vhost_scsi.so 00:02:38.659 SYMLINK libspdk_event_iscsi.so 00:02:38.918 SO libspdk.so.6.0 00:02:38.918 SYMLINK libspdk.so 00:02:39.176 CC app/trace_record/trace_record.o 00:02:39.176 CXX app/trace/trace.o 00:02:39.176 CC app/spdk_nvme_perf/perf.o 00:02:39.176 CC app/spdk_top/spdk_top.o 00:02:39.176 CC app/spdk_nvme_discover/discovery_aer.o 00:02:39.176 CC app/spdk_lspci/spdk_lspci.o 00:02:39.176 CC app/spdk_nvme_identify/identify.o 00:02:39.176 CC test/rpc_client/rpc_client_test.o 00:02:39.176 TEST_HEADER include/spdk/accel.h 00:02:39.176 TEST_HEADER include/spdk/accel_module.h 00:02:39.176 TEST_HEADER include/spdk/barrier.h 00:02:39.176 TEST_HEADER include/spdk/assert.h 00:02:39.176 TEST_HEADER include/spdk/bdev.h 00:02:39.176 TEST_HEADER include/spdk/base64.h 00:02:39.176 TEST_HEADER include/spdk/bdev_module.h 00:02:39.176 TEST_HEADER include/spdk/bdev_zone.h 00:02:39.176 TEST_HEADER include/spdk/bit_array.h 00:02:39.176 TEST_HEADER include/spdk/bit_pool.h 00:02:39.176 TEST_HEADER include/spdk/blob_bdev.h 00:02:39.176 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:39.176 TEST_HEADER include/spdk/blobfs.h 00:02:39.176 TEST_HEADER include/spdk/blob.h 00:02:39.176 TEST_HEADER include/spdk/conf.h 00:02:39.176 TEST_HEADER include/spdk/crc16.h 00:02:39.176 TEST_HEADER include/spdk/config.h 00:02:39.176 TEST_HEADER include/spdk/cpuset.h 00:02:39.176 TEST_HEADER include/spdk/crc32.h 00:02:39.176 TEST_HEADER include/spdk/crc64.h 00:02:39.176 TEST_HEADER include/spdk/dma.h 00:02:39.176 TEST_HEADER include/spdk/dif.h 00:02:39.176 TEST_HEADER include/spdk/endian.h 00:02:39.176 TEST_HEADER 
include/spdk/env_dpdk.h 00:02:39.176 TEST_HEADER include/spdk/env.h 00:02:39.176 TEST_HEADER include/spdk/event.h 00:02:39.176 TEST_HEADER include/spdk/fd_group.h 00:02:39.176 TEST_HEADER include/spdk/fd.h 00:02:39.176 TEST_HEADER include/spdk/fsdev.h 00:02:39.176 TEST_HEADER include/spdk/file.h 00:02:39.176 CC app/iscsi_tgt/iscsi_tgt.o 00:02:39.176 TEST_HEADER include/spdk/ftl.h 00:02:39.176 TEST_HEADER include/spdk/fsdev_module.h 00:02:39.176 TEST_HEADER include/spdk/gpt_spec.h 00:02:39.176 TEST_HEADER include/spdk/histogram_data.h 00:02:39.176 TEST_HEADER include/spdk/hexlify.h 00:02:39.176 TEST_HEADER include/spdk/idxd.h 00:02:39.176 TEST_HEADER include/spdk/idxd_spec.h 00:02:39.176 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:39.176 TEST_HEADER include/spdk/init.h 00:02:39.176 TEST_HEADER include/spdk/ioat.h 00:02:39.176 TEST_HEADER include/spdk/ioat_spec.h 00:02:39.176 TEST_HEADER include/spdk/iscsi_spec.h 00:02:39.176 CC app/spdk_dd/spdk_dd.o 00:02:39.176 TEST_HEADER include/spdk/json.h 00:02:39.176 TEST_HEADER include/spdk/jsonrpc.h 00:02:39.176 TEST_HEADER include/spdk/keyring.h 00:02:39.176 TEST_HEADER include/spdk/keyring_module.h 00:02:39.176 TEST_HEADER include/spdk/likely.h 00:02:39.176 TEST_HEADER include/spdk/lvol.h 00:02:39.176 CC app/nvmf_tgt/nvmf_main.o 00:02:39.176 TEST_HEADER include/spdk/md5.h 00:02:39.176 TEST_HEADER include/spdk/log.h 00:02:39.176 TEST_HEADER include/spdk/mmio.h 00:02:39.176 TEST_HEADER include/spdk/memory.h 00:02:39.176 TEST_HEADER include/spdk/nbd.h 00:02:39.177 TEST_HEADER include/spdk/nvme.h 00:02:39.177 TEST_HEADER include/spdk/nvme_intel.h 00:02:39.177 TEST_HEADER include/spdk/notify.h 00:02:39.177 TEST_HEADER include/spdk/net.h 00:02:39.177 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:39.177 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:39.177 TEST_HEADER include/spdk/nvme_spec.h 00:02:39.177 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:39.447 TEST_HEADER include/spdk/nvme_zns.h 00:02:39.447 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:39.447 TEST_HEADER include/spdk/nvmf.h 00:02:39.447 TEST_HEADER include/spdk/nvmf_transport.h 00:02:39.447 TEST_HEADER include/spdk/nvmf_spec.h 00:02:39.447 TEST_HEADER include/spdk/opal_spec.h 00:02:39.447 TEST_HEADER include/spdk/opal.h 00:02:39.447 TEST_HEADER include/spdk/pipe.h 00:02:39.447 TEST_HEADER include/spdk/pci_ids.h 00:02:39.447 TEST_HEADER include/spdk/queue.h 00:02:39.447 TEST_HEADER include/spdk/rpc.h 00:02:39.447 TEST_HEADER include/spdk/reduce.h 00:02:39.447 TEST_HEADER include/spdk/scheduler.h 00:02:39.447 TEST_HEADER include/spdk/scsi.h 00:02:39.447 CC app/spdk_tgt/spdk_tgt.o 00:02:39.447 TEST_HEADER include/spdk/scsi_spec.h 00:02:39.447 TEST_HEADER include/spdk/sock.h 00:02:39.447 TEST_HEADER include/spdk/stdinc.h 00:02:39.447 TEST_HEADER include/spdk/thread.h 00:02:39.447 TEST_HEADER include/spdk/trace.h 00:02:39.447 TEST_HEADER include/spdk/string.h 00:02:39.447 TEST_HEADER include/spdk/trace_parser.h 00:02:39.447 TEST_HEADER include/spdk/tree.h 00:02:39.447 TEST_HEADER include/spdk/uuid.h 00:02:39.447 TEST_HEADER include/spdk/ublk.h 00:02:39.447 TEST_HEADER include/spdk/util.h 00:02:39.447 TEST_HEADER include/spdk/version.h 00:02:39.447 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:39.447 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:39.447 TEST_HEADER include/spdk/vhost.h 00:02:39.447 TEST_HEADER include/spdk/xor.h 00:02:39.447 TEST_HEADER include/spdk/vmd.h 00:02:39.447 CXX test/cpp_headers/accel.o 00:02:39.447 TEST_HEADER include/spdk/zipf.h 00:02:39.447 CXX 
test/cpp_headers/accel_module.o 00:02:39.447 CXX test/cpp_headers/assert.o 00:02:39.447 CXX test/cpp_headers/barrier.o 00:02:39.447 CXX test/cpp_headers/bdev_module.o 00:02:39.447 CXX test/cpp_headers/base64.o 00:02:39.447 CXX test/cpp_headers/bdev_zone.o 00:02:39.447 CXX test/cpp_headers/bdev.o 00:02:39.447 CXX test/cpp_headers/bit_pool.o 00:02:39.447 CXX test/cpp_headers/bit_array.o 00:02:39.447 CXX test/cpp_headers/blob_bdev.o 00:02:39.447 CXX test/cpp_headers/blobfs.o 00:02:39.447 CXX test/cpp_headers/blobfs_bdev.o 00:02:39.447 CXX test/cpp_headers/blob.o 00:02:39.447 CXX test/cpp_headers/conf.o 00:02:39.447 CXX test/cpp_headers/config.o 00:02:39.447 CXX test/cpp_headers/cpuset.o 00:02:39.447 CXX test/cpp_headers/crc64.o 00:02:39.447 CXX test/cpp_headers/crc32.o 00:02:39.447 CXX test/cpp_headers/crc16.o 00:02:39.447 CXX test/cpp_headers/dma.o 00:02:39.447 CXX test/cpp_headers/dif.o 00:02:39.447 CXX test/cpp_headers/endian.o 00:02:39.447 CXX test/cpp_headers/env.o 00:02:39.447 CXX test/cpp_headers/env_dpdk.o 00:02:39.447 CXX test/cpp_headers/event.o 00:02:39.447 CXX test/cpp_headers/fd.o 00:02:39.447 CXX test/cpp_headers/fd_group.o 00:02:39.447 CXX test/cpp_headers/file.o 00:02:39.447 CXX test/cpp_headers/fsdev.o 00:02:39.447 CXX test/cpp_headers/fsdev_module.o 00:02:39.447 CXX test/cpp_headers/ftl.o 00:02:39.447 CXX test/cpp_headers/histogram_data.o 00:02:39.447 CXX test/cpp_headers/gpt_spec.o 00:02:39.447 CXX test/cpp_headers/hexlify.o 00:02:39.447 CXX test/cpp_headers/idxd.o 00:02:39.447 CXX test/cpp_headers/idxd_spec.o 00:02:39.447 CXX test/cpp_headers/ioat.o 00:02:39.447 CXX test/cpp_headers/init.o 00:02:39.447 CXX test/cpp_headers/iscsi_spec.o 00:02:39.447 CXX test/cpp_headers/ioat_spec.o 00:02:39.447 CXX test/cpp_headers/jsonrpc.o 00:02:39.447 CXX test/cpp_headers/keyring.o 00:02:39.447 CXX test/cpp_headers/json.o 00:02:39.447 CXX test/cpp_headers/log.o 00:02:39.447 CXX test/cpp_headers/likely.o 00:02:39.447 CXX test/cpp_headers/keyring_module.o 00:02:39.447 CXX test/cpp_headers/lvol.o 00:02:39.447 CXX test/cpp_headers/memory.o 00:02:39.447 CXX test/cpp_headers/md5.o 00:02:39.447 CXX test/cpp_headers/mmio.o 00:02:39.447 CXX test/cpp_headers/nbd.o 00:02:39.447 CXX test/cpp_headers/net.o 00:02:39.447 CXX test/cpp_headers/notify.o 00:02:39.447 CXX test/cpp_headers/nvme.o 00:02:39.447 CXX test/cpp_headers/nvme_intel.o 00:02:39.447 CXX test/cpp_headers/nvme_ocssd.o 00:02:39.447 CXX test/cpp_headers/nvme_spec.o 00:02:39.447 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:39.447 CXX test/cpp_headers/nvme_zns.o 00:02:39.447 CXX test/cpp_headers/nvmf_cmd.o 00:02:39.447 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:39.447 CXX test/cpp_headers/nvmf.o 00:02:39.447 CXX test/cpp_headers/nvmf_spec.o 00:02:39.447 CXX test/cpp_headers/nvmf_transport.o 00:02:39.447 CXX test/cpp_headers/opal.o 00:02:39.447 CC examples/ioat/perf/perf.o 00:02:39.447 CXX test/cpp_headers/opal_spec.o 00:02:39.447 CC examples/ioat/verify/verify.o 00:02:39.447 CC test/thread/poller_perf/poller_perf.o 00:02:39.447 CXX test/cpp_headers/pci_ids.o 00:02:39.447 CC test/env/vtophys/vtophys.o 00:02:39.447 CC test/env/memory/memory_ut.o 00:02:39.447 CC examples/util/zipf/zipf.o 00:02:39.447 CC test/app/histogram_perf/histogram_perf.o 00:02:39.447 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:39.447 CC test/app/jsoncat/jsoncat.o 00:02:39.447 CC test/env/pci/pci_ut.o 00:02:39.447 CC app/fio/nvme/fio_plugin.o 00:02:39.447 CC test/dma/test_dma/test_dma.o 00:02:39.447 CC test/app/stub/stub.o 00:02:39.447 LINK spdk_lspci 
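
The long TEST_HEADER include/spdk/*.h listing followed by the CXX test/cpp_headers/*.o objects above has the shape of a header self-containedness check: every public header is compiled in its own C++ translation unit, so a header missing an include or an extern "C" guard fails the build on its own. A minimal sketch of the idea, assuming a generic include/spdk layout rather than SPDK's actual test generator:

    # For each public header, emit a C++ TU that includes only that header;
    # compilation fails if the header is not self-contained or not C++-safe.
    mkdir -p cpp_headers
    for hdr in include/spdk/*.h; do
      name=$(basename "$hdr" .h)
      printf '#include <spdk/%s.h>\n' "$name" > "cpp_headers/${name}.cpp"
      g++ -Iinclude -c "cpp_headers/${name}.cpp" -o "cpp_headers/${name}.o"
    done
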
00:02:39.710 CC app/fio/bdev/fio_plugin.o 00:02:39.710 CC test/app/bdev_svc/bdev_svc.o 00:02:39.710 LINK rpc_client_test 00:02:39.710 LINK interrupt_tgt 00:02:39.710 LINK spdk_nvme_discover 00:02:39.978 LINK iscsi_tgt 00:02:39.978 LINK nvmf_tgt 00:02:39.978 CC test/env/mem_callbacks/mem_callbacks.o 00:02:39.978 LINK spdk_trace_record 00:02:39.978 LINK vtophys 00:02:39.978 LINK histogram_perf 00:02:39.978 LINK zipf 00:02:39.978 CXX test/cpp_headers/pipe.o 00:02:39.978 CXX test/cpp_headers/queue.o 00:02:39.978 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:39.978 LINK jsoncat 00:02:39.978 CXX test/cpp_headers/reduce.o 00:02:39.978 CXX test/cpp_headers/rpc.o 00:02:39.978 CXX test/cpp_headers/scheduler.o 00:02:39.978 LINK ioat_perf 00:02:39.978 CXX test/cpp_headers/scsi.o 00:02:39.978 CXX test/cpp_headers/scsi_spec.o 00:02:39.978 CXX test/cpp_headers/sock.o 00:02:39.978 CXX test/cpp_headers/stdinc.o 00:02:39.978 CXX test/cpp_headers/string.o 00:02:39.978 CXX test/cpp_headers/thread.o 00:02:40.240 LINK verify 00:02:40.240 CXX test/cpp_headers/trace.o 00:02:40.240 CXX test/cpp_headers/trace_parser.o 00:02:40.240 CXX test/cpp_headers/tree.o 00:02:40.240 CXX test/cpp_headers/ublk.o 00:02:40.240 CXX test/cpp_headers/util.o 00:02:40.240 CXX test/cpp_headers/uuid.o 00:02:40.240 CXX test/cpp_headers/version.o 00:02:40.240 CXX test/cpp_headers/vfio_user_pci.o 00:02:40.240 CXX test/cpp_headers/vfio_user_spec.o 00:02:40.240 CXX test/cpp_headers/vhost.o 00:02:40.240 LINK poller_perf 00:02:40.240 CXX test/cpp_headers/vmd.o 00:02:40.240 CXX test/cpp_headers/xor.o 00:02:40.240 CXX test/cpp_headers/zipf.o 00:02:40.240 LINK spdk_tgt 00:02:40.240 LINK bdev_svc 00:02:40.240 LINK env_dpdk_post_init 00:02:40.240 LINK spdk_dd 00:02:40.240 LINK stub 00:02:40.240 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:40.240 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:40.240 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:40.240 LINK spdk_trace 00:02:40.497 LINK spdk_bdev 00:02:40.497 LINK pci_ut 00:02:40.497 LINK spdk_nvme 00:02:40.498 LINK test_dma 00:02:40.498 CC examples/idxd/perf/perf.o 00:02:40.498 CC examples/vmd/led/led.o 00:02:40.498 CC examples/sock/hello_world/hello_sock.o 00:02:40.498 CC examples/vmd/lsvmd/lsvmd.o 00:02:40.498 LINK nvme_fuzz 00:02:40.498 LINK spdk_nvme_identify 00:02:40.759 CC examples/thread/thread/thread_ex.o 00:02:40.759 CC test/event/reactor/reactor.o 00:02:40.759 CC test/event/reactor_perf/reactor_perf.o 00:02:40.759 CC test/event/event_perf/event_perf.o 00:02:40.759 CC test/event/app_repeat/app_repeat.o 00:02:40.759 LINK spdk_nvme_perf 00:02:40.759 CC test/event/scheduler/scheduler.o 00:02:40.759 LINK mem_callbacks 00:02:40.759 LINK vhost_fuzz 00:02:40.759 LINK spdk_top 00:02:40.759 LINK led 00:02:40.759 LINK lsvmd 00:02:40.759 CC app/vhost/vhost.o 00:02:40.759 LINK reactor_perf 00:02:40.759 LINK reactor 00:02:40.759 LINK event_perf 00:02:40.759 LINK hello_sock 00:02:40.759 LINK app_repeat 00:02:40.759 LINK idxd_perf 00:02:41.018 LINK thread 00:02:41.018 LINK scheduler 00:02:41.018 LINK vhost 00:02:41.018 LINK memory_ut 00:02:41.018 CC test/nvme/sgl/sgl.o 00:02:41.018 CC test/nvme/reset/reset.o 00:02:41.018 CC test/nvme/connect_stress/connect_stress.o 00:02:41.018 CC test/nvme/reserve/reserve.o 00:02:41.018 CC test/nvme/aer/aer.o 00:02:41.018 CC test/nvme/e2edp/nvme_dp.o 00:02:41.018 CC test/nvme/compliance/nvme_compliance.o 00:02:41.018 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:41.018 CC test/nvme/simple_copy/simple_copy.o 00:02:41.018 CC test/nvme/startup/startup.o 00:02:41.018 
CC test/nvme/fdp/fdp.o 00:02:41.018 CC test/nvme/cuse/cuse.o 00:02:41.018 CC test/nvme/fused_ordering/fused_ordering.o 00:02:41.018 CC test/nvme/boot_partition/boot_partition.o 00:02:41.018 CC test/nvme/err_injection/err_injection.o 00:02:41.018 CC test/nvme/overhead/overhead.o 00:02:41.018 CC test/accel/dif/dif.o 00:02:41.018 CC test/blobfs/mkfs/mkfs.o 00:02:41.275 CC test/lvol/esnap/esnap.o 00:02:41.275 LINK connect_stress 00:02:41.275 LINK boot_partition 00:02:41.275 LINK reserve 00:02:41.275 LINK err_injection 00:02:41.275 LINK doorbell_aers 00:02:41.275 LINK fused_ordering 00:02:41.275 LINK startup 00:02:41.275 LINK sgl 00:02:41.275 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:41.275 CC examples/nvme/hotplug/hotplug.o 00:02:41.275 CC examples/nvme/hello_world/hello_world.o 00:02:41.275 CC examples/nvme/abort/abort.o 00:02:41.275 CC examples/nvme/reconnect/reconnect.o 00:02:41.275 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:41.275 CC examples/nvme/arbitration/arbitration.o 00:02:41.275 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:41.275 LINK simple_copy 00:02:41.275 LINK reset 00:02:41.275 LINK mkfs 00:02:41.275 LINK nvme_dp 00:02:41.275 LINK aer 00:02:41.275 LINK overhead 00:02:41.275 LINK nvme_compliance 00:02:41.275 LINK fdp 00:02:41.533 CC examples/accel/perf/accel_perf.o 00:02:41.533 CC examples/blob/cli/blobcli.o 00:02:41.533 CC examples/blob/hello_world/hello_blob.o 00:02:41.533 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:41.533 LINK pmr_persistence 00:02:41.533 LINK cmb_copy 00:02:41.533 LINK hello_world 00:02:41.533 LINK hotplug 00:02:41.533 LINK arbitration 00:02:41.533 LINK reconnect 00:02:41.791 LINK abort 00:02:41.791 LINK dif 00:02:41.791 LINK hello_blob 00:02:41.791 LINK iscsi_fuzz 00:02:41.791 LINK hello_fsdev 00:02:41.791 LINK nvme_manage 00:02:41.791 LINK accel_perf 00:02:41.791 LINK blobcli 00:02:42.049 LINK cuse 00:02:42.308 CC test/bdev/bdevio/bdevio.o 00:02:42.308 CC examples/bdev/hello_world/hello_bdev.o 00:02:42.308 CC examples/bdev/bdevperf/bdevperf.o 00:02:42.567 LINK bdevio 00:02:42.567 LINK hello_bdev 00:02:43.134 LINK bdevperf 00:02:43.393 CC examples/nvmf/nvmf/nvmf.o 00:02:43.651 LINK nvmf 00:02:45.029 LINK esnap 00:02:45.029 00:02:45.029 real 0m56.512s 00:02:45.029 user 8m24.926s 00:02:45.029 sys 3m57.457s 00:02:45.029 09:40:54 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:45.029 09:40:54 make -- common/autotest_common.sh@10 -- $ set +x 00:02:45.029 ************************************ 00:02:45.029 END TEST make 00:02:45.029 ************************************ 00:02:45.029 09:40:54 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:45.029 09:40:54 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:45.029 09:40:54 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:45.029 09:40:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:45.029 09:40:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:45.029 09:40:54 -- pm/common@44 -- $ pid=3984161 00:02:45.029 09:40:54 -- pm/common@50 -- $ kill -TERM 3984161 00:02:45.029 09:40:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:45.029 09:40:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:45.029 09:40:54 -- pm/common@44 -- $ pid=3984162 00:02:45.029 09:40:54 -- pm/common@50 -- $ kill -TERM 3984162 00:02:45.029 09:40:54 -- pm/common@42 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:02:45.029 09:40:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:45.029 09:40:54 -- pm/common@44 -- $ pid=3984165 00:02:45.029 09:40:54 -- pm/common@50 -- $ kill -TERM 3984165 00:02:45.029 09:40:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:45.029 09:40:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:45.029 09:40:54 -- pm/common@44 -- $ pid=3984188 00:02:45.029 09:40:54 -- pm/common@50 -- $ sudo -E kill -TERM 3984188 00:02:45.029 09:40:54 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:45.029 09:40:54 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:45.288 09:40:54 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:02:45.288 09:40:54 -- common/autotest_common.sh@1711 -- # lcov --version 00:02:45.288 09:40:54 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:02:45.288 09:40:54 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:02:45.288 09:40:54 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:45.288 09:40:54 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:45.288 09:40:54 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:45.288 09:40:54 -- scripts/common.sh@336 -- # IFS=.-: 00:02:45.288 09:40:54 -- scripts/common.sh@336 -- # read -ra ver1 00:02:45.288 09:40:54 -- scripts/common.sh@337 -- # IFS=.-: 00:02:45.288 09:40:54 -- scripts/common.sh@337 -- # read -ra ver2 00:02:45.288 09:40:54 -- scripts/common.sh@338 -- # local 'op=<' 00:02:45.288 09:40:54 -- scripts/common.sh@340 -- # ver1_l=2 00:02:45.288 09:40:54 -- scripts/common.sh@341 -- # ver2_l=1 00:02:45.288 09:40:54 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:45.288 09:40:54 -- scripts/common.sh@344 -- # case "$op" in 00:02:45.288 09:40:54 -- scripts/common.sh@345 -- # : 1 00:02:45.288 09:40:54 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:45.288 09:40:54 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:45.288 09:40:54 -- scripts/common.sh@365 -- # decimal 1 00:02:45.288 09:40:54 -- scripts/common.sh@353 -- # local d=1 00:02:45.288 09:40:54 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:45.288 09:40:54 -- scripts/common.sh@355 -- # echo 1 00:02:45.288 09:40:54 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:45.288 09:40:54 -- scripts/common.sh@366 -- # decimal 2 00:02:45.288 09:40:54 -- scripts/common.sh@353 -- # local d=2 00:02:45.288 09:40:54 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:45.288 09:40:54 -- scripts/common.sh@355 -- # echo 2 00:02:45.288 09:40:54 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:45.288 09:40:54 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:45.288 09:40:54 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:45.288 09:40:54 -- scripts/common.sh@368 -- # return 0 00:02:45.288 09:40:54 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:45.288 09:40:54 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:02:45.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:45.288 --rc genhtml_branch_coverage=1 00:02:45.288 --rc genhtml_function_coverage=1 00:02:45.288 --rc genhtml_legend=1 00:02:45.288 --rc geninfo_all_blocks=1 00:02:45.288 --rc geninfo_unexecuted_blocks=1 00:02:45.288 00:02:45.288 ' 00:02:45.288 09:40:54 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:02:45.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:45.288 --rc genhtml_branch_coverage=1 00:02:45.288 --rc genhtml_function_coverage=1 00:02:45.288 --rc genhtml_legend=1 00:02:45.288 --rc geninfo_all_blocks=1 00:02:45.288 --rc geninfo_unexecuted_blocks=1 00:02:45.288 00:02:45.288 ' 00:02:45.288 09:40:54 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:02:45.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:45.288 --rc genhtml_branch_coverage=1 00:02:45.288 --rc genhtml_function_coverage=1 00:02:45.288 --rc genhtml_legend=1 00:02:45.288 --rc geninfo_all_blocks=1 00:02:45.288 --rc geninfo_unexecuted_blocks=1 00:02:45.288 00:02:45.288 ' 00:02:45.288 09:40:54 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:02:45.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:45.288 --rc genhtml_branch_coverage=1 00:02:45.288 --rc genhtml_function_coverage=1 00:02:45.288 --rc genhtml_legend=1 00:02:45.288 --rc geninfo_all_blocks=1 00:02:45.288 --rc geninfo_unexecuted_blocks=1 00:02:45.288 00:02:45.288 ' 00:02:45.288 09:40:54 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:45.288 09:40:54 -- nvmf/common.sh@7 -- # uname -s 00:02:45.288 09:40:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:45.288 09:40:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:45.288 09:40:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:45.288 09:40:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:45.288 09:40:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:45.288 09:40:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:45.288 09:40:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:45.288 09:40:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:45.288 09:40:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:45.288 09:40:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:45.288 09:40:54 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:02:45.288 09:40:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:02:45.288 09:40:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:45.288 09:40:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:45.288 09:40:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:45.288 09:40:54 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:45.288 09:40:54 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:45.288 09:40:54 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:45.288 09:40:54 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:45.288 09:40:54 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:45.288 09:40:54 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:45.288 09:40:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:45.288 09:40:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:45.288 09:40:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:45.288 09:40:54 -- paths/export.sh@5 -- # export PATH 00:02:45.288 09:40:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:45.288 09:40:54 -- nvmf/common.sh@51 -- # : 0 00:02:45.288 09:40:54 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:45.288 09:40:54 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:45.289 09:40:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:45.289 09:40:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:45.289 09:40:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:45.289 09:40:54 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:45.289 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:45.289 09:40:54 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:45.289 09:40:54 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:45.289 09:40:54 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:45.289 09:40:54 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:45.289 09:40:54 -- spdk/autotest.sh@32 -- # uname -s 00:02:45.289 09:40:54 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:45.289 09:40:54 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:45.289 09:40:54 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
00:02:45.289 09:40:54 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:45.289 09:40:54 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:45.289 09:40:54 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:45.289 09:40:54 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:45.289 09:40:54 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:45.289 09:40:54 -- spdk/autotest.sh@48 -- # udevadm_pid=4047703 00:02:45.289 09:40:54 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:45.289 09:40:54 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:45.289 09:40:54 -- pm/common@17 -- # local monitor 00:02:45.289 09:40:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:45.289 09:40:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:45.289 09:40:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:45.289 09:40:54 -- pm/common@21 -- # date +%s 00:02:45.289 09:40:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:45.289 09:40:54 -- pm/common@21 -- # date +%s 00:02:45.289 09:40:54 -- pm/common@25 -- # sleep 1 00:02:45.289 09:40:54 -- pm/common@21 -- # date +%s 00:02:45.289 09:40:54 -- pm/common@21 -- # date +%s 00:02:45.289 09:40:54 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733906454 00:02:45.289 09:40:54 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733906454 00:02:45.289 09:40:54 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733906454 00:02:45.289 09:40:54 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733906454 00:02:45.289 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733906454_collect-vmstat.pm.log 00:02:45.289 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733906454_collect-cpu-load.pm.log 00:02:45.289 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733906454_collect-cpu-temp.pm.log 00:02:45.289 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733906454_collect-bmc-pm.bmc.pm.log 00:02:46.225 09:40:55 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:46.225 09:40:55 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:46.225 09:40:55 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:46.225 09:40:55 -- common/autotest_common.sh@10 -- # set +x 00:02:46.225 09:40:55 -- spdk/autotest.sh@59 -- # create_test_list 00:02:46.225 09:40:55 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:46.225 09:40:55 -- common/autotest_common.sh@10 -- # set +x 00:02:46.483 09:40:55 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:46.483 09:40:55 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:46.483 09:40:55 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:46.483 09:40:55 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:46.483 09:40:55 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:46.483 09:40:55 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:46.484 09:40:55 -- common/autotest_common.sh@1457 -- # uname 00:02:46.484 09:40:55 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:46.484 09:40:55 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:46.484 09:40:55 -- common/autotest_common.sh@1477 -- # uname 00:02:46.484 09:40:55 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:46.484 09:40:55 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:46.484 09:40:55 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:46.484 lcov: LCOV version 1.15 00:02:46.484 09:40:55 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:04.572 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:04.572 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:11.139 09:41:20 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:11.139 09:41:20 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:11.139 09:41:20 -- common/autotest_common.sh@10 -- # set +x 00:03:11.139 09:41:20 -- spdk/autotest.sh@78 -- # rm -f 00:03:11.139 09:41:20 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:14.429 0000:5f:00.0 (1b96 2600): Already using the nvme driver 00:03:14.429 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:03:14.429 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:14.429 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:14.429 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:14.429 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:14.429 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:14.429 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:14.429 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:14.429 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:14.429 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:14.429 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:14.429 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:14.429 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:14.688 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:14.688 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:14.688 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:14.688 0000:80:04.0 (8086 
2021): Already using the ioatdma driver 00:03:14.688 09:41:24 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:14.688 09:41:24 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:14.688 09:41:24 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:14.688 09:41:24 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:03:14.688 09:41:24 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:03:14.688 09:41:24 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:03:14.688 09:41:24 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:14.688 09:41:24 -- common/autotest_common.sh@1669 -- # bdf=0000:5e:00.0 00:03:14.688 09:41:24 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:14.688 09:41:24 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:14.688 09:41:24 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:14.688 09:41:24 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:14.688 09:41:24 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:14.688 09:41:24 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:14.688 09:41:24 -- common/autotest_common.sh@1669 -- # bdf=0000:5f:00.0 00:03:14.688 09:41:24 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:14.688 09:41:24 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:03:14.688 09:41:24 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:03:14.688 09:41:24 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:14.688 09:41:24 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:14.688 09:41:24 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:14.688 09:41:24 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:03:14.688 09:41:24 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:03:14.688 09:41:24 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:14.688 09:41:24 -- common/autotest_common.sh@1653 -- # [[ host-managed != none ]] 00:03:14.688 09:41:24 -- common/autotest_common.sh@1672 -- # zoned_ctrls["$nvme"]=0000:5f:00.0 00:03:14.688 09:41:24 -- common/autotest_common.sh@1673 -- # continue 2 00:03:14.688 09:41:24 -- common/autotest_common.sh@1678 -- # for nvme in "${!zoned_ctrls[@]}" 00:03:14.688 09:41:24 -- common/autotest_common.sh@1679 -- # for ns in "$nvme/"nvme*n* 00:03:14.688 09:41:24 -- common/autotest_common.sh@1680 -- # zoned_devs["${ns##*/}"]=0000:5f:00.0 00:03:14.688 09:41:24 -- common/autotest_common.sh@1679 -- # for ns in "$nvme/"nvme*n* 00:03:14.688 09:41:24 -- common/autotest_common.sh@1680 -- # zoned_devs["${ns##*/}"]=0000:5f:00.0 00:03:14.688 09:41:24 -- spdk/autotest.sh@85 -- # (( 2 > 0 )) 00:03:14.688 09:41:24 -- spdk/autotest.sh@90 -- # export 'PCI_BLOCKED=0000:5f:00.0 0000:5f:00.0' 00:03:14.688 09:41:24 -- spdk/autotest.sh@90 -- # PCI_BLOCKED='0000:5f:00.0 0000:5f:00.0' 00:03:14.688 09:41:24 -- spdk/autotest.sh@91 -- # export 'PCI_ZONED=0000:5f:00.0 0000:5f:00.0' 00:03:14.688 09:41:24 -- spdk/autotest.sh@91 -- # PCI_ZONED='0000:5f:00.0 0000:5f:00.0' 00:03:14.688 09:41:24 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:14.688 09:41:24 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:14.688 09:41:24 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:14.688 09:41:24 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:14.689 09:41:24 -- 
scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:14.689 No valid GPT data, bailing 00:03:14.689 09:41:24 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:14.689 09:41:24 -- scripts/common.sh@394 -- # pt= 00:03:14.689 09:41:24 -- scripts/common.sh@395 -- # return 1 00:03:14.689 09:41:24 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:14.689 1+0 records in 00:03:14.689 1+0 records out 00:03:14.689 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00573888 s, 183 MB/s 00:03:14.689 09:41:24 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:14.689 09:41:24 -- spdk/autotest.sh@99 -- # [[ -z 0000:5f:00.0 ]] 00:03:14.689 09:41:24 -- spdk/autotest.sh@99 -- # continue 00:03:14.689 09:41:24 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:14.689 09:41:24 -- spdk/autotest.sh@99 -- # [[ -z 0000:5f:00.0 ]] 00:03:14.689 09:41:24 -- spdk/autotest.sh@99 -- # continue 00:03:14.689 09:41:24 -- spdk/autotest.sh@105 -- # sync 00:03:14.689 09:41:24 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:14.689 09:41:24 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:14.689 09:41:24 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:21.318 09:41:29 -- spdk/autotest.sh@111 -- # uname -s 00:03:21.318 09:41:29 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:21.318 09:41:29 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:21.318 09:41:29 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:23.856 Hugepages 00:03:23.856 node hugesize free / total 00:03:23.856 node0 1048576kB 0 / 0 00:03:23.856 node0 2048kB 0 / 0 00:03:23.856 node1 1048576kB 0 / 0 00:03:23.856 node1 2048kB 0 / 0 00:03:23.856 00:03:23.856 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:23.856 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:23.856 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:23.856 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:23.856 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:23.856 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:23.856 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:23.856 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:23.856 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:23.856 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:23.856 NVMe 0000:5f:00.0 1b96 2600 0 nvme nvme1 nvme1n1 nvme1n2 00:03:23.856 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:23.856 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:23.856 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:23.856 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:23.856 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:23.856 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:23.856 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:23.856 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:23.857 09:41:33 -- spdk/autotest.sh@117 -- # uname -s 00:03:23.857 09:41:33 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:23.857 09:41:33 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:23.857 09:41:33 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:27.150 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:03:27.150 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:27.150 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:27.150 0000:00:04.5 (8086 2021): ioatdma -> 
vfio-pci 00:03:27.150 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:27.150 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:27.150 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:27.150 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:27.150 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:27.150 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:27.150 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:27.150 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:27.150 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:27.150 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:27.150 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:27.150 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:27.150 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:28.088 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:28.088 09:41:37 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:29.027 09:41:38 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:29.027 09:41:38 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:29.027 09:41:38 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:29.027 09:41:38 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:29.027 09:41:38 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:29.027 09:41:38 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:29.027 09:41:38 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:29.027 09:41:38 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:29.027 09:41:38 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:29.286 09:41:38 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:29.286 09:41:38 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:29.286 09:41:38 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:32.577 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:03:32.577 Waiting for block devices as requested 00:03:32.577 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:03:32.577 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:32.835 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:32.836 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:32.836 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:33.095 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:33.095 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:33.095 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:33.354 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:33.354 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:33.354 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:33.354 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:33.612 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:33.612 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:33.612 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:33.872 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:33.872 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:33.872 09:41:43 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:33.872 09:41:43 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:03:33.872 09:41:43 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:33.872 09:41:43 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:03:33.872 09:41:43 -- 
common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:33.872 09:41:43 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:03:33.872 09:41:43 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:33.872 09:41:43 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:33.872 09:41:43 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:33.872 09:41:43 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:33.872 09:41:43 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:33.872 09:41:43 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:33.872 09:41:43 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:33.872 09:41:43 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:03:33.872 09:41:43 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:33.872 09:41:43 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:33.872 09:41:43 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:33.872 09:41:43 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:33.872 09:41:43 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:33.872 09:41:43 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:33.872 09:41:43 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:33.872 09:41:43 -- common/autotest_common.sh@1543 -- # continue 00:03:33.872 09:41:43 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:33.872 09:41:43 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:33.872 09:41:43 -- common/autotest_common.sh@10 -- # set +x 00:03:34.131 09:41:43 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:34.131 09:41:43 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:34.131 09:41:43 -- common/autotest_common.sh@10 -- # set +x 00:03:34.131 09:41:43 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:37.420 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:03:37.420 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:37.420 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:37.420 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:37.420 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:37.420 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:37.420 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:37.420 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:37.420 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:37.420 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:37.420 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:37.420 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:37.420 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:37.420 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:37.420 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:37.420 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:37.420 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:38.358 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:38.358 09:41:47 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:38.358 09:41:47 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:38.359 09:41:47 -- common/autotest_common.sh@10 -- # set +x 00:03:38.618 09:41:47 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:38.618 09:41:47 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:38.618 09:41:47 -- 
common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:38.618 09:41:47 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:38.618 09:41:47 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:38.618 09:41:47 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:38.618 09:41:47 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:38.618 09:41:47 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:38.618 09:41:47 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:38.618 09:41:47 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:38.618 09:41:47 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:38.618 09:41:47 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:38.618 09:41:47 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:38.618 09:41:48 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:38.618 09:41:48 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:38.618 09:41:48 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:38.618 09:41:48 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:03:38.618 09:41:48 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:03:38.618 09:41:48 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:38.618 09:41:48 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:03:38.618 09:41:48 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:03:38.618 09:41:48 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:03:38.618 09:41:48 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:03:38.618 09:41:48 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=4062831 00:03:38.618 09:41:48 -- common/autotest_common.sh@1585 -- # waitforlisten 4062831 00:03:38.618 09:41:48 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:38.618 09:41:48 -- common/autotest_common.sh@835 -- # '[' -z 4062831 ']' 00:03:38.618 09:41:48 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:38.618 09:41:48 -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:38.618 09:41:48 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:38.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:38.618 09:41:48 -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:38.618 09:41:48 -- common/autotest_common.sh@10 -- # set +x 00:03:38.618 [2024-12-11 09:41:48.100050] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
00:03:38.618 [2024-12-11 09:41:48.100099] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4062831 ] 00:03:38.618 [2024-12-11 09:41:48.181918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:38.877 [2024-12-11 09:41:48.221405] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:03:38.877 09:41:48 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:38.877 09:41:48 -- common/autotest_common.sh@868 -- # return 0 00:03:38.877 09:41:48 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:03:38.877 09:41:48 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:03:38.877 09:41:48 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:03:42.165 nvme0n1 00:03:42.165 09:41:51 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:42.165 [2024-12-11 09:41:51.613656] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:03:42.165 [2024-12-11 09:41:51.613686] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:03:42.165 request: 00:03:42.165 { 00:03:42.165 "nvme_ctrlr_name": "nvme0", 00:03:42.166 "password": "test", 00:03:42.166 "method": "bdev_nvme_opal_revert", 00:03:42.166 "req_id": 1 00:03:42.166 } 00:03:42.166 Got JSON-RPC error response 00:03:42.166 response: 00:03:42.166 { 00:03:42.166 "code": -32603, 00:03:42.166 "message": "Internal error" 00:03:42.166 } 00:03:42.166 09:41:51 -- common/autotest_common.sh@1591 -- # true 00:03:42.166 09:41:51 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:03:42.166 09:41:51 -- common/autotest_common.sh@1595 -- # killprocess 4062831 00:03:42.166 09:41:51 -- common/autotest_common.sh@954 -- # '[' -z 4062831 ']' 00:03:42.166 09:41:51 -- common/autotest_common.sh@958 -- # kill -0 4062831 00:03:42.166 09:41:51 -- common/autotest_common.sh@959 -- # uname 00:03:42.166 09:41:51 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:42.166 09:41:51 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4062831 00:03:42.166 09:41:51 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:42.166 09:41:51 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:42.166 09:41:51 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4062831' 00:03:42.166 killing process with pid 4062831 00:03:42.166 09:41:51 -- common/autotest_common.sh@973 -- # kill 4062831 00:03:42.166 09:41:51 -- common/autotest_common.sh@978 -- # wait 4062831 00:03:44.071 09:41:53 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:44.071 09:41:53 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:44.071 09:41:53 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:44.071 09:41:53 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:44.071 09:41:53 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:44.071 09:41:53 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:44.071 09:41:53 -- common/autotest_common.sh@10 -- # set +x 00:03:44.071 09:41:53 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:44.071 09:41:53 -- spdk/autotest.sh@155 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:44.071 09:41:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:44.071 09:41:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:44.071 09:41:53 -- common/autotest_common.sh@10 -- # set +x 00:03:44.071 ************************************ 00:03:44.071 START TEST env 00:03:44.071 ************************************ 00:03:44.071 09:41:53 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:44.071 * Looking for test storage... 00:03:44.071 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:44.071 09:41:53 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:44.071 09:41:53 env -- common/autotest_common.sh@1711 -- # lcov --version 00:03:44.071 09:41:53 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:44.071 09:41:53 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:44.071 09:41:53 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:44.071 09:41:53 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:44.071 09:41:53 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:44.071 09:41:53 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:44.071 09:41:53 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:44.071 09:41:53 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:44.071 09:41:53 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:44.071 09:41:53 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:44.071 09:41:53 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:44.071 09:41:53 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:44.071 09:41:53 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:44.071 09:41:53 env -- scripts/common.sh@344 -- # case "$op" in 00:03:44.071 09:41:53 env -- scripts/common.sh@345 -- # : 1 00:03:44.071 09:41:53 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:44.071 09:41:53 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:44.071 09:41:53 env -- scripts/common.sh@365 -- # decimal 1 00:03:44.071 09:41:53 env -- scripts/common.sh@353 -- # local d=1 00:03:44.071 09:41:53 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:44.071 09:41:53 env -- scripts/common.sh@355 -- # echo 1 00:03:44.071 09:41:53 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:44.071 09:41:53 env -- scripts/common.sh@366 -- # decimal 2 00:03:44.071 09:41:53 env -- scripts/common.sh@353 -- # local d=2 00:03:44.071 09:41:53 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:44.071 09:41:53 env -- scripts/common.sh@355 -- # echo 2 00:03:44.071 09:41:53 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:44.071 09:41:53 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:44.071 09:41:53 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:44.071 09:41:53 env -- scripts/common.sh@368 -- # return 0 00:03:44.071 09:41:53 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:44.071 09:41:53 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:44.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.071 --rc genhtml_branch_coverage=1 00:03:44.071 --rc genhtml_function_coverage=1 00:03:44.071 --rc genhtml_legend=1 00:03:44.071 --rc geninfo_all_blocks=1 00:03:44.071 --rc geninfo_unexecuted_blocks=1 00:03:44.071 00:03:44.071 ' 00:03:44.071 09:41:53 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:44.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.071 --rc genhtml_branch_coverage=1 00:03:44.071 --rc genhtml_function_coverage=1 00:03:44.071 --rc genhtml_legend=1 00:03:44.071 --rc geninfo_all_blocks=1 00:03:44.071 --rc geninfo_unexecuted_blocks=1 00:03:44.071 00:03:44.071 ' 00:03:44.071 09:41:53 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:44.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.071 --rc genhtml_branch_coverage=1 00:03:44.071 --rc genhtml_function_coverage=1 00:03:44.071 --rc genhtml_legend=1 00:03:44.071 --rc geninfo_all_blocks=1 00:03:44.071 --rc geninfo_unexecuted_blocks=1 00:03:44.071 00:03:44.071 ' 00:03:44.071 09:41:53 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:44.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.071 --rc genhtml_branch_coverage=1 00:03:44.071 --rc genhtml_function_coverage=1 00:03:44.071 --rc genhtml_legend=1 00:03:44.071 --rc geninfo_all_blocks=1 00:03:44.071 --rc geninfo_unexecuted_blocks=1 00:03:44.071 00:03:44.071 ' 00:03:44.071 09:41:53 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:44.071 09:41:53 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:44.071 09:41:53 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:44.071 09:41:53 env -- common/autotest_common.sh@10 -- # set +x 00:03:44.071 ************************************ 00:03:44.071 START TEST env_memory 00:03:44.071 ************************************ 00:03:44.071 09:41:53 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:44.071 00:03:44.071 00:03:44.071 CUnit - A unit testing framework for C - Version 2.1-3 00:03:44.071 http://cunit.sourceforge.net/ 00:03:44.071 00:03:44.071 00:03:44.071 Suite: memory 00:03:44.071 Test: alloc and free memory map ...[2024-12-11 09:41:53.561996] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:44.071 passed 00:03:44.071 Test: mem map translation ...[2024-12-11 09:41:53.581051] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:44.072 [2024-12-11 09:41:53.581066] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:44.072 [2024-12-11 09:41:53.581102] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:44.072 [2024-12-11 09:41:53.581109] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:44.072 passed 00:03:44.072 Test: mem map registration ...[2024-12-11 09:41:53.617639] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:44.072 [2024-12-11 09:41:53.617653] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:44.072 passed 00:03:44.332 Test: mem map adjacent registrations ...passed 00:03:44.332 00:03:44.332 Run Summary: Type Total Ran Passed Failed Inactive 00:03:44.332 suites 1 1 n/a 0 0 00:03:44.332 tests 4 4 4 0 0 00:03:44.332 asserts 152 152 152 0 n/a 00:03:44.332 00:03:44.332 Elapsed time = 0.136 seconds 00:03:44.332 00:03:44.332 real 0m0.149s 00:03:44.332 user 0m0.140s 00:03:44.332 sys 0m0.009s 00:03:44.332 09:41:53 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:44.332 09:41:53 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:44.332 ************************************ 00:03:44.332 END TEST env_memory 00:03:44.332 ************************************ 00:03:44.332 09:41:53 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:44.332 09:41:53 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:44.332 09:41:53 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:44.332 09:41:53 env -- common/autotest_common.sh@10 -- # set +x 00:03:44.332 ************************************ 00:03:44.332 START TEST env_vtophys 00:03:44.332 ************************************ 00:03:44.332 09:41:53 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:44.332 EAL: lib.eal log level changed from notice to debug 00:03:44.332 EAL: Detected lcore 0 as core 0 on socket 0 00:03:44.332 EAL: Detected lcore 1 as core 1 on socket 0 00:03:44.332 EAL: Detected lcore 2 as core 2 on socket 0 00:03:44.332 EAL: Detected lcore 3 as core 3 on socket 0 00:03:44.332 EAL: Detected lcore 4 as core 4 on socket 0 00:03:44.332 EAL: Detected lcore 5 as core 5 on socket 0 00:03:44.332 EAL: Detected lcore 6 as core 6 on socket 0 00:03:44.332 EAL: Detected lcore 7 as core 8 on socket 0 00:03:44.332 EAL: Detected lcore 8 as core 9 on socket 0 00:03:44.332 EAL: Detected lcore 9 as core 10 on socket 0 00:03:44.332 EAL: Detected lcore 10 as 
core 11 on socket 0 00:03:44.332 EAL: Detected lcore 11 as core 12 on socket 0 00:03:44.332 EAL: Detected lcore 12 as core 13 on socket 0 00:03:44.332 EAL: Detected lcore 13 as core 16 on socket 0 00:03:44.332 EAL: Detected lcore 14 as core 17 on socket 0 00:03:44.332 EAL: Detected lcore 15 as core 18 on socket 0 00:03:44.332 EAL: Detected lcore 16 as core 19 on socket 0 00:03:44.332 EAL: Detected lcore 17 as core 20 on socket 0 00:03:44.332 EAL: Detected lcore 18 as core 21 on socket 0 00:03:44.332 EAL: Detected lcore 19 as core 25 on socket 0 00:03:44.332 EAL: Detected lcore 20 as core 26 on socket 0 00:03:44.332 EAL: Detected lcore 21 as core 27 on socket 0 00:03:44.332 EAL: Detected lcore 22 as core 28 on socket 0 00:03:44.332 EAL: Detected lcore 23 as core 29 on socket 0 00:03:44.332 EAL: Detected lcore 24 as core 0 on socket 1 00:03:44.332 EAL: Detected lcore 25 as core 1 on socket 1 00:03:44.332 EAL: Detected lcore 26 as core 2 on socket 1 00:03:44.332 EAL: Detected lcore 27 as core 3 on socket 1 00:03:44.332 EAL: Detected lcore 28 as core 4 on socket 1 00:03:44.332 EAL: Detected lcore 29 as core 5 on socket 1 00:03:44.332 EAL: Detected lcore 30 as core 6 on socket 1 00:03:44.332 EAL: Detected lcore 31 as core 8 on socket 1 00:03:44.332 EAL: Detected lcore 32 as core 9 on socket 1 00:03:44.332 EAL: Detected lcore 33 as core 10 on socket 1 00:03:44.332 EAL: Detected lcore 34 as core 11 on socket 1 00:03:44.332 EAL: Detected lcore 35 as core 12 on socket 1 00:03:44.332 EAL: Detected lcore 36 as core 13 on socket 1 00:03:44.332 EAL: Detected lcore 37 as core 16 on socket 1 00:03:44.332 EAL: Detected lcore 38 as core 17 on socket 1 00:03:44.332 EAL: Detected lcore 39 as core 18 on socket 1 00:03:44.332 EAL: Detected lcore 40 as core 19 on socket 1 00:03:44.332 EAL: Detected lcore 41 as core 20 on socket 1 00:03:44.332 EAL: Detected lcore 42 as core 21 on socket 1 00:03:44.332 EAL: Detected lcore 43 as core 25 on socket 1 00:03:44.332 EAL: Detected lcore 44 as core 26 on socket 1 00:03:44.332 EAL: Detected lcore 45 as core 27 on socket 1 00:03:44.332 EAL: Detected lcore 46 as core 28 on socket 1 00:03:44.332 EAL: Detected lcore 47 as core 29 on socket 1 00:03:44.332 EAL: Detected lcore 48 as core 0 on socket 0 00:03:44.332 EAL: Detected lcore 49 as core 1 on socket 0 00:03:44.332 EAL: Detected lcore 50 as core 2 on socket 0 00:03:44.332 EAL: Detected lcore 51 as core 3 on socket 0 00:03:44.332 EAL: Detected lcore 52 as core 4 on socket 0 00:03:44.332 EAL: Detected lcore 53 as core 5 on socket 0 00:03:44.332 EAL: Detected lcore 54 as core 6 on socket 0 00:03:44.332 EAL: Detected lcore 55 as core 8 on socket 0 00:03:44.332 EAL: Detected lcore 56 as core 9 on socket 0 00:03:44.332 EAL: Detected lcore 57 as core 10 on socket 0 00:03:44.332 EAL: Detected lcore 58 as core 11 on socket 0 00:03:44.332 EAL: Detected lcore 59 as core 12 on socket 0 00:03:44.332 EAL: Detected lcore 60 as core 13 on socket 0 00:03:44.332 EAL: Detected lcore 61 as core 16 on socket 0 00:03:44.332 EAL: Detected lcore 62 as core 17 on socket 0 00:03:44.332 EAL: Detected lcore 63 as core 18 on socket 0 00:03:44.332 EAL: Detected lcore 64 as core 19 on socket 0 00:03:44.332 EAL: Detected lcore 65 as core 20 on socket 0 00:03:44.332 EAL: Detected lcore 66 as core 21 on socket 0 00:03:44.332 EAL: Detected lcore 67 as core 25 on socket 0 00:03:44.332 EAL: Detected lcore 68 as core 26 on socket 0 00:03:44.332 EAL: Detected lcore 69 as core 27 on socket 0 00:03:44.332 EAL: Detected lcore 70 as core 28 on socket 0 00:03:44.332 
EAL: Detected lcore 71 as core 29 on socket 0 00:03:44.332 EAL: Detected lcore 72 as core 0 on socket 1 00:03:44.332 EAL: Detected lcore 73 as core 1 on socket 1 00:03:44.332 EAL: Detected lcore 74 as core 2 on socket 1 00:03:44.332 EAL: Detected lcore 75 as core 3 on socket 1 00:03:44.332 EAL: Detected lcore 76 as core 4 on socket 1 00:03:44.332 EAL: Detected lcore 77 as core 5 on socket 1 00:03:44.332 EAL: Detected lcore 78 as core 6 on socket 1 00:03:44.332 EAL: Detected lcore 79 as core 8 on socket 1 00:03:44.332 EAL: Detected lcore 80 as core 9 on socket 1 00:03:44.332 EAL: Detected lcore 81 as core 10 on socket 1 00:03:44.332 EAL: Detected lcore 82 as core 11 on socket 1 00:03:44.332 EAL: Detected lcore 83 as core 12 on socket 1 00:03:44.332 EAL: Detected lcore 84 as core 13 on socket 1 00:03:44.332 EAL: Detected lcore 85 as core 16 on socket 1 00:03:44.332 EAL: Detected lcore 86 as core 17 on socket 1 00:03:44.332 EAL: Detected lcore 87 as core 18 on socket 1 00:03:44.332 EAL: Detected lcore 88 as core 19 on socket 1 00:03:44.332 EAL: Detected lcore 89 as core 20 on socket 1 00:03:44.332 EAL: Detected lcore 90 as core 21 on socket 1 00:03:44.332 EAL: Detected lcore 91 as core 25 on socket 1 00:03:44.332 EAL: Detected lcore 92 as core 26 on socket 1 00:03:44.332 EAL: Detected lcore 93 as core 27 on socket 1 00:03:44.332 EAL: Detected lcore 94 as core 28 on socket 1 00:03:44.332 EAL: Detected lcore 95 as core 29 on socket 1 00:03:44.332 EAL: Maximum logical cores by configuration: 128 00:03:44.332 EAL: Detected CPU lcores: 96 00:03:44.332 EAL: Detected NUMA nodes: 2 00:03:44.332 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:44.332 EAL: Detected shared linkage of DPDK 00:03:44.332 EAL: No shared files mode enabled, IPC will be disabled 00:03:44.332 EAL: Bus pci wants IOVA as 'DC' 00:03:44.332 EAL: Buses did not request a specific IOVA mode. 00:03:44.332 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:44.332 EAL: Selected IOVA mode 'VA' 00:03:44.332 EAL: Probing VFIO support... 00:03:44.332 EAL: IOMMU type 1 (Type 1) is supported 00:03:44.332 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:44.332 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:44.332 EAL: VFIO support initialized 00:03:44.332 EAL: Ask a virtual area of 0x2e000 bytes 00:03:44.332 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:44.332 EAL: Setting up physically contiguous memory... 
00:03:44.332 EAL: Setting maximum number of open files to 524288 00:03:44.332 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:44.332 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:44.332 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:44.332 EAL: Ask a virtual area of 0x61000 bytes 00:03:44.332 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:44.332 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:44.332 EAL: Ask a virtual area of 0x400000000 bytes 00:03:44.332 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:44.332 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:44.332 EAL: Ask a virtual area of 0x61000 bytes 00:03:44.332 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:44.332 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:44.332 EAL: Ask a virtual area of 0x400000000 bytes 00:03:44.332 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:44.332 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:44.332 EAL: Ask a virtual area of 0x61000 bytes 00:03:44.332 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:44.332 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:44.332 EAL: Ask a virtual area of 0x400000000 bytes 00:03:44.332 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:44.332 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:44.332 EAL: Ask a virtual area of 0x61000 bytes 00:03:44.332 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:44.332 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:44.332 EAL: Ask a virtual area of 0x400000000 bytes 00:03:44.332 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:44.332 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:44.332 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:44.332 EAL: Ask a virtual area of 0x61000 bytes 00:03:44.332 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:44.332 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:44.332 EAL: Ask a virtual area of 0x400000000 bytes 00:03:44.332 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:44.332 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:44.332 EAL: Ask a virtual area of 0x61000 bytes 00:03:44.332 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:44.333 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:44.333 EAL: Ask a virtual area of 0x400000000 bytes 00:03:44.333 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:44.333 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:44.333 EAL: Ask a virtual area of 0x61000 bytes 00:03:44.333 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:44.333 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:44.333 EAL: Ask a virtual area of 0x400000000 bytes 00:03:44.333 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:44.333 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:44.333 EAL: Ask a virtual area of 0x61000 bytes 00:03:44.333 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:44.333 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:44.333 EAL: Ask a virtual area of 0x400000000 bytes 00:03:44.333 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:03:44.333 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:44.333 EAL: Hugepages will be freed exactly as allocated. 00:03:44.333 EAL: No shared files mode enabled, IPC is disabled 00:03:44.333 EAL: No shared files mode enabled, IPC is disabled 00:03:44.333 EAL: TSC frequency is ~2100000 KHz 00:03:44.333 EAL: Main lcore 0 is ready (tid=7fec6cc77a00;cpuset=[0]) 00:03:44.333 EAL: Trying to obtain current memory policy. 00:03:44.333 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.333 EAL: Restoring previous memory policy: 0 00:03:44.333 EAL: request: mp_malloc_sync 00:03:44.333 EAL: No shared files mode enabled, IPC is disabled 00:03:44.333 EAL: Heap on socket 0 was expanded by 2MB 00:03:44.333 EAL: No shared files mode enabled, IPC is disabled 00:03:44.333 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:44.333 EAL: Mem event callback 'spdk:(nil)' registered 00:03:44.333 00:03:44.333 00:03:44.333 CUnit - A unit testing framework for C - Version 2.1-3 00:03:44.333 http://cunit.sourceforge.net/ 00:03:44.333 00:03:44.333 00:03:44.333 Suite: components_suite 00:03:44.333 Test: vtophys_malloc_test ...passed 00:03:44.333 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:44.333 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.333 EAL: Restoring previous memory policy: 4 00:03:44.333 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.333 EAL: request: mp_malloc_sync 00:03:44.333 EAL: No shared files mode enabled, IPC is disabled 00:03:44.333 EAL: Heap on socket 0 was expanded by 4MB 00:03:44.333 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.333 EAL: request: mp_malloc_sync 00:03:44.333 EAL: No shared files mode enabled, IPC is disabled 00:03:44.333 EAL: Heap on socket 0 was shrunk by 4MB 00:03:44.333 EAL: Trying to obtain current memory policy. 00:03:44.333 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.333 EAL: Restoring previous memory policy: 4 00:03:44.333 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.333 EAL: request: mp_malloc_sync 00:03:44.333 EAL: No shared files mode enabled, IPC is disabled 00:03:44.333 EAL: Heap on socket 0 was expanded by 6MB 00:03:44.333 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.333 EAL: request: mp_malloc_sync 00:03:44.333 EAL: No shared files mode enabled, IPC is disabled 00:03:44.333 EAL: Heap on socket 0 was shrunk by 6MB 00:03:44.333 EAL: Trying to obtain current memory policy. 00:03:44.333 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.333 EAL: Restoring previous memory policy: 4 00:03:44.333 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.333 EAL: request: mp_malloc_sync 00:03:44.333 EAL: No shared files mode enabled, IPC is disabled 00:03:44.333 EAL: Heap on socket 0 was expanded by 10MB 00:03:44.333 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.333 EAL: request: mp_malloc_sync 00:03:44.333 EAL: No shared files mode enabled, IPC is disabled 00:03:44.333 EAL: Heap on socket 0 was shrunk by 10MB 00:03:44.333 EAL: Trying to obtain current memory policy. 
00:03:44.333 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.333 EAL: Restoring previous memory policy: 4 00:03:44.333 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.333 EAL: request: mp_malloc_sync 00:03:44.333 EAL: No shared files mode enabled, IPC is disabled 00:03:44.333 EAL: Heap on socket 0 was expanded by 18MB 00:03:44.333 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.333 EAL: request: mp_malloc_sync 00:03:44.333 EAL: No shared files mode enabled, IPC is disabled 00:03:44.333 EAL: Heap on socket 0 was shrunk by 18MB 00:03:44.333 EAL: Trying to obtain current memory policy. 00:03:44.333 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.333 EAL: Restoring previous memory policy: 4 00:03:44.333 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.333 EAL: request: mp_malloc_sync 00:03:44.333 EAL: No shared files mode enabled, IPC is disabled 00:03:44.333 EAL: Heap on socket 0 was expanded by 34MB 00:03:44.333 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.333 EAL: request: mp_malloc_sync 00:03:44.333 EAL: No shared files mode enabled, IPC is disabled 00:03:44.333 EAL: Heap on socket 0 was shrunk by 34MB 00:03:44.333 EAL: Trying to obtain current memory policy. 00:03:44.333 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.333 EAL: Restoring previous memory policy: 4 00:03:44.333 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.333 EAL: request: mp_malloc_sync 00:03:44.333 EAL: No shared files mode enabled, IPC is disabled 00:03:44.333 EAL: Heap on socket 0 was expanded by 66MB 00:03:44.333 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.333 EAL: request: mp_malloc_sync 00:03:44.333 EAL: No shared files mode enabled, IPC is disabled 00:03:44.333 EAL: Heap on socket 0 was shrunk by 66MB 00:03:44.333 EAL: Trying to obtain current memory policy. 00:03:44.333 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.592 EAL: Restoring previous memory policy: 4 00:03:44.592 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.592 EAL: request: mp_malloc_sync 00:03:44.592 EAL: No shared files mode enabled, IPC is disabled 00:03:44.592 EAL: Heap on socket 0 was expanded by 130MB 00:03:44.592 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.592 EAL: request: mp_malloc_sync 00:03:44.592 EAL: No shared files mode enabled, IPC is disabled 00:03:44.592 EAL: Heap on socket 0 was shrunk by 130MB 00:03:44.592 EAL: Trying to obtain current memory policy. 00:03:44.592 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.592 EAL: Restoring previous memory policy: 4 00:03:44.592 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.592 EAL: request: mp_malloc_sync 00:03:44.592 EAL: No shared files mode enabled, IPC is disabled 00:03:44.592 EAL: Heap on socket 0 was expanded by 258MB 00:03:44.592 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.592 EAL: request: mp_malloc_sync 00:03:44.592 EAL: No shared files mode enabled, IPC is disabled 00:03:44.592 EAL: Heap on socket 0 was shrunk by 258MB 00:03:44.593 EAL: Trying to obtain current memory policy. 
00:03:44.593 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.851 EAL: Restoring previous memory policy: 4 00:03:44.851 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.851 EAL: request: mp_malloc_sync 00:03:44.851 EAL: No shared files mode enabled, IPC is disabled 00:03:44.851 EAL: Heap on socket 0 was expanded by 514MB 00:03:44.851 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.851 EAL: request: mp_malloc_sync 00:03:44.851 EAL: No shared files mode enabled, IPC is disabled 00:03:44.851 EAL: Heap on socket 0 was shrunk by 514MB 00:03:44.851 EAL: Trying to obtain current memory policy. 00:03:44.852 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:45.110 EAL: Restoring previous memory policy: 4 00:03:45.110 EAL: Calling mem event callback 'spdk:(nil)' 00:03:45.110 EAL: request: mp_malloc_sync 00:03:45.110 EAL: No shared files mode enabled, IPC is disabled 00:03:45.110 EAL: Heap on socket 0 was expanded by 1026MB 00:03:45.369 EAL: Calling mem event callback 'spdk:(nil)' 00:03:45.369 EAL: request: mp_malloc_sync 00:03:45.369 EAL: No shared files mode enabled, IPC is disabled 00:03:45.369 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:45.369 passed 00:03:45.369 00:03:45.369 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.369 suites 1 1 n/a 0 0 00:03:45.369 tests 2 2 2 0 0 00:03:45.369 asserts 497 497 497 0 n/a 00:03:45.369 00:03:45.369 Elapsed time = 0.969 seconds 00:03:45.369 EAL: Calling mem event callback 'spdk:(nil)' 00:03:45.369 EAL: request: mp_malloc_sync 00:03:45.369 EAL: No shared files mode enabled, IPC is disabled 00:03:45.369 EAL: Heap on socket 0 was shrunk by 2MB 00:03:45.369 EAL: No shared files mode enabled, IPC is disabled 00:03:45.369 EAL: No shared files mode enabled, IPC is disabled 00:03:45.369 EAL: No shared files mode enabled, IPC is disabled 00:03:45.369 00:03:45.369 real 0m1.115s 00:03:45.369 user 0m0.650s 00:03:45.369 sys 0m0.431s 00:03:45.369 09:41:54 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:45.369 09:41:54 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:45.369 ************************************ 00:03:45.369 END TEST env_vtophys 00:03:45.369 ************************************ 00:03:45.369 09:41:54 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:45.369 09:41:54 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:45.369 09:41:54 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:45.369 09:41:54 env -- common/autotest_common.sh@10 -- # set +x 00:03:45.369 ************************************ 00:03:45.369 START TEST env_pci 00:03:45.369 ************************************ 00:03:45.369 09:41:54 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:45.369 00:03:45.369 00:03:45.369 CUnit - A unit testing framework for C - Version 2.1-3 00:03:45.369 http://cunit.sourceforge.net/ 00:03:45.369 00:03:45.369 00:03:45.369 Suite: pci 00:03:45.369 Test: pci_hook ...[2024-12-11 09:41:54.940962] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 4064082 has claimed it 00:03:45.632 EAL: Cannot find device (10000:00:01.0) 00:03:45.632 EAL: Failed to attach device on primary process 00:03:45.632 passed 00:03:45.632 00:03:45.632 Run Summary: Type Total Ran Passed Failed Inactive 
00:03:45.632 suites 1 1 n/a 0 0 00:03:45.632 tests 1 1 1 0 0 00:03:45.632 asserts 25 25 25 0 n/a 00:03:45.632 00:03:45.632 Elapsed time = 0.031 seconds 00:03:45.632 00:03:45.632 real 0m0.052s 00:03:45.632 user 0m0.017s 00:03:45.632 sys 0m0.035s 00:03:45.632 09:41:54 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:45.632 09:41:54 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:45.632 ************************************ 00:03:45.632 END TEST env_pci 00:03:45.632 ************************************ 00:03:45.632 09:41:55 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:45.632 09:41:55 env -- env/env.sh@15 -- # uname 00:03:45.632 09:41:55 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:45.632 09:41:55 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:45.632 09:41:55 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:45.632 09:41:55 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:03:45.633 09:41:55 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:45.633 09:41:55 env -- common/autotest_common.sh@10 -- # set +x 00:03:45.633 ************************************ 00:03:45.633 START TEST env_dpdk_post_init 00:03:45.633 ************************************ 00:03:45.633 09:41:55 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:45.633 EAL: Detected CPU lcores: 96 00:03:45.633 EAL: Detected NUMA nodes: 2 00:03:45.633 EAL: Detected shared linkage of DPDK 00:03:45.633 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:45.633 EAL: Selected IOVA mode 'VA' 00:03:45.633 EAL: VFIO support initialized 00:03:45.633 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:45.633 EAL: Using IOMMU type 1 (Type 1) 00:03:45.633 EAL: Ignore mapping IO port bar(1) 00:03:45.633 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:03:45.893 EAL: Ignore mapping IO port bar(1) 00:03:45.893 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:03:45.893 EAL: Ignore mapping IO port bar(1) 00:03:45.893 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:03:45.893 EAL: Ignore mapping IO port bar(1) 00:03:45.893 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:03:45.893 EAL: Ignore mapping IO port bar(1) 00:03:45.893 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:03:45.893 EAL: Ignore mapping IO port bar(1) 00:03:45.893 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:03:45.893 EAL: Ignore mapping IO port bar(1) 00:03:45.893 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:03:45.893 EAL: Ignore mapping IO port bar(1) 00:03:45.893 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:03:46.461 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:03:46.720 EAL: Ignore mapping IO port bar(1) 00:03:46.720 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:03:46.720 EAL: Ignore mapping IO port bar(1) 00:03:46.720 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:03:46.720 EAL: Ignore mapping IO port bar(1) 00:03:46.720 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:03:46.720 EAL: Ignore mapping IO port bar(1) 00:03:46.720 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:03:46.720 EAL: Ignore mapping IO port bar(1) 00:03:46.720 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:03:46.720 EAL: Ignore mapping IO port bar(1) 00:03:46.720 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:03:46.720 EAL: Ignore mapping IO port bar(1) 00:03:46.720 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:03:46.720 EAL: Ignore mapping IO port bar(1) 00:03:46.720 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:03:50.008 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:03:50.008 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:03:50.008 Starting DPDK initialization... 00:03:50.008 Starting SPDK post initialization... 00:03:50.008 SPDK NVMe probe 00:03:50.008 Attaching to 0000:5e:00.0 00:03:50.008 Attached to 0000:5e:00.0 00:03:50.008 Cleaning up... 00:03:50.008 00:03:50.008 real 0m4.386s 00:03:50.008 user 0m2.983s 00:03:50.008 sys 0m0.471s 00:03:50.008 09:41:59 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:50.008 09:41:59 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:50.008 ************************************ 00:03:50.008 END TEST env_dpdk_post_init 00:03:50.008 ************************************ 00:03:50.008 09:41:59 env -- env/env.sh@26 -- # uname 00:03:50.008 09:41:59 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:50.008 09:41:59 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:50.008 09:41:59 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:50.008 09:41:59 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:50.008 09:41:59 env -- common/autotest_common.sh@10 -- # set +x 00:03:50.008 ************************************ 00:03:50.008 START TEST env_mem_callbacks 00:03:50.008 ************************************ 00:03:50.008 09:41:59 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:50.008 EAL: Detected CPU lcores: 96 00:03:50.008 EAL: Detected NUMA nodes: 2 00:03:50.008 EAL: Detected shared linkage of DPDK 00:03:50.008 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:50.008 EAL: Selected IOVA mode 'VA' 00:03:50.008 EAL: VFIO support initialized 00:03:50.008 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:50.008 00:03:50.008 00:03:50.008 CUnit - A unit testing framework for C - Version 2.1-3 00:03:50.008 http://cunit.sourceforge.net/ 00:03:50.008 00:03:50.008 00:03:50.008 Suite: memory 00:03:50.008 Test: test ... 
00:03:50.008 register 0x200000200000 2097152 00:03:50.008 malloc 3145728 00:03:50.008 register 0x200000400000 4194304 00:03:50.008 buf 0x200000500000 len 3145728 PASSED 00:03:50.008 malloc 64 00:03:50.008 buf 0x2000004fff40 len 64 PASSED 00:03:50.008 malloc 4194304 00:03:50.008 register 0x200000800000 6291456 00:03:50.008 buf 0x200000a00000 len 4194304 PASSED 00:03:50.008 free 0x200000500000 3145728 00:03:50.008 free 0x2000004fff40 64 00:03:50.008 unregister 0x200000400000 4194304 PASSED 00:03:50.008 free 0x200000a00000 4194304 00:03:50.008 unregister 0x200000800000 6291456 PASSED 00:03:50.008 malloc 8388608 00:03:50.008 register 0x200000400000 10485760 00:03:50.008 buf 0x200000600000 len 8388608 PASSED 00:03:50.008 free 0x200000600000 8388608 00:03:50.008 unregister 0x200000400000 10485760 PASSED 00:03:50.008 passed 00:03:50.008 00:03:50.008 Run Summary: Type Total Ran Passed Failed Inactive 00:03:50.008 suites 1 1 n/a 0 0 00:03:50.008 tests 1 1 1 0 0 00:03:50.008 asserts 15 15 15 0 n/a 00:03:50.008 00:03:50.008 Elapsed time = 0.008 seconds 00:03:50.008 00:03:50.008 real 0m0.063s 00:03:50.008 user 0m0.022s 00:03:50.008 sys 0m0.041s 00:03:50.008 09:41:59 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:50.008 09:41:59 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:50.008 ************************************ 00:03:50.008 END TEST env_mem_callbacks 00:03:50.008 ************************************ 00:03:50.267 00:03:50.267 real 0m6.305s 00:03:50.267 user 0m4.050s 00:03:50.267 sys 0m1.323s 00:03:50.267 09:41:59 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:50.267 09:41:59 env -- common/autotest_common.sh@10 -- # set +x 00:03:50.267 ************************************ 00:03:50.267 END TEST env 00:03:50.267 ************************************ 00:03:50.267 09:41:59 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:50.267 09:41:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:50.267 09:41:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:50.267 09:41:59 -- common/autotest_common.sh@10 -- # set +x 00:03:50.267 ************************************ 00:03:50.267 START TEST rpc 00:03:50.267 ************************************ 00:03:50.267 09:41:59 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:50.267 * Looking for test storage... 
00:03:50.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:50.267 09:41:59 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:50.267 09:41:59 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:03:50.267 09:41:59 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:50.526 09:41:59 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:50.526 09:41:59 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:50.526 09:41:59 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:50.526 09:41:59 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:50.526 09:41:59 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:50.526 09:41:59 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:50.526 09:41:59 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:50.526 09:41:59 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:50.526 09:41:59 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:50.526 09:41:59 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:50.526 09:41:59 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:50.526 09:41:59 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:50.526 09:41:59 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:50.526 09:41:59 rpc -- scripts/common.sh@345 -- # : 1 00:03:50.526 09:41:59 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:50.526 09:41:59 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:50.526 09:41:59 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:50.526 09:41:59 rpc -- scripts/common.sh@353 -- # local d=1 00:03:50.526 09:41:59 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:50.526 09:41:59 rpc -- scripts/common.sh@355 -- # echo 1 00:03:50.526 09:41:59 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:50.526 09:41:59 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:50.526 09:41:59 rpc -- scripts/common.sh@353 -- # local d=2 00:03:50.526 09:41:59 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:50.526 09:41:59 rpc -- scripts/common.sh@355 -- # echo 2 00:03:50.526 09:41:59 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:50.526 09:41:59 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:50.526 09:41:59 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:50.526 09:41:59 rpc -- scripts/common.sh@368 -- # return 0 00:03:50.526 09:41:59 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:50.526 09:41:59 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:50.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.526 --rc genhtml_branch_coverage=1 00:03:50.526 --rc genhtml_function_coverage=1 00:03:50.526 --rc genhtml_legend=1 00:03:50.526 --rc geninfo_all_blocks=1 00:03:50.526 --rc geninfo_unexecuted_blocks=1 00:03:50.526 00:03:50.526 ' 00:03:50.526 09:41:59 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:50.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.526 --rc genhtml_branch_coverage=1 00:03:50.526 --rc genhtml_function_coverage=1 00:03:50.526 --rc genhtml_legend=1 00:03:50.526 --rc geninfo_all_blocks=1 00:03:50.526 --rc geninfo_unexecuted_blocks=1 00:03:50.526 00:03:50.526 ' 00:03:50.526 09:41:59 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:50.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.526 --rc genhtml_branch_coverage=1 00:03:50.526 --rc genhtml_function_coverage=1 
00:03:50.526 --rc genhtml_legend=1 00:03:50.526 --rc geninfo_all_blocks=1 00:03:50.526 --rc geninfo_unexecuted_blocks=1 00:03:50.526 00:03:50.526 ' 00:03:50.526 09:41:59 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:50.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.526 --rc genhtml_branch_coverage=1 00:03:50.526 --rc genhtml_function_coverage=1 00:03:50.527 --rc genhtml_legend=1 00:03:50.527 --rc geninfo_all_blocks=1 00:03:50.527 --rc geninfo_unexecuted_blocks=1 00:03:50.527 00:03:50.527 ' 00:03:50.527 09:41:59 rpc -- rpc/rpc.sh@65 -- # spdk_pid=4064961 00:03:50.527 09:41:59 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:50.527 09:41:59 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:50.527 09:41:59 rpc -- rpc/rpc.sh@67 -- # waitforlisten 4064961 00:03:50.527 09:41:59 rpc -- common/autotest_common.sh@835 -- # '[' -z 4064961 ']' 00:03:50.527 09:41:59 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:50.527 09:41:59 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:50.527 09:41:59 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:50.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:50.527 09:41:59 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:50.527 09:41:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:50.527 [2024-12-11 09:41:59.919815] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:03:50.527 [2024-12-11 09:41:59.919865] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4064961 ] 00:03:50.527 [2024-12-11 09:41:59.996303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:50.527 [2024-12-11 09:42:00.044468] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:50.527 [2024-12-11 09:42:00.044510] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 4064961' to capture a snapshot of events at runtime. 00:03:50.527 [2024-12-11 09:42:00.044519] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:50.527 [2024-12-11 09:42:00.044526] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:50.527 [2024-12-11 09:42:00.044532] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid4064961 for offline analysis/debug. 
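Because spdk_tgt was started with -e bdev, the tracepoint group mask makes the shm file named in the NOTICEs above worth capturing. A sketch of both options the NOTICEs suggest (pid and shm name are the ones printed above):

  ./build/bin/spdk_trace -s spdk_tgt -p 4064961                   # live snapshot while the target runs
  ./build/bin/spdk_trace -f /dev/shm/spdk_tgt_trace.pid4064961    # offline, after copying the shm file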
00:03:50.527 [2024-12-11 09:42:00.045083] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:03:50.785 09:42:00 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:50.785 09:42:00 rpc -- common/autotest_common.sh@868 -- # return 0 00:03:50.786 09:42:00 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:50.786 09:42:00 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:50.786 09:42:00 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:50.786 09:42:00 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:50.786 09:42:00 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:50.786 09:42:00 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:50.786 09:42:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:50.786 ************************************ 00:03:50.786 START TEST rpc_integrity 00:03:50.786 ************************************ 00:03:50.786 09:42:00 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:50.786 09:42:00 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:50.786 09:42:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:50.786 09:42:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.786 09:42:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:50.786 09:42:00 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:50.786 09:42:00 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:50.786 09:42:00 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:50.786 09:42:00 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:50.786 09:42:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:50.786 09:42:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.169 09:42:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.169 09:42:00 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:51.169 09:42:00 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:51.169 09:42:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.169 09:42:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.169 09:42:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.169 09:42:00 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:51.169 { 00:03:51.169 "name": "Malloc0", 00:03:51.169 "aliases": [ 00:03:51.169 "6e3005f0-d6ee-4a72-abed-91e78b3083dd" 00:03:51.169 ], 00:03:51.169 "product_name": "Malloc disk", 00:03:51.169 "block_size": 512, 00:03:51.169 "num_blocks": 16384, 00:03:51.169 "uuid": "6e3005f0-d6ee-4a72-abed-91e78b3083dd", 00:03:51.169 "assigned_rate_limits": { 00:03:51.169 "rw_ios_per_sec": 0, 00:03:51.169 "rw_mbytes_per_sec": 0, 00:03:51.169 "r_mbytes_per_sec": 0, 00:03:51.169 "w_mbytes_per_sec": 0 00:03:51.169 }, 
00:03:51.169 "claimed": false, 00:03:51.169 "zoned": false, 00:03:51.169 "supported_io_types": { 00:03:51.169 "read": true, 00:03:51.169 "write": true, 00:03:51.169 "unmap": true, 00:03:51.169 "flush": true, 00:03:51.169 "reset": true, 00:03:51.169 "nvme_admin": false, 00:03:51.169 "nvme_io": false, 00:03:51.169 "nvme_io_md": false, 00:03:51.169 "write_zeroes": true, 00:03:51.169 "zcopy": true, 00:03:51.169 "get_zone_info": false, 00:03:51.169 "zone_management": false, 00:03:51.169 "zone_append": false, 00:03:51.169 "compare": false, 00:03:51.169 "compare_and_write": false, 00:03:51.169 "abort": true, 00:03:51.169 "seek_hole": false, 00:03:51.169 "seek_data": false, 00:03:51.169 "copy": true, 00:03:51.169 "nvme_iov_md": false 00:03:51.169 }, 00:03:51.169 "memory_domains": [ 00:03:51.169 { 00:03:51.169 "dma_device_id": "system", 00:03:51.169 "dma_device_type": 1 00:03:51.169 }, 00:03:51.169 { 00:03:51.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:51.169 "dma_device_type": 2 00:03:51.169 } 00:03:51.169 ], 00:03:51.169 "driver_specific": {} 00:03:51.169 } 00:03:51.169 ]' 00:03:51.169 09:42:00 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:51.169 09:42:00 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:51.169 09:42:00 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:51.169 09:42:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.169 09:42:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.169 [2024-12-11 09:42:00.433009] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:51.169 [2024-12-11 09:42:00.433039] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:51.169 [2024-12-11 09:42:00.433051] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xfe8ac0 00:03:51.169 [2024-12-11 09:42:00.433058] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:51.169 [2024-12-11 09:42:00.434126] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:51.169 [2024-12-11 09:42:00.434147] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:51.169 Passthru0 00:03:51.169 09:42:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.169 09:42:00 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:51.169 09:42:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.169 09:42:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.169 09:42:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.169 09:42:00 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:51.169 { 00:03:51.169 "name": "Malloc0", 00:03:51.169 "aliases": [ 00:03:51.169 "6e3005f0-d6ee-4a72-abed-91e78b3083dd" 00:03:51.169 ], 00:03:51.169 "product_name": "Malloc disk", 00:03:51.169 "block_size": 512, 00:03:51.169 "num_blocks": 16384, 00:03:51.169 "uuid": "6e3005f0-d6ee-4a72-abed-91e78b3083dd", 00:03:51.169 "assigned_rate_limits": { 00:03:51.169 "rw_ios_per_sec": 0, 00:03:51.169 "rw_mbytes_per_sec": 0, 00:03:51.169 "r_mbytes_per_sec": 0, 00:03:51.169 "w_mbytes_per_sec": 0 00:03:51.169 }, 00:03:51.169 "claimed": true, 00:03:51.169 "claim_type": "exclusive_write", 00:03:51.169 "zoned": false, 00:03:51.169 "supported_io_types": { 00:03:51.169 "read": true, 00:03:51.169 "write": true, 00:03:51.169 "unmap": true, 00:03:51.169 "flush": 
true, 00:03:51.169 "reset": true, 00:03:51.169 "nvme_admin": false, 00:03:51.169 "nvme_io": false, 00:03:51.169 "nvme_io_md": false, 00:03:51.169 "write_zeroes": true, 00:03:51.169 "zcopy": true, 00:03:51.169 "get_zone_info": false, 00:03:51.169 "zone_management": false, 00:03:51.169 "zone_append": false, 00:03:51.169 "compare": false, 00:03:51.169 "compare_and_write": false, 00:03:51.169 "abort": true, 00:03:51.169 "seek_hole": false, 00:03:51.169 "seek_data": false, 00:03:51.169 "copy": true, 00:03:51.169 "nvme_iov_md": false 00:03:51.169 }, 00:03:51.169 "memory_domains": [ 00:03:51.169 { 00:03:51.169 "dma_device_id": "system", 00:03:51.169 "dma_device_type": 1 00:03:51.169 }, 00:03:51.169 { 00:03:51.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:51.169 "dma_device_type": 2 00:03:51.169 } 00:03:51.169 ], 00:03:51.169 "driver_specific": {} 00:03:51.169 }, 00:03:51.169 { 00:03:51.169 "name": "Passthru0", 00:03:51.169 "aliases": [ 00:03:51.169 "63f5c8e3-4bd3-5b1c-9695-1d3d30c35746" 00:03:51.169 ], 00:03:51.169 "product_name": "passthru", 00:03:51.169 "block_size": 512, 00:03:51.169 "num_blocks": 16384, 00:03:51.169 "uuid": "63f5c8e3-4bd3-5b1c-9695-1d3d30c35746", 00:03:51.169 "assigned_rate_limits": { 00:03:51.169 "rw_ios_per_sec": 0, 00:03:51.169 "rw_mbytes_per_sec": 0, 00:03:51.169 "r_mbytes_per_sec": 0, 00:03:51.169 "w_mbytes_per_sec": 0 00:03:51.169 }, 00:03:51.169 "claimed": false, 00:03:51.169 "zoned": false, 00:03:51.169 "supported_io_types": { 00:03:51.169 "read": true, 00:03:51.169 "write": true, 00:03:51.169 "unmap": true, 00:03:51.169 "flush": true, 00:03:51.169 "reset": true, 00:03:51.169 "nvme_admin": false, 00:03:51.169 "nvme_io": false, 00:03:51.169 "nvme_io_md": false, 00:03:51.169 "write_zeroes": true, 00:03:51.169 "zcopy": true, 00:03:51.169 "get_zone_info": false, 00:03:51.169 "zone_management": false, 00:03:51.169 "zone_append": false, 00:03:51.169 "compare": false, 00:03:51.169 "compare_and_write": false, 00:03:51.169 "abort": true, 00:03:51.169 "seek_hole": false, 00:03:51.169 "seek_data": false, 00:03:51.169 "copy": true, 00:03:51.169 "nvme_iov_md": false 00:03:51.169 }, 00:03:51.169 "memory_domains": [ 00:03:51.169 { 00:03:51.169 "dma_device_id": "system", 00:03:51.169 "dma_device_type": 1 00:03:51.169 }, 00:03:51.169 { 00:03:51.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:51.169 "dma_device_type": 2 00:03:51.169 } 00:03:51.169 ], 00:03:51.169 "driver_specific": { 00:03:51.169 "passthru": { 00:03:51.169 "name": "Passthru0", 00:03:51.169 "base_bdev_name": "Malloc0" 00:03:51.169 } 00:03:51.169 } 00:03:51.169 } 00:03:51.169 ]' 00:03:51.169 09:42:00 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:51.169 09:42:00 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:51.169 09:42:00 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:51.169 09:42:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.169 09:42:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.169 09:42:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.169 09:42:00 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:51.169 09:42:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.169 09:42:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.169 09:42:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.169 09:42:00 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:03:51.170 09:42:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.170 09:42:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.170 09:42:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.170 09:42:00 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:51.170 09:42:00 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:51.170 09:42:00 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:51.170 00:03:51.170 real 0m0.272s 00:03:51.170 user 0m0.174s 00:03:51.170 sys 0m0.035s 00:03:51.170 09:42:00 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:51.170 09:42:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.170 ************************************ 00:03:51.170 END TEST rpc_integrity 00:03:51.170 ************************************ 00:03:51.170 09:42:00 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:51.170 09:42:00 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:51.170 09:42:00 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:51.170 09:42:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:51.170 ************************************ 00:03:51.170 START TEST rpc_plugins 00:03:51.170 ************************************ 00:03:51.170 09:42:00 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:03:51.170 09:42:00 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:51.170 09:42:00 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.170 09:42:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:51.170 09:42:00 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.170 09:42:00 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:51.170 09:42:00 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:51.170 09:42:00 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.170 09:42:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:51.170 09:42:00 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.170 09:42:00 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:51.170 { 00:03:51.170 "name": "Malloc1", 00:03:51.170 "aliases": [ 00:03:51.170 "fc9f526d-b45e-4bce-9ccf-d8de89c3f62d" 00:03:51.170 ], 00:03:51.170 "product_name": "Malloc disk", 00:03:51.170 "block_size": 4096, 00:03:51.170 "num_blocks": 256, 00:03:51.170 "uuid": "fc9f526d-b45e-4bce-9ccf-d8de89c3f62d", 00:03:51.170 "assigned_rate_limits": { 00:03:51.170 "rw_ios_per_sec": 0, 00:03:51.170 "rw_mbytes_per_sec": 0, 00:03:51.170 "r_mbytes_per_sec": 0, 00:03:51.170 "w_mbytes_per_sec": 0 00:03:51.170 }, 00:03:51.170 "claimed": false, 00:03:51.170 "zoned": false, 00:03:51.170 "supported_io_types": { 00:03:51.170 "read": true, 00:03:51.170 "write": true, 00:03:51.170 "unmap": true, 00:03:51.170 "flush": true, 00:03:51.170 "reset": true, 00:03:51.170 "nvme_admin": false, 00:03:51.170 "nvme_io": false, 00:03:51.170 "nvme_io_md": false, 00:03:51.170 "write_zeroes": true, 00:03:51.170 "zcopy": true, 00:03:51.170 "get_zone_info": false, 00:03:51.170 "zone_management": false, 00:03:51.170 "zone_append": false, 00:03:51.170 "compare": false, 00:03:51.170 "compare_and_write": false, 00:03:51.170 "abort": true, 00:03:51.170 "seek_hole": false, 00:03:51.170 "seek_data": false, 00:03:51.170 "copy": true, 00:03:51.170 "nvme_iov_md": false 
00:03:51.170 }, 00:03:51.170 "memory_domains": [ 00:03:51.170 { 00:03:51.170 "dma_device_id": "system", 00:03:51.170 "dma_device_type": 1 00:03:51.170 }, 00:03:51.170 { 00:03:51.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:51.170 "dma_device_type": 2 00:03:51.170 } 00:03:51.170 ], 00:03:51.170 "driver_specific": {} 00:03:51.170 } 00:03:51.170 ]' 00:03:51.170 09:42:00 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:51.170 09:42:00 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:51.170 09:42:00 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:51.170 09:42:00 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.170 09:42:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:51.170 09:42:00 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.170 09:42:00 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:51.170 09:42:00 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.170 09:42:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:51.488 09:42:00 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.488 09:42:00 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:51.488 09:42:00 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:51.488 09:42:00 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:51.488 00:03:51.488 real 0m0.142s 00:03:51.488 user 0m0.087s 00:03:51.488 sys 0m0.019s 00:03:51.488 09:42:00 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:51.488 09:42:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:51.488 ************************************ 00:03:51.488 END TEST rpc_plugins 00:03:51.488 ************************************ 00:03:51.488 09:42:00 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:51.488 09:42:00 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:51.488 09:42:00 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:51.488 09:42:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:51.488 ************************************ 00:03:51.488 START TEST rpc_trace_cmd_test 00:03:51.488 ************************************ 00:03:51.488 09:42:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:03:51.488 09:42:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:51.488 09:42:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:51.488 09:42:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.488 09:42:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:51.488 09:42:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.488 09:42:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:51.488 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid4064961", 00:03:51.488 "tpoint_group_mask": "0x8", 00:03:51.488 "iscsi_conn": { 00:03:51.488 "mask": "0x2", 00:03:51.488 "tpoint_mask": "0x0" 00:03:51.488 }, 00:03:51.488 "scsi": { 00:03:51.488 "mask": "0x4", 00:03:51.488 "tpoint_mask": "0x0" 00:03:51.488 }, 00:03:51.488 "bdev": { 00:03:51.488 "mask": "0x8", 00:03:51.488 "tpoint_mask": "0xffffffffffffffff" 00:03:51.488 }, 00:03:51.488 "nvmf_rdma": { 00:03:51.488 "mask": "0x10", 00:03:51.488 "tpoint_mask": "0x0" 00:03:51.488 }, 00:03:51.488 "nvmf_tcp": { 00:03:51.488 "mask": "0x20", 00:03:51.488 
"tpoint_mask": "0x0" 00:03:51.488 }, 00:03:51.488 "ftl": { 00:03:51.488 "mask": "0x40", 00:03:51.488 "tpoint_mask": "0x0" 00:03:51.488 }, 00:03:51.488 "blobfs": { 00:03:51.488 "mask": "0x80", 00:03:51.488 "tpoint_mask": "0x0" 00:03:51.488 }, 00:03:51.488 "dsa": { 00:03:51.488 "mask": "0x200", 00:03:51.488 "tpoint_mask": "0x0" 00:03:51.488 }, 00:03:51.488 "thread": { 00:03:51.488 "mask": "0x400", 00:03:51.488 "tpoint_mask": "0x0" 00:03:51.488 }, 00:03:51.488 "nvme_pcie": { 00:03:51.488 "mask": "0x800", 00:03:51.488 "tpoint_mask": "0x0" 00:03:51.488 }, 00:03:51.488 "iaa": { 00:03:51.488 "mask": "0x1000", 00:03:51.488 "tpoint_mask": "0x0" 00:03:51.488 }, 00:03:51.488 "nvme_tcp": { 00:03:51.488 "mask": "0x2000", 00:03:51.488 "tpoint_mask": "0x0" 00:03:51.488 }, 00:03:51.488 "bdev_nvme": { 00:03:51.488 "mask": "0x4000", 00:03:51.488 "tpoint_mask": "0x0" 00:03:51.488 }, 00:03:51.488 "sock": { 00:03:51.488 "mask": "0x8000", 00:03:51.488 "tpoint_mask": "0x0" 00:03:51.488 }, 00:03:51.488 "blob": { 00:03:51.488 "mask": "0x10000", 00:03:51.488 "tpoint_mask": "0x0" 00:03:51.488 }, 00:03:51.488 "bdev_raid": { 00:03:51.488 "mask": "0x20000", 00:03:51.488 "tpoint_mask": "0x0" 00:03:51.488 }, 00:03:51.488 "scheduler": { 00:03:51.488 "mask": "0x40000", 00:03:51.488 "tpoint_mask": "0x0" 00:03:51.488 } 00:03:51.488 }' 00:03:51.488 09:42:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:51.488 09:42:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:51.488 09:42:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:51.488 09:42:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:51.488 09:42:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:51.488 09:42:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:51.488 09:42:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:51.488 09:42:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:51.489 09:42:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:51.748 09:42:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:51.748 00:03:51.748 real 0m0.229s 00:03:51.748 user 0m0.191s 00:03:51.748 sys 0m0.028s 00:03:51.748 09:42:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:51.748 09:42:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:51.748 ************************************ 00:03:51.748 END TEST rpc_trace_cmd_test 00:03:51.748 ************************************ 00:03:51.748 09:42:01 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:51.748 09:42:01 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:51.748 09:42:01 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:51.748 09:42:01 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:51.748 09:42:01 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:51.748 09:42:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:51.748 ************************************ 00:03:51.748 START TEST rpc_daemon_integrity 00:03:51.748 ************************************ 00:03:51.748 09:42:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:51.748 09:42:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:51.748 09:42:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.748 09:42:01 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.748 09:42:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.748 09:42:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:51.748 09:42:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:51.748 09:42:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:51.748 09:42:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:51.748 09:42:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.748 09:42:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.748 09:42:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.748 09:42:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:51.748 09:42:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:51.748 09:42:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.748 09:42:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.748 09:42:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.748 09:42:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:51.748 { 00:03:51.748 "name": "Malloc2", 00:03:51.748 "aliases": [ 00:03:51.748 "f820d3a0-a183-49ae-b7df-f4c044bf6f85" 00:03:51.748 ], 00:03:51.748 "product_name": "Malloc disk", 00:03:51.748 "block_size": 512, 00:03:51.748 "num_blocks": 16384, 00:03:51.748 "uuid": "f820d3a0-a183-49ae-b7df-f4c044bf6f85", 00:03:51.748 "assigned_rate_limits": { 00:03:51.748 "rw_ios_per_sec": 0, 00:03:51.748 "rw_mbytes_per_sec": 0, 00:03:51.748 "r_mbytes_per_sec": 0, 00:03:51.748 "w_mbytes_per_sec": 0 00:03:51.748 }, 00:03:51.748 "claimed": false, 00:03:51.748 "zoned": false, 00:03:51.748 "supported_io_types": { 00:03:51.748 "read": true, 00:03:51.748 "write": true, 00:03:51.748 "unmap": true, 00:03:51.748 "flush": true, 00:03:51.748 "reset": true, 00:03:51.748 "nvme_admin": false, 00:03:51.748 "nvme_io": false, 00:03:51.748 "nvme_io_md": false, 00:03:51.748 "write_zeroes": true, 00:03:51.748 "zcopy": true, 00:03:51.748 "get_zone_info": false, 00:03:51.748 "zone_management": false, 00:03:51.748 "zone_append": false, 00:03:51.748 "compare": false, 00:03:51.748 "compare_and_write": false, 00:03:51.748 "abort": true, 00:03:51.748 "seek_hole": false, 00:03:51.748 "seek_data": false, 00:03:51.748 "copy": true, 00:03:51.748 "nvme_iov_md": false 00:03:51.748 }, 00:03:51.748 "memory_domains": [ 00:03:51.748 { 00:03:51.748 "dma_device_id": "system", 00:03:51.748 "dma_device_type": 1 00:03:51.748 }, 00:03:51.748 { 00:03:51.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:51.748 "dma_device_type": 2 00:03:51.748 } 00:03:51.748 ], 00:03:51.748 "driver_specific": {} 00:03:51.748 } 00:03:51.748 ]' 00:03:51.748 09:42:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:51.748 09:42:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:51.748 09:42:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:51.748 09:42:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.748 09:42:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.748 [2024-12-11 09:42:01.295337] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:51.748 
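Once the passthru vbdev completes the claim sequence logged just below, the base bdev reports itself as claimed, which can be checked directly over RPC (jq paths taken from the JSON dumps in this log):

  ./scripts/rpc.py bdev_get_bdevs -b Malloc2 | jq '.[0].claimed'         # true while Passthru0 holds the claim
  ./scripts/rpc.py bdev_get_bdevs -b Malloc2 | jq -r '.[0].claim_type'   # exclusive_write, as in the dumps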
[2024-12-11 09:42:01.295367] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:51.748 [2024-12-11 09:42:01.295381] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xea4d10 00:03:51.748 [2024-12-11 09:42:01.295388] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:51.748 [2024-12-11 09:42:01.296392] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:51.748 [2024-12-11 09:42:01.296415] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:51.748 Passthru0 00:03:51.748 09:42:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.748 09:42:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:51.749 09:42:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.749 09:42:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.749 09:42:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.749 09:42:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:51.749 { 00:03:51.749 "name": "Malloc2", 00:03:51.749 "aliases": [ 00:03:51.749 "f820d3a0-a183-49ae-b7df-f4c044bf6f85" 00:03:51.749 ], 00:03:51.749 "product_name": "Malloc disk", 00:03:51.749 "block_size": 512, 00:03:51.749 "num_blocks": 16384, 00:03:51.749 "uuid": "f820d3a0-a183-49ae-b7df-f4c044bf6f85", 00:03:51.749 "assigned_rate_limits": { 00:03:51.749 "rw_ios_per_sec": 0, 00:03:51.749 "rw_mbytes_per_sec": 0, 00:03:51.749 "r_mbytes_per_sec": 0, 00:03:51.749 "w_mbytes_per_sec": 0 00:03:51.749 }, 00:03:51.749 "claimed": true, 00:03:51.749 "claim_type": "exclusive_write", 00:03:51.749 "zoned": false, 00:03:51.749 "supported_io_types": { 00:03:51.749 "read": true, 00:03:51.749 "write": true, 00:03:51.749 "unmap": true, 00:03:51.749 "flush": true, 00:03:51.749 "reset": true, 00:03:51.749 "nvme_admin": false, 00:03:51.749 "nvme_io": false, 00:03:51.749 "nvme_io_md": false, 00:03:51.749 "write_zeroes": true, 00:03:51.749 "zcopy": true, 00:03:51.749 "get_zone_info": false, 00:03:51.749 "zone_management": false, 00:03:51.749 "zone_append": false, 00:03:51.749 "compare": false, 00:03:51.749 "compare_and_write": false, 00:03:51.749 "abort": true, 00:03:51.749 "seek_hole": false, 00:03:51.749 "seek_data": false, 00:03:51.749 "copy": true, 00:03:51.749 "nvme_iov_md": false 00:03:51.749 }, 00:03:51.749 "memory_domains": [ 00:03:51.749 { 00:03:51.749 "dma_device_id": "system", 00:03:51.749 "dma_device_type": 1 00:03:51.749 }, 00:03:51.749 { 00:03:51.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:51.749 "dma_device_type": 2 00:03:51.749 } 00:03:51.749 ], 00:03:51.749 "driver_specific": {} 00:03:51.749 }, 00:03:51.749 { 00:03:51.749 "name": "Passthru0", 00:03:51.749 "aliases": [ 00:03:51.749 "0e4f26c2-e634-5782-9f9f-94d8d9ca1c1f" 00:03:51.749 ], 00:03:51.749 "product_name": "passthru", 00:03:51.749 "block_size": 512, 00:03:51.749 "num_blocks": 16384, 00:03:51.749 "uuid": "0e4f26c2-e634-5782-9f9f-94d8d9ca1c1f", 00:03:51.749 "assigned_rate_limits": { 00:03:51.749 "rw_ios_per_sec": 0, 00:03:51.749 "rw_mbytes_per_sec": 0, 00:03:51.749 "r_mbytes_per_sec": 0, 00:03:51.749 "w_mbytes_per_sec": 0 00:03:51.749 }, 00:03:51.749 "claimed": false, 00:03:51.749 "zoned": false, 00:03:51.749 "supported_io_types": { 00:03:51.749 "read": true, 00:03:51.749 "write": true, 00:03:51.749 "unmap": true, 00:03:51.749 "flush": true, 00:03:51.749 "reset": true, 
00:03:51.749 "nvme_admin": false, 00:03:51.749 "nvme_io": false, 00:03:51.749 "nvme_io_md": false, 00:03:51.749 "write_zeroes": true, 00:03:51.749 "zcopy": true, 00:03:51.749 "get_zone_info": false, 00:03:51.749 "zone_management": false, 00:03:51.749 "zone_append": false, 00:03:51.749 "compare": false, 00:03:51.749 "compare_and_write": false, 00:03:51.749 "abort": true, 00:03:51.749 "seek_hole": false, 00:03:51.749 "seek_data": false, 00:03:51.749 "copy": true, 00:03:51.749 "nvme_iov_md": false 00:03:51.749 }, 00:03:51.749 "memory_domains": [ 00:03:51.749 { 00:03:51.749 "dma_device_id": "system", 00:03:51.749 "dma_device_type": 1 00:03:51.749 }, 00:03:51.749 { 00:03:51.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:51.749 "dma_device_type": 2 00:03:51.749 } 00:03:51.749 ], 00:03:51.749 "driver_specific": { 00:03:51.749 "passthru": { 00:03:51.749 "name": "Passthru0", 00:03:51.749 "base_bdev_name": "Malloc2" 00:03:51.749 } 00:03:51.749 } 00:03:51.749 } 00:03:51.749 ]' 00:03:51.749 09:42:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:52.008 09:42:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:52.008 09:42:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:52.008 09:42:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:52.008 09:42:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:52.008 09:42:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:52.008 09:42:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:52.008 09:42:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:52.008 09:42:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:52.008 09:42:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:52.008 09:42:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:52.008 09:42:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:52.008 09:42:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:52.008 09:42:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:52.008 09:42:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:52.008 09:42:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:52.008 09:42:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:52.008 00:03:52.008 real 0m0.280s 00:03:52.008 user 0m0.181s 00:03:52.008 sys 0m0.035s 00:03:52.008 09:42:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:52.008 09:42:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:52.008 ************************************ 00:03:52.008 END TEST rpc_daemon_integrity 00:03:52.008 ************************************ 00:03:52.008 09:42:01 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:52.008 09:42:01 rpc -- rpc/rpc.sh@84 -- # killprocess 4064961 00:03:52.008 09:42:01 rpc -- common/autotest_common.sh@954 -- # '[' -z 4064961 ']' 00:03:52.008 09:42:01 rpc -- common/autotest_common.sh@958 -- # kill -0 4064961 00:03:52.008 09:42:01 rpc -- common/autotest_common.sh@959 -- # uname 00:03:52.008 09:42:01 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:52.008 09:42:01 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4064961 
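The integrity suites above reduce to a five-call RPC round-trip that can be replayed by hand against a running spdk_tgt; these are the same rpc_cmd names traced above, issued via scripts/rpc.py:

  ./scripts/rpc.py bdev_malloc_create 8 512                       # 8 MB, 512-byte blocks -> Malloc0
  ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  ./scripts/rpc.py bdev_get_bdevs | jq length                     # 2: base bdev plus passthru
  ./scripts/rpc.py bdev_passthru_delete Passthru0
  ./scripts/rpc.py bdev_malloc_delete Malloc0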
00:03:52.008 09:42:01 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:52.008 09:42:01 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:52.008 09:42:01 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4064961' 00:03:52.008 killing process with pid 4064961 00:03:52.008 09:42:01 rpc -- common/autotest_common.sh@973 -- # kill 4064961 00:03:52.008 09:42:01 rpc -- common/autotest_common.sh@978 -- # wait 4064961 00:03:52.267 00:03:52.267 real 0m2.136s 00:03:52.267 user 0m2.756s 00:03:52.267 sys 0m0.670s 00:03:52.267 09:42:01 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:52.267 09:42:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:52.267 ************************************ 00:03:52.267 END TEST rpc 00:03:52.267 ************************************ 00:03:52.526 09:42:01 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:52.526 09:42:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:52.526 09:42:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:52.526 09:42:01 -- common/autotest_common.sh@10 -- # set +x 00:03:52.526 ************************************ 00:03:52.526 START TEST skip_rpc 00:03:52.526 ************************************ 00:03:52.526 09:42:01 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:52.526 * Looking for test storage... 00:03:52.526 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:52.526 09:42:01 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:52.526 09:42:01 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:03:52.526 09:42:01 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:52.526 09:42:02 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:52.526 09:42:02 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:52.526 09:42:02 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:52.526 09:42:02 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:52.526 09:42:02 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:52.526 09:42:02 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:52.526 09:42:02 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:52.526 09:42:02 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:52.526 09:42:02 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:52.526 09:42:02 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:52.526 09:42:02 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:52.526 09:42:02 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:52.526 09:42:02 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:52.526 09:42:02 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:52.526 09:42:02 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:52.526 09:42:02 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:52.526 09:42:02 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:52.526 09:42:02 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:52.526 09:42:02 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:52.526 09:42:02 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:52.526 09:42:02 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:52.526 09:42:02 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:52.526 09:42:02 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:52.526 09:42:02 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:52.526 09:42:02 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:52.526 09:42:02 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:52.526 09:42:02 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:52.526 09:42:02 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:52.526 09:42:02 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:52.526 09:42:02 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:52.526 09:42:02 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:52.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.526 --rc genhtml_branch_coverage=1 00:03:52.526 --rc genhtml_function_coverage=1 00:03:52.526 --rc genhtml_legend=1 00:03:52.526 --rc geninfo_all_blocks=1 00:03:52.526 --rc geninfo_unexecuted_blocks=1 00:03:52.526 00:03:52.526 ' 00:03:52.526 09:42:02 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:52.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.526 --rc genhtml_branch_coverage=1 00:03:52.526 --rc genhtml_function_coverage=1 00:03:52.526 --rc genhtml_legend=1 00:03:52.526 --rc geninfo_all_blocks=1 00:03:52.526 --rc geninfo_unexecuted_blocks=1 00:03:52.526 00:03:52.526 ' 00:03:52.526 09:42:02 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:52.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.526 --rc genhtml_branch_coverage=1 00:03:52.526 --rc genhtml_function_coverage=1 00:03:52.526 --rc genhtml_legend=1 00:03:52.526 --rc geninfo_all_blocks=1 00:03:52.526 --rc geninfo_unexecuted_blocks=1 00:03:52.526 00:03:52.526 ' 00:03:52.526 09:42:02 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:52.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.526 --rc genhtml_branch_coverage=1 00:03:52.526 --rc genhtml_function_coverage=1 00:03:52.526 --rc genhtml_legend=1 00:03:52.526 --rc geninfo_all_blocks=1 00:03:52.526 --rc geninfo_unexecuted_blocks=1 00:03:52.526 00:03:52.526 ' 00:03:52.526 09:42:02 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:52.526 09:42:02 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:52.526 09:42:02 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:52.526 09:42:02 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:52.526 09:42:02 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:52.526 09:42:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:52.785 ************************************ 00:03:52.785 START TEST skip_rpc 00:03:52.785 ************************************ 00:03:52.785 09:42:02 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:03:52.785 
09:42:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=4065717 00:03:52.785 09:42:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:52.785 09:42:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:52.785 09:42:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:52.785 [2024-12-11 09:42:02.156436] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:03:52.785 [2024-12-11 09:42:02.156477] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4065717 ] 00:03:52.785 [2024-12-11 09:42:02.235052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:52.785 [2024-12-11 09:42:02.274742] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:03:58.055 09:42:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:58.055 09:42:07 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:03:58.055 09:42:07 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:58.055 09:42:07 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:03:58.055 09:42:07 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:58.055 09:42:07 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:03:58.055 09:42:07 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:58.055 09:42:07 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:03:58.055 09:42:07 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.055 09:42:07 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:58.055 09:42:07 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:58.055 09:42:07 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:03:58.055 09:42:07 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:58.055 09:42:07 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:58.055 09:42:07 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:58.055 09:42:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:58.055 09:42:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 4065717 00:03:58.055 09:42:07 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 4065717 ']' 00:03:58.055 09:42:07 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 4065717 00:03:58.055 09:42:07 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:03:58.055 09:42:07 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:58.055 09:42:07 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4065717 00:03:58.055 09:42:07 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:58.055 09:42:07 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:58.055 09:42:07 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4065717' 00:03:58.055 killing process with pid 4065717 00:03:58.055 09:42:07 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 4065717 00:03:58.055 09:42:07 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 4065717 00:03:58.055 00:03:58.055 real 0m5.362s 00:03:58.055 user 0m5.114s 00:03:58.055 sys 0m0.279s 00:03:58.055 09:42:07 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:58.055 09:42:07 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:58.055 ************************************ 00:03:58.055 END TEST skip_rpc 00:03:58.055 ************************************ 00:03:58.055 09:42:07 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:58.055 09:42:07 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:58.055 09:42:07 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:58.055 09:42:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:58.055 ************************************ 00:03:58.055 START TEST skip_rpc_with_json 00:03:58.055 ************************************ 00:03:58.055 09:42:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:03:58.055 09:42:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:58.055 09:42:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=4066646 00:03:58.055 09:42:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:58.055 09:42:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:58.055 09:42:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 4066646 00:03:58.055 09:42:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 4066646 ']' 00:03:58.055 09:42:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:58.055 09:42:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:58.055 09:42:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:58.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:58.055 09:42:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:58.055 09:42:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:58.055 [2024-12-11 09:42:07.588921] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
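The gen_json_config step now driving this target issues three RPCs: nvmf_get_transports (expected to fail while no transport exists), nvmf_create_transport, and save_config, whose output becomes test/rpc/config.json. Roughly equivalent rpc.py calls, assuming the default /var/tmp/spdk.sock that rpc_cmd uses here:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py"    # default socket: /var/tmp/spdk.sock

    # Fails with "transport 'tcp' does not exist" until one is created
    $RPC nvmf_get_transports --trtype tcp || true

    $RPC nvmf_create_transport -t tcp          # logs "*** TCP Transport Init ***"
    $RPC save_config > "$SPDK/test/rpc/config.json"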
00:03:58.055 [2024-12-11 09:42:07.588965] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4066646 ] 00:03:58.314 [2024-12-11 09:42:07.669748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:58.314 [2024-12-11 09:42:07.710137] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:03:58.573 09:42:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:58.573 09:42:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:03:58.573 09:42:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:58.573 09:42:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.573 09:42:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:58.573 [2024-12-11 09:42:07.923948] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:58.573 request: 00:03:58.573 { 00:03:58.573 "trtype": "tcp", 00:03:58.573 "method": "nvmf_get_transports", 00:03:58.573 "req_id": 1 00:03:58.573 } 00:03:58.573 Got JSON-RPC error response 00:03:58.573 response: 00:03:58.573 { 00:03:58.573 "code": -19, 00:03:58.573 "message": "No such device" 00:03:58.573 } 00:03:58.573 09:42:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:58.573 09:42:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:58.573 09:42:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.573 09:42:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:58.573 [2024-12-11 09:42:07.936056] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:58.573 09:42:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.573 09:42:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:58.573 09:42:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.573 09:42:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:58.573 09:42:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.573 09:42:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:58.573 { 00:03:58.573 "subsystems": [ 00:03:58.573 { 00:03:58.573 "subsystem": "fsdev", 00:03:58.573 "config": [ 00:03:58.573 { 00:03:58.573 "method": "fsdev_set_opts", 00:03:58.573 "params": { 00:03:58.573 "fsdev_io_pool_size": 65535, 00:03:58.573 "fsdev_io_cache_size": 256 00:03:58.573 } 00:03:58.573 } 00:03:58.573 ] 00:03:58.573 }, 00:03:58.573 { 00:03:58.573 "subsystem": "vfio_user_target", 00:03:58.573 "config": null 00:03:58.573 }, 00:03:58.573 { 00:03:58.573 "subsystem": "keyring", 00:03:58.573 "config": [] 00:03:58.573 }, 00:03:58.573 { 00:03:58.573 "subsystem": "iobuf", 00:03:58.573 "config": [ 00:03:58.573 { 00:03:58.573 "method": "iobuf_set_options", 00:03:58.573 "params": { 00:03:58.573 "small_pool_count": 8192, 00:03:58.573 "large_pool_count": 1024, 00:03:58.573 "small_bufsize": 8192, 00:03:58.573 "large_bufsize": 135168, 00:03:58.573 "enable_numa": false 00:03:58.573 } 00:03:58.573 } 
00:03:58.573 ] 00:03:58.573 }, 00:03:58.573 { 00:03:58.573 "subsystem": "sock", 00:03:58.573 "config": [ 00:03:58.573 { 00:03:58.573 "method": "sock_set_default_impl", 00:03:58.573 "params": { 00:03:58.573 "impl_name": "posix" 00:03:58.573 } 00:03:58.573 }, 00:03:58.573 { 00:03:58.573 "method": "sock_impl_set_options", 00:03:58.573 "params": { 00:03:58.574 "impl_name": "ssl", 00:03:58.574 "recv_buf_size": 4096, 00:03:58.574 "send_buf_size": 4096, 00:03:58.574 "enable_recv_pipe": true, 00:03:58.574 "enable_quickack": false, 00:03:58.574 "enable_placement_id": 0, 00:03:58.574 "enable_zerocopy_send_server": true, 00:03:58.574 "enable_zerocopy_send_client": false, 00:03:58.574 "zerocopy_threshold": 0, 00:03:58.574 "tls_version": 0, 00:03:58.574 "enable_ktls": false 00:03:58.574 } 00:03:58.574 }, 00:03:58.574 { 00:03:58.574 "method": "sock_impl_set_options", 00:03:58.574 "params": { 00:03:58.574 "impl_name": "posix", 00:03:58.574 "recv_buf_size": 2097152, 00:03:58.574 "send_buf_size": 2097152, 00:03:58.574 "enable_recv_pipe": true, 00:03:58.574 "enable_quickack": false, 00:03:58.574 "enable_placement_id": 0, 00:03:58.574 "enable_zerocopy_send_server": true, 00:03:58.574 "enable_zerocopy_send_client": false, 00:03:58.574 "zerocopy_threshold": 0, 00:03:58.574 "tls_version": 0, 00:03:58.574 "enable_ktls": false 00:03:58.574 } 00:03:58.574 } 00:03:58.574 ] 00:03:58.574 }, 00:03:58.574 { 00:03:58.574 "subsystem": "vmd", 00:03:58.574 "config": [] 00:03:58.574 }, 00:03:58.574 { 00:03:58.574 "subsystem": "accel", 00:03:58.574 "config": [ 00:03:58.574 { 00:03:58.574 "method": "accel_set_options", 00:03:58.574 "params": { 00:03:58.574 "small_cache_size": 128, 00:03:58.574 "large_cache_size": 16, 00:03:58.574 "task_count": 2048, 00:03:58.574 "sequence_count": 2048, 00:03:58.574 "buf_count": 2048 00:03:58.574 } 00:03:58.574 } 00:03:58.574 ] 00:03:58.574 }, 00:03:58.574 { 00:03:58.574 "subsystem": "bdev", 00:03:58.574 "config": [ 00:03:58.574 { 00:03:58.574 "method": "bdev_set_options", 00:03:58.574 "params": { 00:03:58.574 "bdev_io_pool_size": 65535, 00:03:58.574 "bdev_io_cache_size": 256, 00:03:58.574 "bdev_auto_examine": true, 00:03:58.574 "iobuf_small_cache_size": 128, 00:03:58.574 "iobuf_large_cache_size": 16 00:03:58.574 } 00:03:58.574 }, 00:03:58.574 { 00:03:58.574 "method": "bdev_raid_set_options", 00:03:58.574 "params": { 00:03:58.574 "process_window_size_kb": 1024, 00:03:58.574 "process_max_bandwidth_mb_sec": 0 00:03:58.574 } 00:03:58.574 }, 00:03:58.574 { 00:03:58.574 "method": "bdev_iscsi_set_options", 00:03:58.574 "params": { 00:03:58.574 "timeout_sec": 30 00:03:58.574 } 00:03:58.574 }, 00:03:58.574 { 00:03:58.574 "method": "bdev_nvme_set_options", 00:03:58.574 "params": { 00:03:58.574 "action_on_timeout": "none", 00:03:58.574 "timeout_us": 0, 00:03:58.574 "timeout_admin_us": 0, 00:03:58.574 "keep_alive_timeout_ms": 10000, 00:03:58.574 "arbitration_burst": 0, 00:03:58.574 "low_priority_weight": 0, 00:03:58.574 "medium_priority_weight": 0, 00:03:58.574 "high_priority_weight": 0, 00:03:58.574 "nvme_adminq_poll_period_us": 10000, 00:03:58.574 "nvme_ioq_poll_period_us": 0, 00:03:58.574 "io_queue_requests": 0, 00:03:58.574 "delay_cmd_submit": true, 00:03:58.574 "transport_retry_count": 4, 00:03:58.574 "bdev_retry_count": 3, 00:03:58.574 "transport_ack_timeout": 0, 00:03:58.574 "ctrlr_loss_timeout_sec": 0, 00:03:58.574 "reconnect_delay_sec": 0, 00:03:58.574 "fast_io_fail_timeout_sec": 0, 00:03:58.574 "disable_auto_failback": false, 00:03:58.574 "generate_uuids": false, 00:03:58.574 "transport_tos": 
0, 00:03:58.574 "nvme_error_stat": false, 00:03:58.574 "rdma_srq_size": 0, 00:03:58.574 "io_path_stat": false, 00:03:58.574 "allow_accel_sequence": false, 00:03:58.574 "rdma_max_cq_size": 0, 00:03:58.574 "rdma_cm_event_timeout_ms": 0, 00:03:58.574 "dhchap_digests": [ 00:03:58.574 "sha256", 00:03:58.574 "sha384", 00:03:58.574 "sha512" 00:03:58.574 ], 00:03:58.574 "dhchap_dhgroups": [ 00:03:58.574 "null", 00:03:58.574 "ffdhe2048", 00:03:58.574 "ffdhe3072", 00:03:58.574 "ffdhe4096", 00:03:58.574 "ffdhe6144", 00:03:58.574 "ffdhe8192" 00:03:58.574 ], 00:03:58.574 "rdma_umr_per_io": false 00:03:58.574 } 00:03:58.574 }, 00:03:58.574 { 00:03:58.574 "method": "bdev_nvme_set_hotplug", 00:03:58.574 "params": { 00:03:58.574 "period_us": 100000, 00:03:58.574 "enable": false 00:03:58.574 } 00:03:58.574 }, 00:03:58.574 { 00:03:58.574 "method": "bdev_wait_for_examine" 00:03:58.574 } 00:03:58.574 ] 00:03:58.574 }, 00:03:58.574 { 00:03:58.574 "subsystem": "scsi", 00:03:58.574 "config": null 00:03:58.574 }, 00:03:58.574 { 00:03:58.574 "subsystem": "scheduler", 00:03:58.574 "config": [ 00:03:58.574 { 00:03:58.574 "method": "framework_set_scheduler", 00:03:58.574 "params": { 00:03:58.574 "name": "static" 00:03:58.574 } 00:03:58.574 } 00:03:58.574 ] 00:03:58.574 }, 00:03:58.574 { 00:03:58.574 "subsystem": "vhost_scsi", 00:03:58.574 "config": [] 00:03:58.574 }, 00:03:58.574 { 00:03:58.574 "subsystem": "vhost_blk", 00:03:58.574 "config": [] 00:03:58.574 }, 00:03:58.574 { 00:03:58.574 "subsystem": "ublk", 00:03:58.574 "config": [] 00:03:58.574 }, 00:03:58.574 { 00:03:58.574 "subsystem": "nbd", 00:03:58.574 "config": [] 00:03:58.574 }, 00:03:58.574 { 00:03:58.574 "subsystem": "nvmf", 00:03:58.574 "config": [ 00:03:58.574 { 00:03:58.574 "method": "nvmf_set_config", 00:03:58.574 "params": { 00:03:58.574 "discovery_filter": "match_any", 00:03:58.574 "admin_cmd_passthru": { 00:03:58.574 "identify_ctrlr": false 00:03:58.574 }, 00:03:58.574 "dhchap_digests": [ 00:03:58.574 "sha256", 00:03:58.574 "sha384", 00:03:58.574 "sha512" 00:03:58.574 ], 00:03:58.574 "dhchap_dhgroups": [ 00:03:58.574 "null", 00:03:58.574 "ffdhe2048", 00:03:58.574 "ffdhe3072", 00:03:58.574 "ffdhe4096", 00:03:58.574 "ffdhe6144", 00:03:58.574 "ffdhe8192" 00:03:58.574 ] 00:03:58.574 } 00:03:58.574 }, 00:03:58.574 { 00:03:58.574 "method": "nvmf_set_max_subsystems", 00:03:58.574 "params": { 00:03:58.574 "max_subsystems": 1024 00:03:58.574 } 00:03:58.574 }, 00:03:58.574 { 00:03:58.574 "method": "nvmf_set_crdt", 00:03:58.574 "params": { 00:03:58.574 "crdt1": 0, 00:03:58.574 "crdt2": 0, 00:03:58.574 "crdt3": 0 00:03:58.574 } 00:03:58.574 }, 00:03:58.574 { 00:03:58.574 "method": "nvmf_create_transport", 00:03:58.574 "params": { 00:03:58.574 "trtype": "TCP", 00:03:58.574 "max_queue_depth": 128, 00:03:58.574 "max_io_qpairs_per_ctrlr": 127, 00:03:58.574 "in_capsule_data_size": 4096, 00:03:58.574 "max_io_size": 131072, 00:03:58.574 "io_unit_size": 131072, 00:03:58.574 "max_aq_depth": 128, 00:03:58.574 "num_shared_buffers": 511, 00:03:58.574 "buf_cache_size": 4294967295, 00:03:58.574 "dif_insert_or_strip": false, 00:03:58.574 "zcopy": false, 00:03:58.574 "c2h_success": true, 00:03:58.574 "sock_priority": 0, 00:03:58.574 "abort_timeout_sec": 1, 00:03:58.574 "ack_timeout": 0, 00:03:58.574 "data_wr_pool_size": 0 00:03:58.574 } 00:03:58.574 } 00:03:58.574 ] 00:03:58.574 }, 00:03:58.574 { 00:03:58.574 "subsystem": "iscsi", 00:03:58.574 "config": [ 00:03:58.574 { 00:03:58.574 "method": "iscsi_set_options", 00:03:58.574 "params": { 00:03:58.574 "node_base": 
"iqn.2016-06.io.spdk", 00:03:58.574 "max_sessions": 128, 00:03:58.574 "max_connections_per_session": 2, 00:03:58.574 "max_queue_depth": 64, 00:03:58.574 "default_time2wait": 2, 00:03:58.574 "default_time2retain": 20, 00:03:58.574 "first_burst_length": 8192, 00:03:58.574 "immediate_data": true, 00:03:58.574 "allow_duplicated_isid": false, 00:03:58.574 "error_recovery_level": 0, 00:03:58.574 "nop_timeout": 60, 00:03:58.574 "nop_in_interval": 30, 00:03:58.574 "disable_chap": false, 00:03:58.574 "require_chap": false, 00:03:58.574 "mutual_chap": false, 00:03:58.574 "chap_group": 0, 00:03:58.574 "max_large_datain_per_connection": 64, 00:03:58.574 "max_r2t_per_connection": 4, 00:03:58.574 "pdu_pool_size": 36864, 00:03:58.574 "immediate_data_pool_size": 16384, 00:03:58.574 "data_out_pool_size": 2048 00:03:58.574 } 00:03:58.574 } 00:03:58.574 ] 00:03:58.574 } 00:03:58.574 ] 00:03:58.574 } 00:03:58.574 09:42:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:58.574 09:42:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 4066646 00:03:58.574 09:42:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 4066646 ']' 00:03:58.574 09:42:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 4066646 00:03:58.574 09:42:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:03:58.574 09:42:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:58.574 09:42:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4066646 00:03:58.834 09:42:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:58.834 09:42:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:58.834 09:42:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4066646' 00:03:58.834 killing process with pid 4066646 00:03:58.834 09:42:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 4066646 00:03:58.834 09:42:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 4066646 00:03:59.093 09:42:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=4066834 00:03:59.093 09:42:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:59.093 09:42:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:04.367 09:42:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 4066834 00:04:04.367 09:42:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 4066834 ']' 00:04:04.367 09:42:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 4066834 00:04:04.367 09:42:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:04.367 09:42:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:04.367 09:42:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4066834 00:04:04.367 09:42:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:04.367 09:42:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:04.367 09:42:13 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4066834' 00:04:04.367 killing process with pid 4066834 00:04:04.367 09:42:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 4066834 00:04:04.367 09:42:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 4066834 00:04:04.367 09:42:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:04.367 09:42:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:04.367 00:04:04.367 real 0m6.280s 00:04:04.367 user 0m5.942s 00:04:04.367 sys 0m0.626s 00:04:04.367 09:42:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.367 09:42:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:04.367 ************************************ 00:04:04.367 END TEST skip_rpc_with_json 00:04:04.367 ************************************ 00:04:04.367 09:42:13 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:04.367 09:42:13 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:04.367 09:42:13 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.367 09:42:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.367 ************************************ 00:04:04.367 START TEST skip_rpc_with_delay 00:04:04.367 ************************************ 00:04:04.367 09:42:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:04.367 09:42:13 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:04.367 09:42:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:04.367 09:42:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:04.367 09:42:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:04.367 09:42:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:04.367 09:42:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:04.367 09:42:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:04.367 09:42:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:04.367 09:42:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:04.367 09:42:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:04.367 09:42:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:04.367 09:42:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:04.626 [2024-12-11 09:42:13.946368] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:04.626 09:42:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:04.626 09:42:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:04.626 09:42:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:04.626 09:42:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:04.626 00:04:04.626 real 0m0.071s 00:04:04.626 user 0m0.047s 00:04:04.626 sys 0m0.023s 00:04:04.626 09:42:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.626 09:42:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:04.626 ************************************ 00:04:04.626 END TEST skip_rpc_with_delay 00:04:04.626 ************************************ 00:04:04.626 09:42:13 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:04.626 09:42:13 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:04.626 09:42:13 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:04.626 09:42:13 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:04.626 09:42:13 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.626 09:42:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.626 ************************************ 00:04:04.626 START TEST exit_on_failed_rpc_init 00:04:04.626 ************************************ 00:04:04.626 09:42:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:04.626 09:42:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=4068165 00:04:04.626 09:42:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 4068165 00:04:04.626 09:42:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:04.626 09:42:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 4068165 ']' 00:04:04.626 09:42:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:04.626 09:42:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:04.627 09:42:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:04.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:04.627 09:42:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:04.627 09:42:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:04.627 [2024-12-11 09:42:14.082597] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
00:04:04.627 [2024-12-11 09:42:14.082640] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4068165 ] 00:04:04.627 [2024-12-11 09:42:14.161687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:04.886 [2024-12-11 09:42:14.202604] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:04.886 09:42:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:04.886 09:42:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:04.886 09:42:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:04.886 09:42:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:04.886 09:42:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:04.886 09:42:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:04.886 09:42:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:04.886 09:42:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:04.886 09:42:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:04.886 09:42:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:04.886 09:42:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:04.886 09:42:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:04.886 09:42:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:04.886 09:42:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:04.886 09:42:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:05.145 [2024-12-11 09:42:14.479980] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:04:05.145 [2024-12-11 09:42:14.480024] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4068239 ] 00:04:05.145 [2024-12-11 09:42:14.556651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.145 [2024-12-11 09:42:14.595560] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:05.145 [2024-12-11 09:42:14.595612] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
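This socket collision is the point of exit_on_failed_rpc_init: a second spdk_tgt reusing the default /var/tmp/spdk.sock must refuse to start and exit non-zero (the trace then maps exit status 234 through 106 down to 1). A condensed sketch, with waitforlisten approximated by a sleep:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    "$SPDK/build/bin/spdk_tgt" -m 0x1 &
    first_pid=$!
    sleep 2    # crude stand-in for the autotest_common.sh waitforlisten helper

    # Second instance binds the same default RPC socket and must fail
    if "$SPDK/build/bin/spdk_tgt" -m 0x2; then
        echo "unexpected: second target started despite socket collision" >&2
        kill "$first_pid"; exit 1
    fi

    kill "$first_pid" && wait "$first_pid"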
00:04:05.145 [2024-12-11 09:42:14.595621] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:05.145 [2024-12-11 09:42:14.595626] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:05.145 09:42:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:05.145 09:42:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:05.145 09:42:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:05.145 09:42:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:05.145 09:42:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:05.145 09:42:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:05.145 09:42:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:05.145 09:42:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 4068165 00:04:05.145 09:42:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 4068165 ']' 00:04:05.145 09:42:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 4068165 00:04:05.145 09:42:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:05.145 09:42:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:05.145 09:42:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4068165 00:04:05.145 09:42:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:05.145 09:42:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:05.145 09:42:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4068165' 00:04:05.145 killing process with pid 4068165 00:04:05.145 09:42:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 4068165 00:04:05.145 09:42:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 4068165 00:04:05.712 00:04:05.712 real 0m0.951s 00:04:05.712 user 0m1.017s 00:04:05.712 sys 0m0.390s 00:04:05.712 09:42:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:05.712 09:42:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:05.712 ************************************ 00:04:05.712 END TEST exit_on_failed_rpc_init 00:04:05.712 ************************************ 00:04:05.712 09:42:15 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:05.712 00:04:05.712 real 0m13.129s 00:04:05.712 user 0m12.334s 00:04:05.712 sys 0m1.600s 00:04:05.712 09:42:15 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:05.712 09:42:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:05.712 ************************************ 00:04:05.712 END TEST skip_rpc 00:04:05.712 ************************************ 00:04:05.712 09:42:15 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:05.712 09:42:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:05.712 09:42:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:05.712 09:42:15 -- 
common/autotest_common.sh@10 -- # set +x 00:04:05.712 ************************************ 00:04:05.712 START TEST rpc_client 00:04:05.712 ************************************ 00:04:05.712 09:42:15 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:05.712 * Looking for test storage... 00:04:05.712 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:05.712 09:42:15 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:05.712 09:42:15 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:05.712 09:42:15 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:05.712 09:42:15 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:05.712 09:42:15 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:05.712 09:42:15 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:05.712 09:42:15 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:05.712 09:42:15 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:05.712 09:42:15 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:05.712 09:42:15 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:05.712 09:42:15 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:05.712 09:42:15 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:05.712 09:42:15 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:05.712 09:42:15 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:05.712 09:42:15 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:05.712 09:42:15 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:05.712 09:42:15 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:05.712 09:42:15 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:05.712 09:42:15 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:05.712 09:42:15 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:05.712 09:42:15 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:05.712 09:42:15 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:05.712 09:42:15 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:05.712 09:42:15 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:05.712 09:42:15 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:05.712 09:42:15 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:05.712 09:42:15 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:05.712 09:42:15 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:05.712 09:42:15 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:05.712 09:42:15 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:05.712 09:42:15 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:05.712 09:42:15 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:05.712 09:42:15 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:05.712 09:42:15 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:05.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.712 --rc genhtml_branch_coverage=1 00:04:05.712 --rc genhtml_function_coverage=1 00:04:05.712 --rc genhtml_legend=1 00:04:05.712 --rc geninfo_all_blocks=1 00:04:05.712 --rc geninfo_unexecuted_blocks=1 00:04:05.712 00:04:05.712 ' 00:04:05.712 09:42:15 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:05.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.712 --rc genhtml_branch_coverage=1 00:04:05.712 --rc genhtml_function_coverage=1 00:04:05.713 --rc genhtml_legend=1 00:04:05.713 --rc geninfo_all_blocks=1 00:04:05.713 --rc geninfo_unexecuted_blocks=1 00:04:05.713 00:04:05.713 ' 00:04:05.713 09:42:15 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:05.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.713 --rc genhtml_branch_coverage=1 00:04:05.713 --rc genhtml_function_coverage=1 00:04:05.713 --rc genhtml_legend=1 00:04:05.713 --rc geninfo_all_blocks=1 00:04:05.713 --rc geninfo_unexecuted_blocks=1 00:04:05.713 00:04:05.713 ' 00:04:05.713 09:42:15 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:05.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.713 --rc genhtml_branch_coverage=1 00:04:05.713 --rc genhtml_function_coverage=1 00:04:05.713 --rc genhtml_legend=1 00:04:05.713 --rc geninfo_all_blocks=1 00:04:05.713 --rc geninfo_unexecuted_blocks=1 00:04:05.713 00:04:05.713 ' 00:04:05.713 09:42:15 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:05.713 OK 00:04:05.975 09:42:15 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:05.975 00:04:05.975 real 0m0.201s 00:04:05.975 user 0m0.117s 00:04:05.975 sys 0m0.096s 00:04:05.975 09:42:15 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:05.975 09:42:15 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:05.975 ************************************ 00:04:05.975 END TEST rpc_client 00:04:05.975 ************************************ 00:04:05.975 09:42:15 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
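The lt 1.15 2 sequence that precedes each of these test bodies is scripts/common.sh comparing the installed lcov version against 1.15 field by field (split on '.', '-', ':'). A self-contained sketch of the same comparison, not the exact cmp_versions implementation:

    # Field-wise version compare in the spirit of scripts/common.sh cmp_versions.
    version_lt() {
        local IFS='.-:' ver1 ver2 v max
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1    # equal versions are not less-than
    }

    version_lt 1.15 2 && echo "lcov < 2: add branch/function coverage flags"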
00:04:05.975 09:42:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:05.975 09:42:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:05.975 09:42:15 -- common/autotest_common.sh@10 -- # set +x 00:04:05.975 ************************************ 00:04:05.975 START TEST json_config 00:04:05.975 ************************************ 00:04:05.975 09:42:15 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:05.975 09:42:15 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:05.975 09:42:15 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:04:05.975 09:42:15 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:05.975 09:42:15 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:05.975 09:42:15 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:05.975 09:42:15 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:05.975 09:42:15 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:05.975 09:42:15 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:05.975 09:42:15 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:05.975 09:42:15 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:05.975 09:42:15 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:05.975 09:42:15 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:05.975 09:42:15 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:05.975 09:42:15 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:05.975 09:42:15 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:05.975 09:42:15 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:05.975 09:42:15 json_config -- scripts/common.sh@345 -- # : 1 00:04:05.975 09:42:15 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:05.975 09:42:15 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:05.975 09:42:15 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:05.975 09:42:15 json_config -- scripts/common.sh@353 -- # local d=1 00:04:05.975 09:42:15 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:05.975 09:42:15 json_config -- scripts/common.sh@355 -- # echo 1 00:04:05.975 09:42:15 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:05.975 09:42:15 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:05.975 09:42:15 json_config -- scripts/common.sh@353 -- # local d=2 00:04:05.975 09:42:15 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:05.975 09:42:15 json_config -- scripts/common.sh@355 -- # echo 2 00:04:05.975 09:42:15 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:05.975 09:42:15 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:05.975 09:42:15 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:05.975 09:42:15 json_config -- scripts/common.sh@368 -- # return 0 00:04:05.975 09:42:15 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:05.975 09:42:15 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:05.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.975 --rc genhtml_branch_coverage=1 00:04:05.975 --rc genhtml_function_coverage=1 00:04:05.975 --rc genhtml_legend=1 00:04:05.975 --rc geninfo_all_blocks=1 00:04:05.975 --rc geninfo_unexecuted_blocks=1 00:04:05.975 00:04:05.975 ' 00:04:05.975 09:42:15 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:05.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.975 --rc genhtml_branch_coverage=1 00:04:05.975 --rc genhtml_function_coverage=1 00:04:05.975 --rc genhtml_legend=1 00:04:05.975 --rc geninfo_all_blocks=1 00:04:05.975 --rc geninfo_unexecuted_blocks=1 00:04:05.975 00:04:05.975 ' 00:04:05.975 09:42:15 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:05.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.975 --rc genhtml_branch_coverage=1 00:04:05.975 --rc genhtml_function_coverage=1 00:04:05.975 --rc genhtml_legend=1 00:04:05.975 --rc geninfo_all_blocks=1 00:04:05.975 --rc geninfo_unexecuted_blocks=1 00:04:05.975 00:04:05.975 ' 00:04:05.975 09:42:15 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:05.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.975 --rc genhtml_branch_coverage=1 00:04:05.975 --rc genhtml_function_coverage=1 00:04:05.975 --rc genhtml_legend=1 00:04:05.975 --rc geninfo_all_blocks=1 00:04:05.975 --rc geninfo_unexecuted_blocks=1 00:04:05.975 00:04:05.975 ' 00:04:05.975 09:42:15 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:05.975 09:42:15 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:05.975 09:42:15 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:05.975 09:42:15 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:05.975 09:42:15 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:05.975 09:42:15 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:05.975 09:42:15 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:05.975 09:42:15 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:05.975 09:42:15 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:04:05.975 09:42:15 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:05.975 09:42:15 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:05.975 09:42:15 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:05.975 09:42:15 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:04:05.975 09:42:15 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:04:05.975 09:42:15 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:05.975 09:42:15 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:05.975 09:42:15 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:05.975 09:42:15 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:05.975 09:42:15 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:05.975 09:42:15 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:05.975 09:42:15 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:05.975 09:42:15 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:05.975 09:42:15 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:05.976 09:42:15 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:05.976 09:42:15 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:05.976 09:42:15 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:05.976 09:42:15 json_config -- paths/export.sh@5 -- # export PATH 00:04:05.976 09:42:15 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:05.976 09:42:15 json_config -- nvmf/common.sh@51 -- # : 0 00:04:05.976 09:42:15 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:05.976 09:42:15 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
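A note on the '[: : integer expression expected' message that nvmf/common.sh is about to emit at line 33: [ "$VAR" -eq 1 ] fails when the variable is empty, because test receives an empty string where it needs an integer. The script still proceeds (the failed '[' simply returns non-zero), but a guarded form avoids the noise; shown with a hypothetical variable name, since the offending one is not visible in this trace:

    some_flag=""                                  # empty in this CI environment
    # [ "$some_flag" -eq 1 ]                      # -> [: : integer expression expected
    [ "${some_flag:-0}" -eq 1 ] && echo enabled   # defaults empty to 0, no error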
00:04:05.976 09:42:15 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:05.976 09:42:15 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:05.976 09:42:15 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:05.976 09:42:15 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:05.976 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:05.976 09:42:15 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:05.976 09:42:15 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:05.976 09:42:15 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:05.976 09:42:15 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:05.976 09:42:15 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:05.976 09:42:15 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:05.976 09:42:15 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:05.976 09:42:15 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:05.976 09:42:15 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:05.976 09:42:15 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:05.976 09:42:15 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:05.976 09:42:15 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:05.976 09:42:15 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:05.976 09:42:15 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:05.976 09:42:15 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:05.976 09:42:15 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:05.976 09:42:15 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:05.976 09:42:15 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:05.976 09:42:15 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:05.976 INFO: JSON configuration test init 00:04:05.976 09:42:15 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:05.976 09:42:15 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:05.976 09:42:15 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:05.976 09:42:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:06.235 09:42:15 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:06.235 09:42:15 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:06.235 09:42:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:06.235 09:42:15 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:06.235 09:42:15 json_config -- 
json_config/common.sh@9 -- # local app=target 00:04:06.235 09:42:15 json_config -- json_config/common.sh@10 -- # shift 00:04:06.235 09:42:15 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:06.235 09:42:15 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:06.235 09:42:15 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:06.235 09:42:15 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:06.235 09:42:15 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:06.235 09:42:15 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=4068588 00:04:06.235 09:42:15 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:06.235 Waiting for target to run... 00:04:06.235 09:42:15 json_config -- json_config/common.sh@25 -- # waitforlisten 4068588 /var/tmp/spdk_tgt.sock 00:04:06.235 09:42:15 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:06.235 09:42:15 json_config -- common/autotest_common.sh@835 -- # '[' -z 4068588 ']' 00:04:06.235 09:42:15 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:06.235 09:42:15 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:06.235 09:42:15 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:06.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:06.235 09:42:15 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:06.235 09:42:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:06.235 [2024-12-11 09:42:15.607994] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
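Because the target is launched with --wait-for-rpc, its subsystems stay uninitialized until configuration arrives over /var/tmp/spdk_tgt.sock. The load_config call coming up feeds it the output of gen_nvme.sh; a rough by-hand equivalent (tgt_rpc is just rpc.py pointed at the target socket):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

    # Generate a bdev config for the local NVMe drives and load it as JSON
    "$SPDK/scripts/gen_nvme.sh" --json-with-subsystems | $RPC load_config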
00:04:06.236 [2024-12-11 09:42:15.608044] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4068588 ] 00:04:06.494 [2024-12-11 09:42:15.892552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.494 [2024-12-11 09:42:15.924997] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.062 09:42:16 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:07.062 09:42:16 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:07.062 09:42:16 json_config -- json_config/common.sh@26 -- # echo '' 00:04:07.062 00:04:07.062 09:42:16 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:07.062 09:42:16 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:07.062 09:42:16 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:07.062 09:42:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:07.062 09:42:16 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:07.062 09:42:16 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:07.062 09:42:16 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:07.062 09:42:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:07.062 09:42:16 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:07.062 09:42:16 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:07.062 09:42:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:10.350 09:42:19 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:10.350 09:42:19 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:10.350 09:42:19 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:10.350 09:42:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:10.350 09:42:19 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:10.350 09:42:19 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:10.350 09:42:19 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:10.350 09:42:19 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:10.350 09:42:19 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:10.350 09:42:19 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:10.350 09:42:19 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:10.350 09:42:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:10.350 09:42:19 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:10.350 09:42:19 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:10.350 09:42:19 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:10.350 09:42:19 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:10.350 09:42:19 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:10.350 09:42:19 json_config -- json_config/json_config.sh@54 -- # sort 00:04:10.350 09:42:19 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:10.350 09:42:19 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:10.350 09:42:19 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:10.350 09:42:19 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:10.350 09:42:19 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:10.350 09:42:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:10.350 09:42:19 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:10.350 09:42:19 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:10.350 09:42:19 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:10.350 09:42:19 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:10.350 09:42:19 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:10.350 09:42:19 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:10.350 09:42:19 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:10.350 09:42:19 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:10.350 09:42:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:10.350 09:42:19 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:10.350 09:42:19 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:10.350 09:42:19 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:10.350 09:42:19 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:10.350 09:42:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:10.609 MallocForNvmf0 00:04:10.609 09:42:20 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:10.609 09:42:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:10.868 MallocForNvmf1 00:04:10.868 09:42:20 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:10.868 09:42:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:10.868 [2024-12-11 09:42:20.389261] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:10.868 09:42:20 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:10.868 09:42:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:11.126 09:42:20 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:11.126 09:42:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:11.385 09:42:20 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:11.385 09:42:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:11.644 09:42:21 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:11.644 09:42:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:11.644 [2024-12-11 09:42:21.175647] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:11.644 09:42:21 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:11.644 09:42:21 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:11.644 09:42:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:11.902 09:42:21 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:11.902 09:42:21 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:11.902 09:42:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:11.902 09:42:21 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:11.902 09:42:21 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:11.902 09:42:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:11.902 MallocBdevForConfigChangeCheck 00:04:11.902 09:42:21 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:11.902 09:42:21 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:11.903 09:42:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:12.161 09:42:21 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:12.161 09:42:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:12.420 09:42:21 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:12.420 INFO: shutting down applications... 
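[Annotation] The subsystem bring-up traced above is a fixed RPC sequence: two malloc bdevs, a TCP transport, one subsystem, two namespaces, one listener. A standalone equivalent is sketched below; the rpc.py path and socket are assumptions, while the NQN, sizes, address and port are taken directly from the trace.

```bash
# Hedged sketch of the nvmf bring-up seen in the trace, issued against a
# running spdk_tgt over its RPC socket.
rpc="/path/to/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

$rpc bdev_malloc_create 8 512 --name MallocForNvmf0    # 8 MB bdev, 512 B blocks
$rpc bdev_malloc_create 4 1024 --name MallocForNvmf1   # 4 MB bdev, 1024 B blocks
$rpc nvmf_create_transport -t tcp -u 8192 -c 0         # TCP transport init
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 127.0.0.1 -s 4420
```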
00:04:12.420 09:42:21 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:12.420 09:42:21 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:12.420 09:42:21 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:12.420 09:42:21 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:14.324 Calling clear_iscsi_subsystem 00:04:14.324 Calling clear_nvmf_subsystem 00:04:14.324 Calling clear_nbd_subsystem 00:04:14.325 Calling clear_ublk_subsystem 00:04:14.325 Calling clear_vhost_blk_subsystem 00:04:14.325 Calling clear_vhost_scsi_subsystem 00:04:14.325 Calling clear_bdev_subsystem 00:04:14.325 09:42:23 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:14.325 09:42:23 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:14.325 09:42:23 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:14.325 09:42:23 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:14.325 09:42:23 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:14.325 09:42:23 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:14.325 09:42:23 json_config -- json_config/json_config.sh@352 -- # break 00:04:14.325 09:42:23 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:14.325 09:42:23 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:14.325 09:42:23 json_config -- json_config/common.sh@31 -- # local app=target 00:04:14.325 09:42:23 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:14.325 09:42:23 json_config -- json_config/common.sh@35 -- # [[ -n 4068588 ]] 00:04:14.325 09:42:23 json_config -- json_config/common.sh@38 -- # kill -SIGINT 4068588 00:04:14.325 09:42:23 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:14.325 09:42:23 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:14.325 09:42:23 json_config -- json_config/common.sh@41 -- # kill -0 4068588 00:04:14.325 09:42:23 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:14.893 09:42:24 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:14.893 09:42:24 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:14.893 09:42:24 json_config -- json_config/common.sh@41 -- # kill -0 4068588 00:04:14.893 09:42:24 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:14.893 09:42:24 json_config -- json_config/common.sh@43 -- # break 00:04:14.893 09:42:24 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:14.893 09:42:24 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:14.893 SPDK target shutdown done 00:04:14.893 09:42:24 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:14.893 INFO: relaunching applications... 
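[Annotation] The shutdown traced above is a SIGINT followed by a liveness poll: kill -0 is retried up to 30 times at 0.5 s intervals, so the target gets roughly 15 seconds to exit cleanly. A sketch mirroring the loop visible in the trace:

```bash
# Hedged sketch of json_config_test_shutdown_app's loop as it appears in
# the trace: SIGINT first, then poll kill -0 until the PID disappears.
app_pid=4068588   # taken from the trace; normally captured at launch time

kill -SIGINT "$app_pid"
for ((i = 0; i < 30; i++)); do
    if ! kill -0 "$app_pid" 2>/dev/null; then
        echo 'SPDK target shutdown done'
        break
    fi
    sleep 0.5
done
```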
00:04:14.893 09:42:24 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:14.893 09:42:24 json_config -- json_config/common.sh@9 -- # local app=target 00:04:14.893 09:42:24 json_config -- json_config/common.sh@10 -- # shift 00:04:14.893 09:42:24 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:14.893 09:42:24 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:14.893 09:42:24 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:14.893 09:42:24 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:14.893 09:42:24 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:14.893 09:42:24 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=4070082 00:04:14.893 09:42:24 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:14.893 Waiting for target to run... 00:04:14.893 09:42:24 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:14.893 09:42:24 json_config -- json_config/common.sh@25 -- # waitforlisten 4070082 /var/tmp/spdk_tgt.sock 00:04:14.893 09:42:24 json_config -- common/autotest_common.sh@835 -- # '[' -z 4070082 ']' 00:04:14.893 09:42:24 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:14.893 09:42:24 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:14.893 09:42:24 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:14.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:14.893 09:42:24 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:14.893 09:42:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:14.893 [2024-12-11 09:42:24.385929] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:04:14.893 [2024-12-11 09:42:24.385987] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4070082 ] 00:04:15.461 [2024-12-11 09:42:24.842445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.461 [2024-12-11 09:42:24.887235] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.751 [2024-12-11 09:42:27.918975] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:18.751 [2024-12-11 09:42:27.951255] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:19.318 09:42:28 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:19.319 09:42:28 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:19.319 09:42:28 json_config -- json_config/common.sh@26 -- # echo '' 00:04:19.319 00:04:19.319 09:42:28 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:19.319 09:42:28 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:19.319 INFO: Checking if target configuration is the same... 
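[Annotation] The relaunched target is booted from spdk_tgt_config.json, and the check that follows compares that file against the live configuration returned by save_config. Both sides are normalized with test/json_config/config_filter.py -method sort before diff -u, so ordering differences cannot cause false positives; later the test deletes MallocBdevForConfigChangeCheck and expects the same comparison to return 1. A standalone sketch is below, using jq -S as a rough stand-in for the real filter (an assumption: jq -S only sorts object keys, while config_filter.py also normalizes array order).

```bash
# Hedged sketch of the diff-based config check traced below. Paths are
# assumptions; jq -S approximates config_filter.py -method sort.
rpc="/path/to/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

$rpc save_config | jq -S . > /tmp/live.json
jq -S . /path/to/spdk/spdk_tgt_config.json > /tmp/boot.json

if diff -u /tmp/boot.json /tmp/live.json; then
    echo 'INFO: JSON config files are the same'
else
    echo 'INFO: configuration change detected.'  # expected after a delete RPC
fi
```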
00:04:19.319 09:42:28 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:19.319 09:42:28 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:19.319 09:42:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:19.319 + '[' 2 -ne 2 ']' 00:04:19.319 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:19.319 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:19.319 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:19.319 +++ basename /dev/fd/62 00:04:19.319 ++ mktemp /tmp/62.XXX 00:04:19.319 + tmp_file_1=/tmp/62.ZiS 00:04:19.319 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:19.319 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:19.319 + tmp_file_2=/tmp/spdk_tgt_config.json.w99 00:04:19.319 + ret=0 00:04:19.319 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:19.578 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:19.578 + diff -u /tmp/62.ZiS /tmp/spdk_tgt_config.json.w99 00:04:19.578 + echo 'INFO: JSON config files are the same' 00:04:19.578 INFO: JSON config files are the same 00:04:19.578 + rm /tmp/62.ZiS /tmp/spdk_tgt_config.json.w99 00:04:19.578 + exit 0 00:04:19.578 09:42:29 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:19.578 09:42:29 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:19.578 INFO: changing configuration and checking if this can be detected... 00:04:19.578 09:42:29 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:19.578 09:42:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:19.837 09:42:29 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:19.837 09:42:29 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:19.837 09:42:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:19.837 + '[' 2 -ne 2 ']' 00:04:19.837 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:19.837 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:19.837 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:19.837 +++ basename /dev/fd/62 00:04:19.837 ++ mktemp /tmp/62.XXX 00:04:19.837 + tmp_file_1=/tmp/62.25W 00:04:19.838 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:19.838 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:19.838 + tmp_file_2=/tmp/spdk_tgt_config.json.pAi 00:04:19.838 + ret=0 00:04:19.838 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:20.097 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:20.097 + diff -u /tmp/62.25W /tmp/spdk_tgt_config.json.pAi 00:04:20.097 + ret=1 00:04:20.097 + echo '=== Start of file: /tmp/62.25W ===' 00:04:20.097 + cat /tmp/62.25W 00:04:20.097 + echo '=== End of file: /tmp/62.25W ===' 00:04:20.097 + echo '' 00:04:20.097 + echo '=== Start of file: /tmp/spdk_tgt_config.json.pAi ===' 00:04:20.097 + cat /tmp/spdk_tgt_config.json.pAi 00:04:20.097 + echo '=== End of file: /tmp/spdk_tgt_config.json.pAi ===' 00:04:20.097 + echo '' 00:04:20.097 + rm /tmp/62.25W /tmp/spdk_tgt_config.json.pAi 00:04:20.097 + exit 1 00:04:20.097 09:42:29 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:20.097 INFO: configuration change detected. 00:04:20.097 09:42:29 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:20.097 09:42:29 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:20.097 09:42:29 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:20.097 09:42:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:20.097 09:42:29 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:20.097 09:42:29 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:20.097 09:42:29 json_config -- json_config/json_config.sh@324 -- # [[ -n 4070082 ]] 00:04:20.097 09:42:29 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:20.097 09:42:29 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:20.097 09:42:29 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:20.097 09:42:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:20.097 09:42:29 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:20.097 09:42:29 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:20.097 09:42:29 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:20.097 09:42:29 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:20.097 09:42:29 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:20.097 09:42:29 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:20.097 09:42:29 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:20.097 09:42:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:20.097 09:42:29 json_config -- json_config/json_config.sh@330 -- # killprocess 4070082 00:04:20.097 09:42:29 json_config -- common/autotest_common.sh@954 -- # '[' -z 4070082 ']' 00:04:20.097 09:42:29 json_config -- common/autotest_common.sh@958 -- # kill -0 4070082 00:04:20.097 09:42:29 json_config -- common/autotest_common.sh@959 -- # uname 00:04:20.097 09:42:29 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:20.097 09:42:29 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4070082 00:04:20.357 09:42:29 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:20.357 09:42:29 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:20.357 09:42:29 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4070082' 00:04:20.357 killing process with pid 4070082 00:04:20.357 09:42:29 json_config -- common/autotest_common.sh@973 -- # kill 4070082 00:04:20.357 09:42:29 json_config -- common/autotest_common.sh@978 -- # wait 4070082 00:04:21.736 09:42:31 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:21.736 09:42:31 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:21.736 09:42:31 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:21.736 09:42:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.736 09:42:31 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:21.736 09:42:31 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:21.736 INFO: Success 00:04:21.736 00:04:21.736 real 0m15.855s 00:04:21.736 user 0m16.484s 00:04:21.736 sys 0m2.548s 00:04:21.736 09:42:31 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.736 09:42:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.736 ************************************ 00:04:21.736 END TEST json_config 00:04:21.736 ************************************ 00:04:21.736 09:42:31 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:21.736 09:42:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.736 09:42:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.736 09:42:31 -- common/autotest_common.sh@10 -- # set +x 00:04:21.736 ************************************ 00:04:21.736 START TEST json_config_extra_key 00:04:21.736 ************************************ 00:04:21.736 09:42:31 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:21.996 09:42:31 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:21.996 09:42:31 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:04:21.996 09:42:31 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:21.996 09:42:31 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:21.996 09:42:31 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:21.996 09:42:31 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:21.996 09:42:31 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:21.996 09:42:31 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:21.996 09:42:31 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:21.996 09:42:31 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:21.996 09:42:31 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:21.996 09:42:31 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:21.996 09:42:31 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:04:21.996 09:42:31 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:21.996 09:42:31 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:21.996 09:42:31 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:21.996 09:42:31 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:21.996 09:42:31 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:21.996 09:42:31 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:21.996 09:42:31 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:21.996 09:42:31 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:21.996 09:42:31 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:21.996 09:42:31 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:21.996 09:42:31 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:21.996 09:42:31 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:21.996 09:42:31 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:21.996 09:42:31 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:21.996 09:42:31 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:21.996 09:42:31 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:21.996 09:42:31 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:21.996 09:42:31 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:21.996 09:42:31 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:21.996 09:42:31 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:21.996 09:42:31 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:21.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.996 --rc genhtml_branch_coverage=1 00:04:21.996 --rc genhtml_function_coverage=1 00:04:21.996 --rc genhtml_legend=1 00:04:21.996 --rc geninfo_all_blocks=1 00:04:21.996 --rc geninfo_unexecuted_blocks=1 00:04:21.996 00:04:21.996 ' 00:04:21.996 09:42:31 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:21.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.996 --rc genhtml_branch_coverage=1 00:04:21.996 --rc genhtml_function_coverage=1 00:04:21.996 --rc genhtml_legend=1 00:04:21.996 --rc geninfo_all_blocks=1 00:04:21.996 --rc geninfo_unexecuted_blocks=1 00:04:21.996 00:04:21.996 ' 00:04:21.996 09:42:31 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:21.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.996 --rc genhtml_branch_coverage=1 00:04:21.996 --rc genhtml_function_coverage=1 00:04:21.996 --rc genhtml_legend=1 00:04:21.996 --rc geninfo_all_blocks=1 00:04:21.996 --rc geninfo_unexecuted_blocks=1 00:04:21.996 00:04:21.996 ' 00:04:21.996 09:42:31 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:21.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.996 --rc genhtml_branch_coverage=1 00:04:21.996 --rc genhtml_function_coverage=1 00:04:21.996 --rc genhtml_legend=1 00:04:21.996 --rc geninfo_all_blocks=1 00:04:21.996 --rc geninfo_unexecuted_blocks=1 00:04:21.996 00:04:21.996 ' 00:04:21.996 09:42:31 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:21.996 09:42:31 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:21.996 09:42:31 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:21.996 09:42:31 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:21.996 09:42:31 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:21.996 09:42:31 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:21.996 09:42:31 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:21.996 09:42:31 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:21.996 09:42:31 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:21.996 09:42:31 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:21.996 09:42:31 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:21.996 09:42:31 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:21.996 09:42:31 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:04:21.996 09:42:31 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:04:21.996 09:42:31 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:21.996 09:42:31 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:21.996 09:42:31 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:21.996 09:42:31 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:21.996 09:42:31 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:21.996 09:42:31 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:21.996 09:42:31 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:21.996 09:42:31 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:21.996 09:42:31 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:21.996 09:42:31 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.996 09:42:31 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.997 09:42:31 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.997 09:42:31 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:21.997 09:42:31 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.997 09:42:31 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:21.997 09:42:31 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:21.997 09:42:31 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:21.997 09:42:31 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:21.997 09:42:31 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:21.997 09:42:31 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:21.997 09:42:31 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:21.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:21.997 09:42:31 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:21.997 09:42:31 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:21.997 09:42:31 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:21.997 09:42:31 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:21.997 09:42:31 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:21.997 09:42:31 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:21.997 09:42:31 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:21.997 09:42:31 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:21.997 09:42:31 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:21.997 09:42:31 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:21.997 09:42:31 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:21.997 09:42:31 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:21.997 09:42:31 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:21.997 09:42:31 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:21.997 INFO: launching applications... 
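[Annotation] The "[: : integer expression expected" line above is a benign logged warning: nvmf/common.sh tests '[' '' -eq 1 ']' against an unset shm-id variable, and -eq rejects the empty string. The extra_key test that launches next boots spdk_tgt directly from a JSON file (--json extra_key.json) instead of configuring it over RPC. A minimal sketch of that launch mode follows; the config content is illustrative only, not the actual extra_key.json.

```bash
# Hedged sketch: boot spdk_tgt from a JSON config rather than via RPC.
# The file written here is an illustrative minimal example (assumption),
# using SPDK's "subsystems" / "config" / "method" / "params" layout.
cat > /tmp/extra_key.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 16384, "block_size": 512 } }
      ]
    }
  ]
}
EOF

/path/to/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
    -r /var/tmp/spdk_tgt.sock --json /tmp/extra_key.json
```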
00:04:21.997 09:42:31 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:21.997 09:42:31 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:21.997 09:42:31 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:21.997 09:42:31 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:21.997 09:42:31 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:21.997 09:42:31 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:21.997 09:42:31 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:21.997 09:42:31 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:21.997 09:42:31 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=4071450 00:04:21.997 09:42:31 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:21.997 Waiting for target to run... 00:04:21.997 09:42:31 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 4071450 /var/tmp/spdk_tgt.sock 00:04:21.997 09:42:31 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 4071450 ']' 00:04:21.997 09:42:31 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:21.997 09:42:31 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:21.997 09:42:31 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:21.997 09:42:31 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:21.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:21.997 09:42:31 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:21.997 09:42:31 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:21.997 [2024-12-11 09:42:31.520927] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:04:21.997 [2024-12-11 09:42:31.520978] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4071450 ] 00:04:22.566 [2024-12-11 09:42:31.976240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.566 [2024-12-11 09:42:32.030852] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.826 09:42:32 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:22.826 09:42:32 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:22.826 09:42:32 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:22.826 00:04:22.826 09:42:32 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:22.826 INFO: shutting down applications... 
00:04:22.826 09:42:32 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:22.826 09:42:32 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:22.826 09:42:32 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:22.826 09:42:32 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 4071450 ]] 00:04:22.826 09:42:32 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 4071450 00:04:22.826 09:42:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:22.826 09:42:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:22.826 09:42:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 4071450 00:04:22.826 09:42:32 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:23.396 09:42:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:23.396 09:42:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:23.396 09:42:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 4071450 00:04:23.396 09:42:32 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:23.396 09:42:32 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:23.396 09:42:32 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:23.396 09:42:32 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:23.396 SPDK target shutdown done 00:04:23.396 09:42:32 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:23.396 Success 00:04:23.396 00:04:23.396 real 0m1.579s 00:04:23.396 user 0m1.188s 00:04:23.396 sys 0m0.578s 00:04:23.396 09:42:32 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:23.396 09:42:32 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:23.396 ************************************ 00:04:23.396 END TEST json_config_extra_key 00:04:23.396 ************************************ 00:04:23.396 09:42:32 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:23.396 09:42:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.396 09:42:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.396 09:42:32 -- common/autotest_common.sh@10 -- # set +x 00:04:23.396 ************************************ 00:04:23.396 START TEST alias_rpc 00:04:23.396 ************************************ 00:04:23.396 09:42:32 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:23.655 * Looking for test storage... 
00:04:23.655 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:23.655 09:42:33 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:23.655 09:42:33 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:23.655 09:42:33 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:23.655 09:42:33 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:23.655 09:42:33 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:23.655 09:42:33 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:23.655 09:42:33 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:23.655 09:42:33 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:23.655 09:42:33 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:23.655 09:42:33 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:23.655 09:42:33 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:23.655 09:42:33 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:23.655 09:42:33 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:23.655 09:42:33 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:23.655 09:42:33 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:23.655 09:42:33 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:23.655 09:42:33 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:23.656 09:42:33 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:23.656 09:42:33 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:23.656 09:42:33 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:23.656 09:42:33 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:23.656 09:42:33 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:23.656 09:42:33 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:23.656 09:42:33 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:23.656 09:42:33 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:23.656 09:42:33 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:23.656 09:42:33 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:23.656 09:42:33 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:23.656 09:42:33 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:23.656 09:42:33 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:23.656 09:42:33 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:23.656 09:42:33 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:23.656 09:42:33 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:23.656 09:42:33 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:23.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.656 --rc genhtml_branch_coverage=1 00:04:23.656 --rc genhtml_function_coverage=1 00:04:23.656 --rc genhtml_legend=1 00:04:23.656 --rc geninfo_all_blocks=1 00:04:23.656 --rc geninfo_unexecuted_blocks=1 00:04:23.656 00:04:23.656 ' 00:04:23.656 09:42:33 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:23.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.656 --rc genhtml_branch_coverage=1 00:04:23.656 --rc genhtml_function_coverage=1 00:04:23.656 --rc genhtml_legend=1 00:04:23.656 --rc geninfo_all_blocks=1 00:04:23.656 --rc geninfo_unexecuted_blocks=1 00:04:23.656 00:04:23.656 ' 00:04:23.656 09:42:33 
alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:23.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.656 --rc genhtml_branch_coverage=1 00:04:23.656 --rc genhtml_function_coverage=1 00:04:23.656 --rc genhtml_legend=1 00:04:23.656 --rc geninfo_all_blocks=1 00:04:23.656 --rc geninfo_unexecuted_blocks=1 00:04:23.656 00:04:23.656 ' 00:04:23.656 09:42:33 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:23.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.656 --rc genhtml_branch_coverage=1 00:04:23.656 --rc genhtml_function_coverage=1 00:04:23.656 --rc genhtml_legend=1 00:04:23.656 --rc geninfo_all_blocks=1 00:04:23.656 --rc geninfo_unexecuted_blocks=1 00:04:23.656 00:04:23.656 ' 00:04:23.656 09:42:33 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:23.656 09:42:33 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=4071847 00:04:23.656 09:42:33 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 4071847 00:04:23.656 09:42:33 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:23.656 09:42:33 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 4071847 ']' 00:04:23.656 09:42:33 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:23.656 09:42:33 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:23.656 09:42:33 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:23.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:23.656 09:42:33 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:23.656 09:42:33 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.656 [2024-12-11 09:42:33.163960] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
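[Annotation] The only functional step of the alias_rpc test, visible in the trace that follows, is rpc.py load_config -i: load_config reads a JSON configuration from stdin, and -i (--include-aliases) additionally registers the deprecated RPC method aliases so configs written with old method names still load. A minimal sketch, with an illustrative (empty) config rather than whatever the test actually feeds in:

```bash
# Hedged sketch of the alias_rpc flow: pipe a config into load_config with
# deprecated-alias support enabled. The config here is illustrative only.
rpc="/path/to/spdk/scripts/rpc.py"

echo '{ "subsystems": [] }' | $rpc load_config -i
```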
00:04:23.656 [2024-12-11 09:42:33.164008] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4071847 ] 00:04:23.915 [2024-12-11 09:42:33.240413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.915 [2024-12-11 09:42:33.278492] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.174 09:42:33 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:24.174 09:42:33 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:24.174 09:42:33 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:24.174 09:42:33 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 4071847 00:04:24.174 09:42:33 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 4071847 ']' 00:04:24.174 09:42:33 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 4071847 00:04:24.174 09:42:33 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:24.174 09:42:33 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:24.174 09:42:33 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4071847 00:04:24.434 09:42:33 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:24.434 09:42:33 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:24.434 09:42:33 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4071847' 00:04:24.434 killing process with pid 4071847 00:04:24.434 09:42:33 alias_rpc -- common/autotest_common.sh@973 -- # kill 4071847 00:04:24.434 09:42:33 alias_rpc -- common/autotest_common.sh@978 -- # wait 4071847 00:04:24.693 00:04:24.693 real 0m1.129s 00:04:24.693 user 0m1.148s 00:04:24.693 sys 0m0.412s 00:04:24.693 09:42:34 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.693 09:42:34 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.693 ************************************ 00:04:24.693 END TEST alias_rpc 00:04:24.693 ************************************ 00:04:24.693 09:42:34 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:24.693 09:42:34 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:24.693 09:42:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:24.693 09:42:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.693 09:42:34 -- common/autotest_common.sh@10 -- # set +x 00:04:24.693 ************************************ 00:04:24.693 START TEST spdkcli_tcp 00:04:24.693 ************************************ 00:04:24.693 09:42:34 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:24.693 * Looking for test storage... 
00:04:24.693 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:24.693 09:42:34 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:24.693 09:42:34 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:04:24.693 09:42:34 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:24.953 09:42:34 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:24.953 09:42:34 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:24.953 09:42:34 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:24.953 09:42:34 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:24.953 09:42:34 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:24.953 09:42:34 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:24.953 09:42:34 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:24.953 09:42:34 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:24.953 09:42:34 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:24.953 09:42:34 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:24.953 09:42:34 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:24.953 09:42:34 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:24.953 09:42:34 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:24.953 09:42:34 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:24.953 09:42:34 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:24.953 09:42:34 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:24.953 09:42:34 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:24.953 09:42:34 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:24.953 09:42:34 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:24.953 09:42:34 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:24.953 09:42:34 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:24.953 09:42:34 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:24.953 09:42:34 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:24.953 09:42:34 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:24.953 09:42:34 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:24.953 09:42:34 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:24.953 09:42:34 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:24.953 09:42:34 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:24.953 09:42:34 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:24.953 09:42:34 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:24.953 09:42:34 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:24.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.953 --rc genhtml_branch_coverage=1 00:04:24.953 --rc genhtml_function_coverage=1 00:04:24.953 --rc genhtml_legend=1 00:04:24.953 --rc geninfo_all_blocks=1 00:04:24.953 --rc geninfo_unexecuted_blocks=1 00:04:24.953 00:04:24.953 ' 00:04:24.953 09:42:34 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:24.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.953 --rc genhtml_branch_coverage=1 00:04:24.953 --rc genhtml_function_coverage=1 00:04:24.953 --rc genhtml_legend=1 00:04:24.953 --rc geninfo_all_blocks=1 00:04:24.953 --rc 
geninfo_unexecuted_blocks=1 00:04:24.953 00:04:24.953 ' 00:04:24.953 09:42:34 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:24.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.953 --rc genhtml_branch_coverage=1 00:04:24.953 --rc genhtml_function_coverage=1 00:04:24.953 --rc genhtml_legend=1 00:04:24.953 --rc geninfo_all_blocks=1 00:04:24.953 --rc geninfo_unexecuted_blocks=1 00:04:24.953 00:04:24.953 ' 00:04:24.953 09:42:34 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:24.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.953 --rc genhtml_branch_coverage=1 00:04:24.953 --rc genhtml_function_coverage=1 00:04:24.953 --rc genhtml_legend=1 00:04:24.953 --rc geninfo_all_blocks=1 00:04:24.953 --rc geninfo_unexecuted_blocks=1 00:04:24.954 00:04:24.954 ' 00:04:24.954 09:42:34 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:24.954 09:42:34 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:24.954 09:42:34 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:24.954 09:42:34 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:24.954 09:42:34 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:24.954 09:42:34 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:24.954 09:42:34 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:24.954 09:42:34 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:24.954 09:42:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:24.954 09:42:34 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=4072073 00:04:24.954 09:42:34 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 4072073 00:04:24.954 09:42:34 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:24.954 09:42:34 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 4072073 ']' 00:04:24.954 09:42:34 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:24.954 09:42:34 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:24.954 09:42:34 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:24.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:24.954 09:42:34 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:24.954 09:42:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:24.954 [2024-12-11 09:42:34.363613] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
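[Annotation] In the trace that follows, the spdkcli_tcp test exercises rpc.py over TCP by bridging 127.0.0.1:9998 to the target's UNIX RPC socket with socat, then calling rpc_get_methods through the bridge (the long method list that follows is that call's output). The -r and -t flags set the connection retry count and per-call timeout, and -s/-p select a TCP address and port instead of a socket path. A sketch of the same arrangement, with paths as assumptions:

```bash
# Hedged sketch of the TCP bridge used by the spdkcli_tcp test.
rpc="/path/to/spdk/scripts/rpc.py"

# Without "fork", socat serves a single connection and then exits.
socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
socat_pid=$!

# -r: connection retries, -t: timeout in seconds, -s/-p: TCP addr and port.
$rpc -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

kill "$socat_pid" 2>/dev/null || true   # may already have exited
```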
00:04:24.954 [2024-12-11 09:42:34.363663] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4072073 ] 00:04:24.954 [2024-12-11 09:42:34.443422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:24.954 [2024-12-11 09:42:34.484997] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:24.954 [2024-12-11 09:42:34.485000] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.891 09:42:35 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:25.891 09:42:35 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:25.891 09:42:35 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=4072153 00:04:25.891 09:42:35 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:25.891 09:42:35 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:25.891 [ 00:04:25.891 "bdev_malloc_delete", 00:04:25.891 "bdev_malloc_create", 00:04:25.891 "bdev_null_resize", 00:04:25.891 "bdev_null_delete", 00:04:25.891 "bdev_null_create", 00:04:25.891 "bdev_nvme_cuse_unregister", 00:04:25.891 "bdev_nvme_cuse_register", 00:04:25.891 "bdev_opal_new_user", 00:04:25.891 "bdev_opal_set_lock_state", 00:04:25.891 "bdev_opal_delete", 00:04:25.891 "bdev_opal_get_info", 00:04:25.891 "bdev_opal_create", 00:04:25.891 "bdev_nvme_opal_revert", 00:04:25.891 "bdev_nvme_opal_init", 00:04:25.891 "bdev_nvme_send_cmd", 00:04:25.891 "bdev_nvme_set_keys", 00:04:25.891 "bdev_nvme_get_path_iostat", 00:04:25.891 "bdev_nvme_get_mdns_discovery_info", 00:04:25.891 "bdev_nvme_stop_mdns_discovery", 00:04:25.891 "bdev_nvme_start_mdns_discovery", 00:04:25.891 "bdev_nvme_set_multipath_policy", 00:04:25.891 "bdev_nvme_set_preferred_path", 00:04:25.891 "bdev_nvme_get_io_paths", 00:04:25.891 "bdev_nvme_remove_error_injection", 00:04:25.891 "bdev_nvme_add_error_injection", 00:04:25.891 "bdev_nvme_get_discovery_info", 00:04:25.891 "bdev_nvme_stop_discovery", 00:04:25.891 "bdev_nvme_start_discovery", 00:04:25.891 "bdev_nvme_get_controller_health_info", 00:04:25.891 "bdev_nvme_disable_controller", 00:04:25.891 "bdev_nvme_enable_controller", 00:04:25.891 "bdev_nvme_reset_controller", 00:04:25.891 "bdev_nvme_get_transport_statistics", 00:04:25.891 "bdev_nvme_apply_firmware", 00:04:25.891 "bdev_nvme_detach_controller", 00:04:25.891 "bdev_nvme_get_controllers", 00:04:25.891 "bdev_nvme_attach_controller", 00:04:25.891 "bdev_nvme_set_hotplug", 00:04:25.891 "bdev_nvme_set_options", 00:04:25.891 "bdev_passthru_delete", 00:04:25.891 "bdev_passthru_create", 00:04:25.891 "bdev_lvol_set_parent_bdev", 00:04:25.891 "bdev_lvol_set_parent", 00:04:25.891 "bdev_lvol_check_shallow_copy", 00:04:25.891 "bdev_lvol_start_shallow_copy", 00:04:25.891 "bdev_lvol_grow_lvstore", 00:04:25.891 "bdev_lvol_get_lvols", 00:04:25.891 "bdev_lvol_get_lvstores", 00:04:25.891 "bdev_lvol_delete", 00:04:25.891 "bdev_lvol_set_read_only", 00:04:25.891 "bdev_lvol_resize", 00:04:25.891 "bdev_lvol_decouple_parent", 00:04:25.891 "bdev_lvol_inflate", 00:04:25.891 "bdev_lvol_rename", 00:04:25.891 "bdev_lvol_clone_bdev", 00:04:25.891 "bdev_lvol_clone", 00:04:25.891 "bdev_lvol_snapshot", 00:04:25.891 "bdev_lvol_create", 00:04:25.891 "bdev_lvol_delete_lvstore", 00:04:25.891 "bdev_lvol_rename_lvstore", 
00:04:25.891 "bdev_lvol_create_lvstore", 00:04:25.891 "bdev_raid_set_options", 00:04:25.891 "bdev_raid_remove_base_bdev", 00:04:25.891 "bdev_raid_add_base_bdev", 00:04:25.891 "bdev_raid_delete", 00:04:25.891 "bdev_raid_create", 00:04:25.891 "bdev_raid_get_bdevs", 00:04:25.891 "bdev_error_inject_error", 00:04:25.891 "bdev_error_delete", 00:04:25.891 "bdev_error_create", 00:04:25.891 "bdev_split_delete", 00:04:25.891 "bdev_split_create", 00:04:25.891 "bdev_delay_delete", 00:04:25.891 "bdev_delay_create", 00:04:25.891 "bdev_delay_update_latency", 00:04:25.891 "bdev_zone_block_delete", 00:04:25.891 "bdev_zone_block_create", 00:04:25.891 "blobfs_create", 00:04:25.891 "blobfs_detect", 00:04:25.891 "blobfs_set_cache_size", 00:04:25.891 "bdev_aio_delete", 00:04:25.891 "bdev_aio_rescan", 00:04:25.891 "bdev_aio_create", 00:04:25.891 "bdev_ftl_set_property", 00:04:25.891 "bdev_ftl_get_properties", 00:04:25.891 "bdev_ftl_get_stats", 00:04:25.891 "bdev_ftl_unmap", 00:04:25.891 "bdev_ftl_unload", 00:04:25.891 "bdev_ftl_delete", 00:04:25.891 "bdev_ftl_load", 00:04:25.891 "bdev_ftl_create", 00:04:25.891 "bdev_virtio_attach_controller", 00:04:25.891 "bdev_virtio_scsi_get_devices", 00:04:25.891 "bdev_virtio_detach_controller", 00:04:25.891 "bdev_virtio_blk_set_hotplug", 00:04:25.891 "bdev_iscsi_delete", 00:04:25.891 "bdev_iscsi_create", 00:04:25.891 "bdev_iscsi_set_options", 00:04:25.891 "accel_error_inject_error", 00:04:25.891 "ioat_scan_accel_module", 00:04:25.891 "dsa_scan_accel_module", 00:04:25.891 "iaa_scan_accel_module", 00:04:25.891 "vfu_virtio_create_fs_endpoint", 00:04:25.891 "vfu_virtio_create_scsi_endpoint", 00:04:25.891 "vfu_virtio_scsi_remove_target", 00:04:25.891 "vfu_virtio_scsi_add_target", 00:04:25.891 "vfu_virtio_create_blk_endpoint", 00:04:25.891 "vfu_virtio_delete_endpoint", 00:04:25.891 "keyring_file_remove_key", 00:04:25.891 "keyring_file_add_key", 00:04:25.891 "keyring_linux_set_options", 00:04:25.891 "fsdev_aio_delete", 00:04:25.891 "fsdev_aio_create", 00:04:25.891 "iscsi_get_histogram", 00:04:25.891 "iscsi_enable_histogram", 00:04:25.891 "iscsi_set_options", 00:04:25.891 "iscsi_get_auth_groups", 00:04:25.891 "iscsi_auth_group_remove_secret", 00:04:25.891 "iscsi_auth_group_add_secret", 00:04:25.891 "iscsi_delete_auth_group", 00:04:25.891 "iscsi_create_auth_group", 00:04:25.891 "iscsi_set_discovery_auth", 00:04:25.891 "iscsi_get_options", 00:04:25.891 "iscsi_target_node_request_logout", 00:04:25.891 "iscsi_target_node_set_redirect", 00:04:25.891 "iscsi_target_node_set_auth", 00:04:25.891 "iscsi_target_node_add_lun", 00:04:25.891 "iscsi_get_stats", 00:04:25.891 "iscsi_get_connections", 00:04:25.891 "iscsi_portal_group_set_auth", 00:04:25.891 "iscsi_start_portal_group", 00:04:25.891 "iscsi_delete_portal_group", 00:04:25.891 "iscsi_create_portal_group", 00:04:25.892 "iscsi_get_portal_groups", 00:04:25.892 "iscsi_delete_target_node", 00:04:25.892 "iscsi_target_node_remove_pg_ig_maps", 00:04:25.892 "iscsi_target_node_add_pg_ig_maps", 00:04:25.892 "iscsi_create_target_node", 00:04:25.892 "iscsi_get_target_nodes", 00:04:25.892 "iscsi_delete_initiator_group", 00:04:25.892 "iscsi_initiator_group_remove_initiators", 00:04:25.892 "iscsi_initiator_group_add_initiators", 00:04:25.892 "iscsi_create_initiator_group", 00:04:25.892 "iscsi_get_initiator_groups", 00:04:25.892 "nvmf_set_crdt", 00:04:25.892 "nvmf_set_config", 00:04:25.892 "nvmf_set_max_subsystems", 00:04:25.892 "nvmf_stop_mdns_prr", 00:04:25.892 "nvmf_publish_mdns_prr", 00:04:25.892 "nvmf_subsystem_get_listeners", 00:04:25.892 
"nvmf_subsystem_get_qpairs", 00:04:25.892 "nvmf_subsystem_get_controllers", 00:04:25.892 "nvmf_get_stats", 00:04:25.892 "nvmf_get_transports", 00:04:25.892 "nvmf_create_transport", 00:04:25.892 "nvmf_get_targets", 00:04:25.892 "nvmf_delete_target", 00:04:25.892 "nvmf_create_target", 00:04:25.892 "nvmf_subsystem_allow_any_host", 00:04:25.892 "nvmf_subsystem_set_keys", 00:04:25.892 "nvmf_subsystem_remove_host", 00:04:25.892 "nvmf_subsystem_add_host", 00:04:25.892 "nvmf_ns_remove_host", 00:04:25.892 "nvmf_ns_add_host", 00:04:25.892 "nvmf_subsystem_remove_ns", 00:04:25.892 "nvmf_subsystem_set_ns_ana_group", 00:04:25.892 "nvmf_subsystem_add_ns", 00:04:25.892 "nvmf_subsystem_listener_set_ana_state", 00:04:25.892 "nvmf_discovery_get_referrals", 00:04:25.892 "nvmf_discovery_remove_referral", 00:04:25.892 "nvmf_discovery_add_referral", 00:04:25.892 "nvmf_subsystem_remove_listener", 00:04:25.892 "nvmf_subsystem_add_listener", 00:04:25.892 "nvmf_delete_subsystem", 00:04:25.892 "nvmf_create_subsystem", 00:04:25.892 "nvmf_get_subsystems", 00:04:25.892 "env_dpdk_get_mem_stats", 00:04:25.892 "nbd_get_disks", 00:04:25.892 "nbd_stop_disk", 00:04:25.892 "nbd_start_disk", 00:04:25.892 "ublk_recover_disk", 00:04:25.892 "ublk_get_disks", 00:04:25.892 "ublk_stop_disk", 00:04:25.892 "ublk_start_disk", 00:04:25.892 "ublk_destroy_target", 00:04:25.892 "ublk_create_target", 00:04:25.892 "virtio_blk_create_transport", 00:04:25.892 "virtio_blk_get_transports", 00:04:25.892 "vhost_controller_set_coalescing", 00:04:25.892 "vhost_get_controllers", 00:04:25.892 "vhost_delete_controller", 00:04:25.892 "vhost_create_blk_controller", 00:04:25.892 "vhost_scsi_controller_remove_target", 00:04:25.892 "vhost_scsi_controller_add_target", 00:04:25.892 "vhost_start_scsi_controller", 00:04:25.892 "vhost_create_scsi_controller", 00:04:25.892 "thread_set_cpumask", 00:04:25.892 "scheduler_set_options", 00:04:25.892 "framework_get_governor", 00:04:25.892 "framework_get_scheduler", 00:04:25.892 "framework_set_scheduler", 00:04:25.892 "framework_get_reactors", 00:04:25.892 "thread_get_io_channels", 00:04:25.892 "thread_get_pollers", 00:04:25.892 "thread_get_stats", 00:04:25.892 "framework_monitor_context_switch", 00:04:25.892 "spdk_kill_instance", 00:04:25.892 "log_enable_timestamps", 00:04:25.892 "log_get_flags", 00:04:25.892 "log_clear_flag", 00:04:25.892 "log_set_flag", 00:04:25.892 "log_get_level", 00:04:25.892 "log_set_level", 00:04:25.892 "log_get_print_level", 00:04:25.892 "log_set_print_level", 00:04:25.892 "framework_enable_cpumask_locks", 00:04:25.892 "framework_disable_cpumask_locks", 00:04:25.892 "framework_wait_init", 00:04:25.892 "framework_start_init", 00:04:25.892 "scsi_get_devices", 00:04:25.892 "bdev_get_histogram", 00:04:25.892 "bdev_enable_histogram", 00:04:25.892 "bdev_set_qos_limit", 00:04:25.892 "bdev_set_qd_sampling_period", 00:04:25.892 "bdev_get_bdevs", 00:04:25.892 "bdev_reset_iostat", 00:04:25.892 "bdev_get_iostat", 00:04:25.892 "bdev_examine", 00:04:25.892 "bdev_wait_for_examine", 00:04:25.892 "bdev_set_options", 00:04:25.892 "accel_get_stats", 00:04:25.892 "accel_set_options", 00:04:25.892 "accel_set_driver", 00:04:25.892 "accel_crypto_key_destroy", 00:04:25.892 "accel_crypto_keys_get", 00:04:25.892 "accel_crypto_key_create", 00:04:25.892 "accel_assign_opc", 00:04:25.892 "accel_get_module_info", 00:04:25.892 "accel_get_opc_assignments", 00:04:25.892 "vmd_rescan", 00:04:25.892 "vmd_remove_device", 00:04:25.892 "vmd_enable", 00:04:25.892 "sock_get_default_impl", 00:04:25.892 "sock_set_default_impl", 
00:04:25.892 "sock_impl_set_options", 00:04:25.892 "sock_impl_get_options", 00:04:25.892 "iobuf_get_stats", 00:04:25.892 "iobuf_set_options", 00:04:25.892 "keyring_get_keys", 00:04:25.892 "vfu_tgt_set_base_path", 00:04:25.892 "framework_get_pci_devices", 00:04:25.892 "framework_get_config", 00:04:25.892 "framework_get_subsystems", 00:04:25.892 "fsdev_set_opts", 00:04:25.892 "fsdev_get_opts", 00:04:25.892 "trace_get_info", 00:04:25.892 "trace_get_tpoint_group_mask", 00:04:25.892 "trace_disable_tpoint_group", 00:04:25.892 "trace_enable_tpoint_group", 00:04:25.892 "trace_clear_tpoint_mask", 00:04:25.892 "trace_set_tpoint_mask", 00:04:25.892 "notify_get_notifications", 00:04:25.892 "notify_get_types", 00:04:25.892 "spdk_get_version", 00:04:25.892 "rpc_get_methods" 00:04:25.892 ] 00:04:25.892 09:42:35 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:25.892 09:42:35 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:25.892 09:42:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:25.892 09:42:35 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:25.892 09:42:35 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 4072073 00:04:25.892 09:42:35 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 4072073 ']' 00:04:25.892 09:42:35 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 4072073 00:04:25.892 09:42:35 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:25.892 09:42:35 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:25.892 09:42:35 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4072073 00:04:26.151 09:42:35 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:26.151 09:42:35 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:26.151 09:42:35 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4072073' 00:04:26.151 killing process with pid 4072073 00:04:26.151 09:42:35 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 4072073 00:04:26.151 09:42:35 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 4072073 00:04:26.411 00:04:26.411 real 0m1.637s 00:04:26.411 user 0m3.018s 00:04:26.411 sys 0m0.498s 00:04:26.411 09:42:35 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.411 09:42:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:26.411 ************************************ 00:04:26.411 END TEST spdkcli_tcp 00:04:26.411 ************************************ 00:04:26.411 09:42:35 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:26.411 09:42:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:26.411 09:42:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.411 09:42:35 -- common/autotest_common.sh@10 -- # set +x 00:04:26.411 ************************************ 00:04:26.411 START TEST dpdk_mem_utility 00:04:26.411 ************************************ 00:04:26.411 09:42:35 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:26.411 * Looking for test storage... 
00:04:26.411 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:26.411 09:42:35 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:26.411 09:42:35 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:04:26.411 09:42:35 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:26.671 09:42:36 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:26.671 09:42:36 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:26.671 09:42:36 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:26.671 09:42:36 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:26.671 09:42:36 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:26.671 09:42:36 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:26.671 09:42:36 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:26.671 09:42:36 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:26.671 09:42:36 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:26.671 09:42:36 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:26.671 09:42:36 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:26.671 09:42:36 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:26.671 09:42:36 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:26.671 09:42:36 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:26.671 09:42:36 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:26.671 09:42:36 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:26.671 09:42:36 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:26.671 09:42:36 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:26.671 09:42:36 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:26.671 09:42:36 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:26.671 09:42:36 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:26.671 09:42:36 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:26.671 09:42:36 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:26.671 09:42:36 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:26.671 09:42:36 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:26.671 09:42:36 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:26.671 09:42:36 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:26.671 09:42:36 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:26.671 09:42:36 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:26.671 09:42:36 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:26.671 09:42:36 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:26.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.671 --rc genhtml_branch_coverage=1 00:04:26.671 --rc genhtml_function_coverage=1 00:04:26.671 --rc genhtml_legend=1 00:04:26.671 --rc geninfo_all_blocks=1 00:04:26.671 --rc geninfo_unexecuted_blocks=1 00:04:26.671 00:04:26.671 ' 00:04:26.671 09:42:36 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:26.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.671 --rc 
genhtml_branch_coverage=1 00:04:26.671 --rc genhtml_function_coverage=1 00:04:26.671 --rc genhtml_legend=1 00:04:26.671 --rc geninfo_all_blocks=1 00:04:26.671 --rc geninfo_unexecuted_blocks=1 00:04:26.671 00:04:26.671 ' 00:04:26.671 09:42:36 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:26.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.671 --rc genhtml_branch_coverage=1 00:04:26.671 --rc genhtml_function_coverage=1 00:04:26.671 --rc genhtml_legend=1 00:04:26.671 --rc geninfo_all_blocks=1 00:04:26.671 --rc geninfo_unexecuted_blocks=1 00:04:26.671 00:04:26.671 ' 00:04:26.671 09:42:36 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:26.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.671 --rc genhtml_branch_coverage=1 00:04:26.671 --rc genhtml_function_coverage=1 00:04:26.671 --rc genhtml_legend=1 00:04:26.671 --rc geninfo_all_blocks=1 00:04:26.671 --rc geninfo_unexecuted_blocks=1 00:04:26.671 00:04:26.671 ' 00:04:26.671 09:42:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:26.671 09:42:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=4072447 00:04:26.671 09:42:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 4072447 00:04:26.671 09:42:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:26.671 09:42:36 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 4072447 ']' 00:04:26.671 09:42:36 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:26.671 09:42:36 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:26.671 09:42:36 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:26.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:26.671 09:42:36 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:26.671 09:42:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:26.671 [2024-12-11 09:42:36.071172] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
00:04:26.671 [2024-12-11 09:42:36.071233] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4072447 ] 00:04:26.671 [2024-12-11 09:42:36.151597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.671 [2024-12-11 09:42:36.192140] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.611 09:42:36 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:27.611 09:42:36 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:27.611 09:42:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:27.611 09:42:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:27.611 09:42:36 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.611 09:42:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:27.611 { 00:04:27.611 "filename": "/tmp/spdk_mem_dump.txt" 00:04:27.611 } 00:04:27.611 09:42:36 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.611 09:42:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:27.611 DPDK memory size 818.000000 MiB in 1 heap(s) 00:04:27.611 1 heaps totaling size 818.000000 MiB 00:04:27.611 size: 818.000000 MiB heap id: 0 00:04:27.611 end heaps---------- 00:04:27.611 9 mempools totaling size 603.782043 MiB 00:04:27.611 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:27.611 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:27.611 size: 100.555481 MiB name: bdev_io_4072447 00:04:27.611 size: 50.003479 MiB name: msgpool_4072447 00:04:27.611 size: 36.509338 MiB name: fsdev_io_4072447 00:04:27.611 size: 21.763794 MiB name: PDU_Pool 00:04:27.611 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:27.611 size: 4.133484 MiB name: evtpool_4072447 00:04:27.611 size: 0.026123 MiB name: Session_Pool 00:04:27.611 end mempools------- 00:04:27.611 6 memzones totaling size 4.142822 MiB 00:04:27.611 size: 1.000366 MiB name: RG_ring_0_4072447 00:04:27.611 size: 1.000366 MiB name: RG_ring_1_4072447 00:04:27.611 size: 1.000366 MiB name: RG_ring_4_4072447 00:04:27.611 size: 1.000366 MiB name: RG_ring_5_4072447 00:04:27.611 size: 0.125366 MiB name: RG_ring_2_4072447 00:04:27.611 size: 0.015991 MiB name: RG_ring_3_4072447 00:04:27.611 end memzones------- 00:04:27.611 09:42:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:27.611 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:27.611 list of free elements. 
size: 10.852478 MiB 00:04:27.611 element at address: 0x200019200000 with size: 0.999878 MiB 00:04:27.611 element at address: 0x200019400000 with size: 0.999878 MiB 00:04:27.611 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:27.611 element at address: 0x200032000000 with size: 0.994446 MiB 00:04:27.611 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:27.611 element at address: 0x200012c00000 with size: 0.944275 MiB 00:04:27.611 element at address: 0x200019600000 with size: 0.936584 MiB 00:04:27.611 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:27.611 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:04:27.611 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:27.611 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:27.611 element at address: 0x200019800000 with size: 0.485657 MiB 00:04:27.611 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:27.611 element at address: 0x200028200000 with size: 0.410034 MiB 00:04:27.611 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:27.611 list of standard malloc elements. size: 199.218628 MiB 00:04:27.611 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:27.611 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:27.611 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:27.611 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:04:27.611 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:04:27.611 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:27.611 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:04:27.611 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:27.611 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:04:27.611 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:27.611 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:27.611 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:27.611 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:27.611 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:27.611 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:27.612 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:27.612 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:27.612 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:27.612 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:27.612 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:27.612 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:27.612 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:27.612 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:27.612 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:27.612 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:27.612 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:27.612 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:27.612 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:27.612 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:27.612 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:27.612 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:27.612 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:27.612 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:04:27.612 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:04:27.612 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:04:27.612 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:04:27.612 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:04:27.612 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:04:27.612 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:04:27.612 element at address: 0x200028268f80 with size: 0.000183 MiB 00:04:27.612 element at address: 0x200028269040 with size: 0.000183 MiB 00:04:27.612 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:04:27.612 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:04:27.612 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:04:27.612 list of memzone associated elements. size: 607.928894 MiB 00:04:27.612 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:04:27.612 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:27.612 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:04:27.612 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:27.612 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:04:27.612 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_4072447_0 00:04:27.612 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:27.612 associated memzone info: size: 48.002930 MiB name: MP_msgpool_4072447_0 00:04:27.612 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:27.612 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_4072447_0 00:04:27.612 element at address: 0x2000199be940 with size: 20.255554 MiB 00:04:27.612 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:27.612 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:04:27.612 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:27.612 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:27.612 associated memzone info: size: 3.000122 MiB name: MP_evtpool_4072447_0 00:04:27.612 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:27.612 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_4072447 00:04:27.612 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:27.612 associated memzone info: size: 1.007996 MiB name: MP_evtpool_4072447 00:04:27.612 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:27.612 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:27.612 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:04:27.612 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:27.612 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:27.612 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:27.612 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:27.612 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:27.612 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:27.612 associated memzone info: size: 1.000366 MiB name: RG_ring_0_4072447 00:04:27.612 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:27.612 associated memzone info: size: 1.000366 MiB name: RG_ring_1_4072447 00:04:27.612 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:04:27.612 associated memzone info: size: 1.000366 MiB name: RG_ring_4_4072447 00:04:27.612 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:04:27.612 associated memzone info: size: 1.000366 MiB name: RG_ring_5_4072447 00:04:27.612 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:27.612 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_4072447 00:04:27.612 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:27.612 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_4072447 00:04:27.612 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:27.612 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:27.612 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:27.612 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:27.612 element at address: 0x20001987c540 with size: 0.250488 MiB 00:04:27.612 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:27.612 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:27.612 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_4072447 00:04:27.612 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:27.612 associated memzone info: size: 0.125366 MiB name: RG_ring_2_4072447 00:04:27.612 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:27.612 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:27.612 element at address: 0x200028269100 with size: 0.023743 MiB 00:04:27.612 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:27.612 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:27.612 associated memzone info: size: 0.015991 MiB name: RG_ring_3_4072447 00:04:27.612 element at address: 0x20002826f240 with size: 0.002441 MiB 00:04:27.612 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:27.612 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:27.612 associated memzone info: size: 0.000183 MiB name: MP_msgpool_4072447 00:04:27.612 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:27.612 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_4072447 00:04:27.612 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:27.612 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_4072447 00:04:27.612 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:04:27.612 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:27.612 09:42:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:27.612 09:42:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 4072447 00:04:27.612 09:42:37 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 4072447 ']' 00:04:27.612 09:42:37 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 4072447 00:04:27.612 09:42:37 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:27.612 09:42:37 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:27.612 09:42:37 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4072447 00:04:27.612 09:42:37 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:27.612 09:42:37 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:27.612 09:42:37 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4072447' 00:04:27.612 killing process with pid 4072447 00:04:27.612 09:42:37 
dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 4072447 00:04:27.612 09:42:37 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 4072447 00:04:27.872 00:04:27.872 real 0m1.508s 00:04:27.872 user 0m1.580s 00:04:27.872 sys 0m0.443s 00:04:27.872 09:42:37 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:27.872 09:42:37 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:27.872 ************************************ 00:04:27.872 END TEST dpdk_mem_utility 00:04:27.872 ************************************ 00:04:27.872 09:42:37 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:27.872 09:42:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.872 09:42:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.872 09:42:37 -- common/autotest_common.sh@10 -- # set +x 00:04:27.872 ************************************ 00:04:27.872 START TEST event 00:04:27.872 ************************************ 00:04:27.872 09:42:37 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:28.131 * Looking for test storage... 00:04:28.131 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:28.131 09:42:37 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:28.131 09:42:37 event -- common/autotest_common.sh@1711 -- # lcov --version 00:04:28.131 09:42:37 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:28.131 09:42:37 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:28.131 09:42:37 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:28.131 09:42:37 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:28.131 09:42:37 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:28.131 09:42:37 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:28.131 09:42:37 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:28.131 09:42:37 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:28.131 09:42:37 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:28.131 09:42:37 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:28.131 09:42:37 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:28.131 09:42:37 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:28.131 09:42:37 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:28.131 09:42:37 event -- scripts/common.sh@344 -- # case "$op" in 00:04:28.131 09:42:37 event -- scripts/common.sh@345 -- # : 1 00:04:28.131 09:42:37 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:28.131 09:42:37 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:28.131 09:42:37 event -- scripts/common.sh@365 -- # decimal 1 00:04:28.131 09:42:37 event -- scripts/common.sh@353 -- # local d=1 00:04:28.131 09:42:37 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:28.131 09:42:37 event -- scripts/common.sh@355 -- # echo 1 00:04:28.132 09:42:37 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:28.132 09:42:37 event -- scripts/common.sh@366 -- # decimal 2 00:04:28.132 09:42:37 event -- scripts/common.sh@353 -- # local d=2 00:04:28.132 09:42:37 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:28.132 09:42:37 event -- scripts/common.sh@355 -- # echo 2 00:04:28.132 09:42:37 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:28.132 09:42:37 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:28.132 09:42:37 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:28.132 09:42:37 event -- scripts/common.sh@368 -- # return 0 00:04:28.132 09:42:37 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:28.132 09:42:37 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:28.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.132 --rc genhtml_branch_coverage=1 00:04:28.132 --rc genhtml_function_coverage=1 00:04:28.132 --rc genhtml_legend=1 00:04:28.132 --rc geninfo_all_blocks=1 00:04:28.132 --rc geninfo_unexecuted_blocks=1 00:04:28.132 00:04:28.132 ' 00:04:28.132 09:42:37 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:28.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.132 --rc genhtml_branch_coverage=1 00:04:28.132 --rc genhtml_function_coverage=1 00:04:28.132 --rc genhtml_legend=1 00:04:28.132 --rc geninfo_all_blocks=1 00:04:28.132 --rc geninfo_unexecuted_blocks=1 00:04:28.132 00:04:28.132 ' 00:04:28.132 09:42:37 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:28.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.132 --rc genhtml_branch_coverage=1 00:04:28.132 --rc genhtml_function_coverage=1 00:04:28.132 --rc genhtml_legend=1 00:04:28.132 --rc geninfo_all_blocks=1 00:04:28.132 --rc geninfo_unexecuted_blocks=1 00:04:28.132 00:04:28.132 ' 00:04:28.132 09:42:37 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:28.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.132 --rc genhtml_branch_coverage=1 00:04:28.132 --rc genhtml_function_coverage=1 00:04:28.132 --rc genhtml_legend=1 00:04:28.132 --rc geninfo_all_blocks=1 00:04:28.132 --rc geninfo_unexecuted_blocks=1 00:04:28.132 00:04:28.132 ' 00:04:28.132 09:42:37 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:28.132 09:42:37 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:28.132 09:42:37 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:28.132 09:42:37 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:28.132 09:42:37 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.132 09:42:37 event -- common/autotest_common.sh@10 -- # set +x 00:04:28.132 ************************************ 00:04:28.132 START TEST event_perf 00:04:28.132 ************************************ 00:04:28.132 09:42:37 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:04:28.132 Running I/O for 1 seconds...[2024-12-11 09:42:37.637183] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:04:28.132 [2024-12-11 09:42:37.637260] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4072739 ] 00:04:28.391 [2024-12-11 09:42:37.719162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:28.391 [2024-12-11 09:42:37.760961] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:28.391 [2024-12-11 09:42:37.761072] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:04:28.391 [2024-12-11 09:42:37.761178] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.391 [2024-12-11 09:42:37.761179] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:04:29.329 Running I/O for 1 seconds... 00:04:29.329 lcore 0: 202248 00:04:29.329 lcore 1: 202246 00:04:29.329 lcore 2: 202248 00:04:29.329 lcore 3: 202247 00:04:29.329 done. 00:04:29.329 00:04:29.329 real 0m1.183s 00:04:29.329 user 0m4.101s 00:04:29.329 sys 0m0.079s 00:04:29.329 09:42:38 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.329 09:42:38 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:29.329 ************************************ 00:04:29.329 END TEST event_perf 00:04:29.329 ************************************ 00:04:29.329 09:42:38 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:29.329 09:42:38 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:29.329 09:42:38 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.329 09:42:38 event -- common/autotest_common.sh@10 -- # set +x 00:04:29.329 ************************************ 00:04:29.329 START TEST event_reactor 00:04:29.329 ************************************ 00:04:29.329 09:42:38 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:29.329 [2024-12-11 09:42:38.890378] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
00:04:29.329 [2024-12-11 09:42:38.890447] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4072994 ] 00:04:29.589 [2024-12-11 09:42:38.971787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.589 [2024-12-11 09:42:39.009684] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.526 test_start 00:04:30.526 oneshot 00:04:30.526 tick 100 00:04:30.526 tick 100 00:04:30.526 tick 250 00:04:30.526 tick 100 00:04:30.526 tick 100 00:04:30.526 tick 100 00:04:30.526 tick 250 00:04:30.526 tick 500 00:04:30.526 tick 100 00:04:30.526 tick 100 00:04:30.526 tick 250 00:04:30.526 tick 100 00:04:30.526 tick 100 00:04:30.526 test_end 00:04:30.526 00:04:30.526 real 0m1.182s 00:04:30.526 user 0m1.103s 00:04:30.526 sys 0m0.075s 00:04:30.526 09:42:40 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.526 09:42:40 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:30.526 ************************************ 00:04:30.526 END TEST event_reactor 00:04:30.526 ************************************ 00:04:30.526 09:42:40 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:30.526 09:42:40 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:30.526 09:42:40 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.526 09:42:40 event -- common/autotest_common.sh@10 -- # set +x 00:04:30.786 ************************************ 00:04:30.786 START TEST event_reactor_perf 00:04:30.786 ************************************ 00:04:30.786 09:42:40 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:30.786 [2024-12-11 09:42:40.146013] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
00:04:30.786 [2024-12-11 09:42:40.146070] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4073238 ] 00:04:30.786 [2024-12-11 09:42:40.228829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.786 [2024-12-11 09:42:40.269023] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.166 test_start 00:04:32.166 test_end 00:04:32.166 Performance: 512903 events per second 00:04:32.166 00:04:32.166 real 0m1.183s 00:04:32.166 user 0m1.105s 00:04:32.166 sys 0m0.074s 00:04:32.166 09:42:41 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.166 09:42:41 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:32.166 ************************************ 00:04:32.166 END TEST event_reactor_perf 00:04:32.166 ************************************ 00:04:32.166 09:42:41 event -- event/event.sh@49 -- # uname -s 00:04:32.166 09:42:41 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:32.166 09:42:41 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:32.166 09:42:41 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.166 09:42:41 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.166 09:42:41 event -- common/autotest_common.sh@10 -- # set +x 00:04:32.166 ************************************ 00:04:32.166 START TEST event_scheduler 00:04:32.166 ************************************ 00:04:32.166 09:42:41 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:32.166 * Looking for test storage... 
00:04:32.166 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:32.166 09:42:41 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:32.166 09:42:41 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:04:32.166 09:42:41 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:32.166 09:42:41 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:32.166 09:42:41 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:32.166 09:42:41 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:32.166 09:42:41 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:32.166 09:42:41 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:32.166 09:42:41 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:32.166 09:42:41 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:32.166 09:42:41 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:32.166 09:42:41 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:32.166 09:42:41 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:32.166 09:42:41 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:32.166 09:42:41 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:32.166 09:42:41 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:32.166 09:42:41 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:32.166 09:42:41 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:32.166 09:42:41 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:32.166 09:42:41 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:32.166 09:42:41 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:32.166 09:42:41 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:32.166 09:42:41 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:32.166 09:42:41 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:32.166 09:42:41 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:32.166 09:42:41 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:32.167 09:42:41 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:32.167 09:42:41 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:32.167 09:42:41 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:32.167 09:42:41 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:32.167 09:42:41 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:32.167 09:42:41 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:32.167 09:42:41 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:32.167 09:42:41 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:32.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.167 --rc genhtml_branch_coverage=1 00:04:32.167 --rc genhtml_function_coverage=1 00:04:32.167 --rc genhtml_legend=1 00:04:32.167 --rc geninfo_all_blocks=1 00:04:32.167 --rc geninfo_unexecuted_blocks=1 00:04:32.167 00:04:32.167 ' 00:04:32.167 09:42:41 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:32.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.167 --rc genhtml_branch_coverage=1 00:04:32.167 --rc genhtml_function_coverage=1 00:04:32.167 --rc genhtml_legend=1 00:04:32.167 --rc geninfo_all_blocks=1 00:04:32.167 --rc geninfo_unexecuted_blocks=1 00:04:32.167 00:04:32.167 ' 00:04:32.167 09:42:41 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:32.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.167 --rc genhtml_branch_coverage=1 00:04:32.167 --rc genhtml_function_coverage=1 00:04:32.167 --rc genhtml_legend=1 00:04:32.167 --rc geninfo_all_blocks=1 00:04:32.167 --rc geninfo_unexecuted_blocks=1 00:04:32.167 00:04:32.167 ' 00:04:32.167 09:42:41 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:32.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.167 --rc genhtml_branch_coverage=1 00:04:32.167 --rc genhtml_function_coverage=1 00:04:32.167 --rc genhtml_legend=1 00:04:32.167 --rc geninfo_all_blocks=1 00:04:32.167 --rc geninfo_unexecuted_blocks=1 00:04:32.167 00:04:32.167 ' 00:04:32.167 09:42:41 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:32.167 09:42:41 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=4073517 00:04:32.167 09:42:41 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:32.167 09:42:41 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:32.167 09:42:41 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
4073517 00:04:32.167 09:42:41 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 4073517 ']' 00:04:32.167 09:42:41 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:32.167 09:42:41 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:32.167 09:42:41 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:32.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:32.167 09:42:41 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:32.167 09:42:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:32.167 [2024-12-11 09:42:41.603210] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:04:32.167 [2024-12-11 09:42:41.603259] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4073517 ] 00:04:32.167 [2024-12-11 09:42:41.683119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:32.167 [2024-12-11 09:42:41.724561] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.167 [2024-12-11 09:42:41.724674] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:32.167 [2024-12-11 09:42:41.724783] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:04:32.167 [2024-12-11 09:42:41.724784] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:04:32.427 09:42:41 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:32.427 09:42:41 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:32.427 09:42:41 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:32.427 09:42:41 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.427 09:42:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:32.427 [2024-12-11 09:42:41.765272] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:32.427 [2024-12-11 09:42:41.765291] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:32.427 [2024-12-11 09:42:41.765300] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:32.427 [2024-12-11 09:42:41.765306] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:32.427 [2024-12-11 09:42:41.765311] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:32.427 09:42:41 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.427 09:42:41 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:32.427 09:42:41 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.427 09:42:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:32.427 [2024-12-11 09:42:41.843057] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
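Up to this point the scheduler test has switched to the dynamic scheduler and completed framework init; the scheduler_create_thread subtest that follows drives the app purely through plugin RPCs. A minimal sketch of that flow, assuming rpc_cmd in the trace resolves to scripts/rpc.py against the default /var/tmp/spdk.sock, and with an illustrative thread name (demo_thread) standing in for the test's active_pinned/idle_pinned threads:

  rpc='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py --plugin scheduler_plugin'
  # create an active thread pinned to core 0 at 100% load; the RPC prints the new thread id
  tid=$($rpc scheduler_thread_create -n demo_thread -m 0x1 -a 100)
  # drop the thread to 50% active, then remove it, mirroring the subtest below
  $rpc scheduler_thread_set_active "$tid" 50
  $rpc scheduler_thread_delete "$tid"

The --plugin flag is what exposes these scheduler_thread_* RPCs; they are registered by the scheduler test app, not by a stock spdk_tgt.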
00:04:32.427 09:42:41 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.427 09:42:41 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:32.427 09:42:41 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.427 09:42:41 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.427 09:42:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:32.427 ************************************ 00:04:32.427 START TEST scheduler_create_thread 00:04:32.427 ************************************ 00:04:32.427 09:42:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:32.427 09:42:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:32.427 09:42:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.427 09:42:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:32.427 2 00:04:32.427 09:42:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.427 09:42:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:32.427 09:42:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.427 09:42:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:32.427 3 00:04:32.427 09:42:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.427 09:42:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:32.427 09:42:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.427 09:42:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:32.427 4 00:04:32.427 09:42:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.427 09:42:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:32.427 09:42:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.427 09:42:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:32.427 5 00:04:32.427 09:42:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.427 09:42:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:32.427 09:42:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.427 09:42:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:32.427 6 00:04:32.427 09:42:41 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.427 09:42:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:32.427 09:42:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.427 09:42:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:32.427 7 00:04:32.428 09:42:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.428 09:42:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:32.428 09:42:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.428 09:42:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:32.428 8 00:04:32.428 09:42:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.428 09:42:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:32.428 09:42:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.428 09:42:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:32.428 9 00:04:32.428 09:42:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.428 09:42:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:32.428 09:42:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.428 09:42:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:32.428 10 00:04:32.428 09:42:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.428 09:42:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:32.428 09:42:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.428 09:42:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:32.428 09:42:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.428 09:42:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:32.428 09:42:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:32.428 09:42:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.428 09:42:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:33.365 09:42:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.365 09:42:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:33.365 09:42:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.365 09:42:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:34.743 09:42:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.743 09:42:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:34.743 09:42:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:34.743 09:42:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.743 09:42:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:36.121 09:42:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:36.121 00:04:36.121 real 0m3.382s 00:04:36.121 user 0m0.026s 00:04:36.121 sys 0m0.005s 00:04:36.122 09:42:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.122 09:42:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:36.122 ************************************ 00:04:36.122 END TEST scheduler_create_thread 00:04:36.122 ************************************ 00:04:36.122 09:42:45 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:36.122 09:42:45 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 4073517 00:04:36.122 09:42:45 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 4073517 ']' 00:04:36.122 09:42:45 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 4073517 00:04:36.122 09:42:45 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:36.122 09:42:45 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:36.122 09:42:45 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4073517 00:04:36.122 09:42:45 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:36.122 09:42:45 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:36.122 09:42:45 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4073517' 00:04:36.122 killing process with pid 4073517 00:04:36.122 09:42:45 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 4073517 00:04:36.122 09:42:45 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 4073517 00:04:36.122 [2024-12-11 09:42:45.639015] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
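Note: the scheduler_create_thread trace above boils down to a short RPC sequence against SPDK's scheduler test plugin. A minimal sketch of that sequence, reconstructed only from commands visible in the trace (the ./scripts/rpc.py path and capturing the printed thread id into a shell variable are assumptions; the trace's rpc_cmd helper wraps the same script):

  rpc=./scripts/rpc.py   # assumption: run from the SPDK repo root, app already listening

  # Threads pinned to cores 0-3 (-m is a core mask) reporting 100% busy...
  for mask in 0x1 0x2 0x4 0x8; do
    "$rpc" --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m "$mask" -a 100
  done
  # ...and idle counterparts on the same cores (0% busy).
  for mask in 0x1 0x2 0x4 0x8; do
    "$rpc" --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m "$mask" -a 0
  done

  # Unpinned threads; scheduler_thread_create prints the new thread id on stdout,
  # which is how the trace above ends up with thread_id=11 and thread_id=12.
  "$rpc" --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
  tid=$("$rpc" --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)

  # Change a live thread's reported load, then create and delete a throwaway thread.
  "$rpc" --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
  tid=$("$rpc" --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
  "$rpc" --plugin scheduler_plugin scheduler_thread_delete "$tid"

Judging by the active_pinned/idle_pinned names, -a is the percentage of time the thread reports itself busy, which is the signal the dynamic scheduler balances on.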
00:04:36.381 00:04:36.381 real 0m4.459s 00:04:36.381 user 0m7.788s 00:04:36.381 sys 0m0.377s 00:04:36.381 09:42:45 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.381 09:42:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:36.381 ************************************ 00:04:36.381 END TEST event_scheduler 00:04:36.381 ************************************ 00:04:36.381 09:42:45 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:36.381 09:42:45 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:36.381 09:42:45 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.381 09:42:45 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.381 09:42:45 event -- common/autotest_common.sh@10 -- # set +x 00:04:36.381 ************************************ 00:04:36.381 START TEST app_repeat 00:04:36.381 ************************************ 00:04:36.381 09:42:45 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:36.381 09:42:45 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:36.381 09:42:45 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:36.381 09:42:45 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:36.381 09:42:45 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:36.381 09:42:45 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:36.381 09:42:45 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:36.381 09:42:45 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:36.381 09:42:45 event.app_repeat -- event/event.sh@19 -- # repeat_pid=4074257 00:04:36.381 09:42:45 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:36.382 09:42:45 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:36.382 09:42:45 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 4074257' 00:04:36.382 Process app_repeat pid: 4074257 00:04:36.382 09:42:45 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:36.382 09:42:45 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:36.382 spdk_app_start Round 0 00:04:36.382 09:42:45 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4074257 /var/tmp/spdk-nbd.sock 00:04:36.382 09:42:45 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 4074257 ']' 00:04:36.382 09:42:45 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:36.382 09:42:45 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:36.382 09:42:45 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:36.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:36.382 09:42:45 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:36.382 09:42:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:36.382 [2024-12-11 09:42:45.953707] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
00:04:36.382 [2024-12-11 09:42:45.953761] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4074257 ] 00:04:36.641 [2024-12-11 09:42:46.034160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:36.641 [2024-12-11 09:42:46.073330] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:36.641 [2024-12-11 09:42:46.073330] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.641 09:42:46 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:36.641 09:42:46 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:36.641 09:42:46 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:36.900 Malloc0 00:04:36.900 09:42:46 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:37.159 Malloc1 00:04:37.159 09:42:46 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:37.159 09:42:46 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:37.159 09:42:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:37.159 09:42:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:37.159 09:42:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:37.159 09:42:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:37.159 09:42:46 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:37.159 09:42:46 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:37.159 09:42:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:37.159 09:42:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:37.159 09:42:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:37.159 09:42:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:37.159 09:42:46 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:37.159 09:42:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:37.159 09:42:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:37.159 09:42:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:37.418 /dev/nbd0 00:04:37.418 09:42:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:37.418 09:42:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:37.418 09:42:46 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:37.418 09:42:46 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:37.418 09:42:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:37.418 09:42:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:37.418 09:42:46 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:04:37.418 09:42:46 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:37.419 09:42:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:37.419 09:42:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:37.419 09:42:46 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:37.419 1+0 records in 00:04:37.419 1+0 records out 00:04:37.419 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229648 s, 17.8 MB/s 00:04:37.419 09:42:46 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:37.419 09:42:46 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:37.419 09:42:46 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:37.419 09:42:46 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:37.419 09:42:46 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:37.419 09:42:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:37.419 09:42:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:37.419 09:42:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:37.678 /dev/nbd1 00:04:37.678 09:42:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:37.678 09:42:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:37.678 09:42:47 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:37.678 09:42:47 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:37.678 09:42:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:37.678 09:42:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:37.678 09:42:47 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:37.678 09:42:47 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:37.678 09:42:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:37.678 09:42:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:37.678 09:42:47 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:37.678 1+0 records in 00:04:37.678 1+0 records out 00:04:37.678 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238345 s, 17.2 MB/s 00:04:37.678 09:42:47 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:37.678 09:42:47 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:37.678 09:42:47 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:37.678 09:42:47 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:37.678 09:42:47 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:37.678 09:42:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:37.678 09:42:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:37.678 
09:42:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:37.678 09:42:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:37.678 09:42:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:37.937 09:42:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:37.937 { 00:04:37.937 "nbd_device": "/dev/nbd0", 00:04:37.937 "bdev_name": "Malloc0" 00:04:37.937 }, 00:04:37.937 { 00:04:37.937 "nbd_device": "/dev/nbd1", 00:04:37.937 "bdev_name": "Malloc1" 00:04:37.937 } 00:04:37.937 ]' 00:04:37.937 09:42:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:37.937 { 00:04:37.937 "nbd_device": "/dev/nbd0", 00:04:37.937 "bdev_name": "Malloc0" 00:04:37.937 }, 00:04:37.937 { 00:04:37.937 "nbd_device": "/dev/nbd1", 00:04:37.937 "bdev_name": "Malloc1" 00:04:37.937 } 00:04:37.937 ]' 00:04:37.937 09:42:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:37.937 09:42:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:37.937 /dev/nbd1' 00:04:37.937 09:42:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:37.937 09:42:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:37.937 /dev/nbd1' 00:04:37.937 09:42:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:37.937 09:42:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:37.937 09:42:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:37.937 09:42:47 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:37.937 09:42:47 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:37.937 09:42:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:37.937 09:42:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:37.937 09:42:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:37.937 09:42:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:37.938 09:42:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:37.938 09:42:47 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:37.938 256+0 records in 00:04:37.938 256+0 records out 00:04:37.938 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106403 s, 98.5 MB/s 00:04:37.938 09:42:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:37.938 09:42:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:37.938 256+0 records in 00:04:37.938 256+0 records out 00:04:37.938 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139605 s, 75.1 MB/s 00:04:37.938 09:42:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:37.938 09:42:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:37.938 256+0 records in 00:04:37.938 256+0 records out 00:04:37.938 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148877 s, 70.4 MB/s 00:04:37.938 09:42:47 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:37.938 09:42:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:37.938 09:42:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:37.938 09:42:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:37.938 09:42:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:37.938 09:42:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:37.938 09:42:47 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:37.938 09:42:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:37.938 09:42:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:37.938 09:42:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:37.938 09:42:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:37.938 09:42:47 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:37.938 09:42:47 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:37.938 09:42:47 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:37.938 09:42:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:37.938 09:42:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:37.938 09:42:47 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:37.938 09:42:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:37.938 09:42:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:38.197 09:42:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:38.197 09:42:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:38.197 09:42:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:38.197 09:42:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:38.197 09:42:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:38.197 09:42:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:38.197 09:42:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:38.197 09:42:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:38.197 09:42:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:38.197 09:42:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:38.457 09:42:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:38.457 09:42:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:38.457 09:42:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:38.457 09:42:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:38.457 09:42:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:04:38.457 09:42:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:38.457 09:42:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:38.457 09:42:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:38.457 09:42:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:38.457 09:42:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:38.457 09:42:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:38.717 09:42:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:38.717 09:42:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:38.717 09:42:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:38.717 09:42:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:38.717 09:42:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:38.717 09:42:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:38.717 09:42:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:38.717 09:42:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:38.717 09:42:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:38.717 09:42:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:38.717 09:42:48 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:38.717 09:42:48 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:38.717 09:42:48 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:38.976 09:42:48 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:38.976 [2024-12-11 09:42:48.453193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:38.976 [2024-12-11 09:42:48.488674] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:38.976 [2024-12-11 09:42:48.488677] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.976 [2024-12-11 09:42:48.528971] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:38.976 [2024-12-11 09:42:48.529021] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:42.266 09:42:51 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:42.266 09:42:51 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:42.266 spdk_app_start Round 1 00:04:42.266 09:42:51 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4074257 /var/tmp/spdk-nbd.sock 00:04:42.266 09:42:51 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 4074257 ']' 00:04:42.266 09:42:51 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:42.266 09:42:51 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:42.266 09:42:51 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:42.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
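The Round 0 data check that just completed is the core of nbd_rpc_data_verify: write 1 MiB of random data through each nbd device, then byte-compare the device contents against the source file. A compact sketch built from the dd and cmp invocations in the trace (the nbdrandtest path is shortened here for readability):

  nbd_list=(/dev/nbd0 /dev/nbd1)
  tmp_file=./nbdrandtest

  # write: 256 x 4 KiB of random data, then push it to each device with O_DIRECT
  dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
  for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
  done

  # verify: byte-compare the first 1M of every device against the source file;
  # a non-zero cmp exit aborts the xtrace'd test script and fails the round
  for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev"
  done
  rm "$tmp_file"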
00:04:42.266 09:42:51 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:42.266 09:42:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:42.266 09:42:51 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:42.266 09:42:51 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:42.266 09:42:51 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:42.266 Malloc0 00:04:42.266 09:42:51 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:42.525 Malloc1 00:04:42.525 09:42:51 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:42.525 09:42:51 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:42.525 09:42:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:42.525 09:42:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:42.525 09:42:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:42.525 09:42:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:42.525 09:42:51 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:42.525 09:42:51 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:42.525 09:42:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:42.525 09:42:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:42.525 09:42:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:42.525 09:42:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:42.525 09:42:51 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:42.525 09:42:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:42.525 09:42:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:42.525 09:42:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:42.784 /dev/nbd0 00:04:42.784 09:42:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:42.784 09:42:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:42.784 09:42:52 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:42.784 09:42:52 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:42.784 09:42:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:42.784 09:42:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:42.784 09:42:52 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:42.785 09:42:52 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:42.785 09:42:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:42.785 09:42:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:42.785 09:42:52 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:42.785 1+0 records in 00:04:42.785 1+0 records out 00:04:42.785 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000190845 s, 21.5 MB/s 00:04:42.785 09:42:52 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:42.785 09:42:52 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:42.785 09:42:52 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:42.785 09:42:52 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:42.785 09:42:52 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:42.785 09:42:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:42.785 09:42:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:42.785 09:42:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:43.044 /dev/nbd1 00:04:43.044 09:42:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:43.044 09:42:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:43.044 09:42:52 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:43.044 09:42:52 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:43.044 09:42:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:43.044 09:42:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:43.044 09:42:52 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:43.044 09:42:52 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:43.044 09:42:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:43.044 09:42:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:43.044 09:42:52 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:43.044 1+0 records in 00:04:43.044 1+0 records out 00:04:43.044 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000190541 s, 21.5 MB/s 00:04:43.044 09:42:52 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:43.044 09:42:52 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:43.044 09:42:52 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:43.044 09:42:52 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:43.044 09:42:52 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:43.044 09:42:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:43.044 09:42:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:43.044 09:42:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:43.044 09:42:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:43.044 09:42:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:43.303 09:42:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:43.303 { 00:04:43.303 "nbd_device": "/dev/nbd0", 00:04:43.303 "bdev_name": "Malloc0" 00:04:43.303 }, 00:04:43.303 { 00:04:43.303 "nbd_device": "/dev/nbd1", 00:04:43.303 "bdev_name": "Malloc1" 00:04:43.303 } 00:04:43.303 ]' 00:04:43.303 09:42:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:43.303 { 00:04:43.303 "nbd_device": "/dev/nbd0", 00:04:43.303 "bdev_name": "Malloc0" 00:04:43.303 }, 00:04:43.303 { 00:04:43.303 "nbd_device": "/dev/nbd1", 00:04:43.303 "bdev_name": "Malloc1" 00:04:43.303 } 00:04:43.303 ]' 00:04:43.303 09:42:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:43.303 09:42:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:43.303 /dev/nbd1' 00:04:43.303 09:42:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:43.303 /dev/nbd1' 00:04:43.303 09:42:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:43.303 09:42:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:43.303 09:42:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:43.303 09:42:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:43.303 09:42:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:43.303 09:42:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:43.303 09:42:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:43.303 09:42:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:43.303 09:42:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:43.303 09:42:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:43.303 09:42:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:43.303 09:42:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:43.303 256+0 records in 00:04:43.303 256+0 records out 00:04:43.303 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106922 s, 98.1 MB/s 00:04:43.303 09:42:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:43.303 09:42:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:43.303 256+0 records in 00:04:43.303 256+0 records out 00:04:43.303 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139082 s, 75.4 MB/s 00:04:43.303 09:42:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:43.303 09:42:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:43.303 256+0 records in 00:04:43.303 256+0 records out 00:04:43.303 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0150928 s, 69.5 MB/s 00:04:43.303 09:42:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:43.303 09:42:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:43.303 09:42:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:43.303 09:42:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:43.303 09:42:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:43.303 09:42:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:43.303 09:42:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:43.303 09:42:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:43.303 09:42:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:43.303 09:42:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:43.303 09:42:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:43.303 09:42:52 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:43.303 09:42:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:43.303 09:42:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:43.303 09:42:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:43.303 09:42:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:43.303 09:42:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:43.303 09:42:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:43.303 09:42:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:43.562 09:42:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:43.562 09:42:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:43.562 09:42:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:43.562 09:42:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:43.562 09:42:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:43.562 09:42:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:43.562 09:42:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:43.562 09:42:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:43.562 09:42:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:43.562 09:42:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:43.822 09:42:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:43.822 09:42:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:43.822 09:42:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:43.822 09:42:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:43.822 09:42:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:43.822 09:42:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:43.822 09:42:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:43.822 09:42:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:43.822 09:42:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:43.822 09:42:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:43.822 09:42:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:44.081 09:42:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:44.081 09:42:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:44.081 09:42:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:44.081 09:42:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:44.081 09:42:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:44.081 09:42:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:44.081 09:42:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:44.081 09:42:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:44.081 09:42:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:44.081 09:42:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:44.081 09:42:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:44.081 09:42:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:44.081 09:42:53 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:44.340 09:42:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:44.340 [2024-12-11 09:42:53.812854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:44.340 [2024-12-11 09:42:53.848265] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:44.340 [2024-12-11 09:42:53.848269] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.340 [2024-12-11 09:42:53.889326] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:44.340 [2024-12-11 09:42:53.889366] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:47.626 09:42:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:47.626 09:42:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:47.626 spdk_app_start Round 2 00:04:47.626 09:42:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4074257 /var/tmp/spdk-nbd.sock 00:04:47.626 09:42:56 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 4074257 ']' 00:04:47.626 09:42:56 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:47.626 09:42:56 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:47.626 09:42:56 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:47.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
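Round 2 opens exactly like Rounds 0 and 1, which makes the outer app_repeat harness easy to read off the trace: start the app once, then for each round wait for its RPC socket, run the nbd check, and ask the app to restart via spdk_kill_instance. A condensed sketch under those observations (waitforlisten and killprocess are the autotest helpers the trace itself calls; their internals and the per-round body are elided):

  rpc_server=/var/tmp/spdk-nbd.sock

  # Start the app once on cores 0-1 (-m 0x3) with 4 repeat iterations (-t 4).
  ./test/event/app_repeat/app_repeat -r "$rpc_server" -m 0x3 -t 4 &
  repeat_pid=$!
  trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT

  for i in {0..2}; do
    echo "spdk_app_start Round $i"
    waitforlisten "$repeat_pid" "$rpc_server"   # block until the RPC socket answers
    # ... per-round body: bdev_malloc_create x2, nbd attach, write/verify (see above) ...
    ./scripts/rpc.py -s "$rpc_server" spdk_kill_instance SIGTERM   # request a restart
    sleep 3
  done

  waitforlisten "$repeat_pid" "$rpc_server"     # the app's final Round 3 comes up too
  trap - SIGINT SIGTERM EXIT
  killprocess "$repeat_pid"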
00:04:47.626 09:42:56 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:47.626 09:42:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:47.626 09:42:56 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:47.626 09:42:56 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:47.626 09:42:56 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:47.626 Malloc0 00:04:47.626 09:42:57 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:47.885 Malloc1 00:04:47.885 09:42:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:47.885 09:42:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.885 09:42:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:47.885 09:42:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:47.885 09:42:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:47.885 09:42:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:47.885 09:42:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:47.885 09:42:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.885 09:42:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:47.885 09:42:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:47.885 09:42:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:47.885 09:42:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:47.885 09:42:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:47.885 09:42:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:47.885 09:42:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:47.885 09:42:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:48.144 /dev/nbd0 00:04:48.144 09:42:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:48.144 09:42:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:48.144 09:42:57 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:48.144 09:42:57 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:48.144 09:42:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:48.144 09:42:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:48.144 09:42:57 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:48.144 09:42:57 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:48.144 09:42:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:48.144 09:42:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:48.144 09:42:57 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:48.144 1+0 records in 00:04:48.144 1+0 records out 00:04:48.144 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245192 s, 16.7 MB/s 00:04:48.144 09:42:57 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:48.144 09:42:57 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:48.144 09:42:57 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:48.144 09:42:57 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:48.144 09:42:57 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:48.144 09:42:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:48.144 09:42:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:48.144 09:42:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:48.144 /dev/nbd1 00:04:48.403 09:42:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:48.403 09:42:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:48.403 09:42:57 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:48.403 09:42:57 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:48.403 09:42:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:48.403 09:42:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:48.403 09:42:57 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:48.403 09:42:57 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:48.403 09:42:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:48.403 09:42:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:48.403 09:42:57 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:48.403 1+0 records in 00:04:48.403 1+0 records out 00:04:48.403 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000167577 s, 24.4 MB/s 00:04:48.403 09:42:57 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:48.403 09:42:57 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:48.403 09:42:57 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:48.403 09:42:57 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:48.403 09:42:57 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:48.403 09:42:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:48.403 09:42:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:48.403 09:42:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:48.403 09:42:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.403 09:42:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:48.403 09:42:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:48.403 { 00:04:48.403 "nbd_device": "/dev/nbd0", 00:04:48.403 "bdev_name": "Malloc0" 00:04:48.403 }, 00:04:48.403 { 00:04:48.403 "nbd_device": "/dev/nbd1", 00:04:48.403 "bdev_name": "Malloc1" 00:04:48.403 } 00:04:48.403 ]' 00:04:48.403 09:42:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:48.403 { 00:04:48.403 "nbd_device": "/dev/nbd0", 00:04:48.403 "bdev_name": "Malloc0" 00:04:48.403 }, 00:04:48.403 { 00:04:48.403 "nbd_device": "/dev/nbd1", 00:04:48.403 "bdev_name": "Malloc1" 00:04:48.403 } 00:04:48.403 ]' 00:04:48.403 09:42:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:48.663 09:42:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:48.663 /dev/nbd1' 00:04:48.663 09:42:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:48.663 /dev/nbd1' 00:04:48.663 09:42:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:48.663 09:42:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:48.663 09:42:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:48.663 09:42:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:48.663 09:42:58 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:48.663 09:42:58 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:48.663 09:42:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:48.663 09:42:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:48.663 09:42:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:48.663 09:42:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:48.663 09:42:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:48.663 09:42:58 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:48.663 256+0 records in 00:04:48.663 256+0 records out 00:04:48.663 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105805 s, 99.1 MB/s 00:04:48.663 09:42:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:48.663 09:42:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:48.663 256+0 records in 00:04:48.663 256+0 records out 00:04:48.663 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140048 s, 74.9 MB/s 00:04:48.663 09:42:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:48.663 09:42:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:48.663 256+0 records in 00:04:48.663 256+0 records out 00:04:48.663 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0152801 s, 68.6 MB/s 00:04:48.663 09:42:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:48.663 09:42:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:48.663 09:42:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:48.663 09:42:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:48.663 09:42:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:48.663 09:42:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:48.663 09:42:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:48.663 09:42:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:48.663 09:42:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:48.663 09:42:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:48.663 09:42:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:48.663 09:42:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:48.663 09:42:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:48.663 09:42:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.663 09:42:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:48.663 09:42:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:48.663 09:42:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:48.663 09:42:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:48.663 09:42:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:48.922 09:42:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:48.922 09:42:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:48.922 09:42:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:48.922 09:42:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:48.922 09:42:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:48.922 09:42:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:48.922 09:42:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:48.922 09:42:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:48.922 09:42:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:48.922 09:42:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:49.181 09:42:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:49.181 09:42:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:49.181 09:42:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:49.181 09:42:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:49.181 09:42:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:49.181 09:42:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:49.181 09:42:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:49.181 09:42:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:49.181 09:42:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:49.181 09:42:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.181 09:42:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:49.181 09:42:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:49.181 09:42:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:49.181 09:42:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:49.181 09:42:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:49.181 09:42:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:49.181 09:42:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:49.181 09:42:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:49.181 09:42:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:49.181 09:42:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:49.181 09:42:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:49.181 09:42:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:49.181 09:42:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:49.440 09:42:58 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:49.440 09:42:58 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:49.698 [2024-12-11 09:42:59.141572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:49.698 [2024-12-11 09:42:59.176897] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:49.698 [2024-12-11 09:42:59.176898] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.698 [2024-12-11 09:42:59.217371] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:49.698 [2024-12-11 09:42:59.217411] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:52.992 09:43:02 event.app_repeat -- event/event.sh@38 -- # waitforlisten 4074257 /var/tmp/spdk-nbd.sock 00:04:52.992 09:43:02 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 4074257 ']' 00:04:52.992 09:43:02 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:52.992 09:43:02 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:52.992 09:43:02 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:52.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:52.992 09:43:02 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:52.992 09:43:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:52.992 09:43:02 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:52.992 09:43:02 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:52.992 09:43:02 event.app_repeat -- event/event.sh@39 -- # killprocess 4074257 00:04:52.992 09:43:02 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 4074257 ']' 00:04:52.992 09:43:02 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 4074257 00:04:52.992 09:43:02 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:04:52.992 09:43:02 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:52.992 09:43:02 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4074257 00:04:52.992 09:43:02 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:52.992 09:43:02 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:52.992 09:43:02 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4074257' 00:04:52.992 killing process with pid 4074257 00:04:52.992 09:43:02 event.app_repeat -- common/autotest_common.sh@973 -- # kill 4074257 00:04:52.992 09:43:02 event.app_repeat -- common/autotest_common.sh@978 -- # wait 4074257 00:04:52.992 spdk_app_start is called in Round 0. 00:04:52.992 Shutdown signal received, stop current app iteration 00:04:52.992 Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 reinitialization... 00:04:52.992 spdk_app_start is called in Round 1. 00:04:52.992 Shutdown signal received, stop current app iteration 00:04:52.992 Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 reinitialization... 00:04:52.992 spdk_app_start is called in Round 2. 00:04:52.992 Shutdown signal received, stop current app iteration 00:04:52.992 Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 reinitialization... 00:04:52.992 spdk_app_start is called in Round 3. 
00:04:52.992 Shutdown signal received, stop current app iteration 00:04:52.992 09:43:02 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:52.992 09:43:02 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:52.992 00:04:52.992 real 0m16.472s 00:04:52.992 user 0m36.292s 00:04:52.992 sys 0m2.499s 00:04:52.992 09:43:02 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.992 09:43:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:52.992 ************************************ 00:04:52.992 END TEST app_repeat 00:04:52.992 ************************************ 00:04:52.992 09:43:02 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:52.993 09:43:02 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:52.993 09:43:02 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.993 09:43:02 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.993 09:43:02 event -- common/autotest_common.sh@10 -- # set +x 00:04:52.993 ************************************ 00:04:52.993 START TEST cpu_locks 00:04:52.993 ************************************ 00:04:52.993 09:43:02 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:52.993 * Looking for test storage... 00:04:52.993 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:52.993 09:43:02 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:52.993 09:43:02 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:04:52.993 09:43:02 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:53.252 09:43:02 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:53.252 09:43:02 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.252 09:43:02 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.252 09:43:02 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.252 09:43:02 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.252 09:43:02 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.252 09:43:02 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.252 09:43:02 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.252 09:43:02 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.252 09:43:02 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.252 09:43:02 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.252 09:43:02 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.252 09:43:02 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:53.252 09:43:02 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:53.252 09:43:02 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.252 09:43:02 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:53.252 09:43:02 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:53.252 09:43:02 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:53.252 09:43:02 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.252 09:43:02 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:53.252 09:43:02 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.252 09:43:02 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:53.252 09:43:02 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:53.252 09:43:02 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.252 09:43:02 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:53.252 09:43:02 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.252 09:43:02 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.252 09:43:02 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.252 09:43:02 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:53.252 09:43:02 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.252 09:43:02 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:53.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.252 --rc genhtml_branch_coverage=1 00:04:53.252 --rc genhtml_function_coverage=1 00:04:53.252 --rc genhtml_legend=1 00:04:53.252 --rc geninfo_all_blocks=1 00:04:53.252 --rc geninfo_unexecuted_blocks=1 00:04:53.252 00:04:53.252 ' 00:04:53.252 09:43:02 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:53.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.252 --rc genhtml_branch_coverage=1 00:04:53.252 --rc genhtml_function_coverage=1 00:04:53.252 --rc genhtml_legend=1 00:04:53.252 --rc geninfo_all_blocks=1 00:04:53.252 --rc geninfo_unexecuted_blocks=1 00:04:53.252 00:04:53.252 ' 00:04:53.252 09:43:02 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:53.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.252 --rc genhtml_branch_coverage=1 00:04:53.252 --rc genhtml_function_coverage=1 00:04:53.252 --rc genhtml_legend=1 00:04:53.252 --rc geninfo_all_blocks=1 00:04:53.252 --rc geninfo_unexecuted_blocks=1 00:04:53.252 00:04:53.252 ' 00:04:53.252 09:43:02 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:53.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.252 --rc genhtml_branch_coverage=1 00:04:53.252 --rc genhtml_function_coverage=1 00:04:53.252 --rc genhtml_legend=1 00:04:53.252 --rc geninfo_all_blocks=1 00:04:53.252 --rc geninfo_unexecuted_blocks=1 00:04:53.252 00:04:53.252 ' 00:04:53.252 09:43:02 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:53.252 09:43:02 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:53.252 09:43:02 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:53.252 09:43:02 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:53.252 09:43:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.252 09:43:02 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.252 09:43:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:53.252 ************************************ 
00:04:53.252 START TEST default_locks 00:04:53.252 ************************************ 00:04:53.252 09:43:02 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:04:53.252 09:43:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=4077223 00:04:53.252 09:43:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 4077223 00:04:53.252 09:43:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:53.252 09:43:02 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 4077223 ']' 00:04:53.252 09:43:02 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.252 09:43:02 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:53.252 09:43:02 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.252 09:43:02 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:53.252 09:43:02 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:53.252 [2024-12-11 09:43:02.724868] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:04:53.252 [2024-12-11 09:43:02.724910] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4077223 ] 00:04:53.252 [2024-12-11 09:43:02.807486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.512 [2024-12-11 09:43:02.848662] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.080 09:43:03 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:54.080 09:43:03 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:04:54.080 09:43:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 4077223 00:04:54.080 09:43:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 4077223 00:04:54.080 09:43:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:54.648 lslocks: write error 00:04:54.648 09:43:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 4077223 00:04:54.648 09:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 4077223 ']' 00:04:54.648 09:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 4077223 00:04:54.648 09:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:04:54.648 09:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:54.648 09:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4077223 00:04:54.648 09:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:54.648 09:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:54.648 09:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 4077223' 00:04:54.648 killing process with pid 4077223 00:04:54.648 09:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 4077223 00:04:54.648 09:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 4077223 00:04:54.908 09:43:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 4077223 00:04:54.908 09:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:04:54.908 09:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 4077223 00:04:54.908 09:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:54.908 09:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:54.908 09:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:54.908 09:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:54.908 09:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 4077223 00:04:54.908 09:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 4077223 ']' 00:04:54.908 09:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.908 09:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:54.908 09:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
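[Annotation] Two details above are easy to misread. locks_exist verifies the core lock by listing the file locks held by the target pid and grepping for the spdk_cpu_lock prefix, and the stray "lslocks: write error" is benign: grep -q exits on the first match and closes the pipe while lslocks is still writing. A sketch of the check as it appears in the trace:

    locks_exist() {
        local pid=$1
        # spdk_tgt takes one flock per claimed core on files named like
        # /var/tmp/spdk_cpu_lock_000 (see check_remaining_locks further down).
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }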
00:04:54.908 09:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:54.908 09:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:54.908 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (4077223) - No such process 00:04:54.908 ERROR: process (pid: 4077223) is no longer running 00:04:54.908 09:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:54.908 09:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:04:54.908 09:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:04:54.908 09:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:54.908 09:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:54.908 09:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:54.908 09:43:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:54.908 09:43:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:54.908 09:43:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:54.908 09:43:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:54.908 00:04:54.908 real 0m1.732s 00:04:54.908 user 0m1.835s 00:04:54.908 sys 0m0.591s 00:04:54.908 09:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.908 09:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:54.908 ************************************ 00:04:54.908 END TEST default_locks 00:04:54.908 ************************************ 00:04:54.908 09:43:04 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:54.908 09:43:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:54.908 09:43:04 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.908 09:43:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:54.908 ************************************ 00:04:54.908 START TEST default_locks_via_rpc 00:04:54.908 ************************************ 00:04:54.908 09:43:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:04:54.908 09:43:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=4077693 00:04:54.908 09:43:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 4077693 00:04:54.908 09:43:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:54.908 09:43:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 4077693 ']' 00:04:54.908 09:43:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.908 09:43:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:54.908 09:43:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
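[Annotation] The negative half of default_locks, resolved just above, leans on the NOT wrapper: after the target is killed, waitforlisten on the stale pid must fail, and NOT inverts that failure into a pass ("ERROR: process ... is no longer running" is therefore the expected output). A condensed sketch of the pattern, using the es bookkeeping visible in the trace (the real autotest_common.sh version also validates the wrapped command via valid_exec_arg):

    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && es=1  # normalize signal-death exit codes
        (( !es == 0 ))          # succeed only if the command failed
    }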
00:04:54.908 09:43:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:54.908 09:43:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.168 [2024-12-11 09:43:04.529583] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:04:55.168 [2024-12-11 09:43:04.529629] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4077693 ] 00:04:55.168 [2024-12-11 09:43:04.607674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.168 [2024-12-11 09:43:04.645074] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.427 09:43:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:55.427 09:43:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:55.427 09:43:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:55.427 09:43:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:55.427 09:43:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.427 09:43:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:55.427 09:43:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:55.427 09:43:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:55.427 09:43:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:55.427 09:43:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:55.427 09:43:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:55.427 09:43:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:55.427 09:43:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.427 09:43:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:55.427 09:43:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 4077693 00:04:55.427 09:43:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 4077693 00:04:55.427 09:43:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:55.686 09:43:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 4077693 00:04:55.686 09:43:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 4077693 ']' 00:04:55.686 09:43:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 4077693 00:04:55.686 09:43:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:04:55.686 09:43:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:55.686 09:43:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4077693 00:04:55.687 09:43:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:55.687 
09:43:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:55.687 09:43:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4077693' 00:04:55.687 killing process with pid 4077693 00:04:55.687 09:43:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 4077693 00:04:55.687 09:43:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 4077693 00:04:55.946 00:04:55.946 real 0m0.953s 00:04:55.946 user 0m0.893s 00:04:55.946 sys 0m0.457s 00:04:55.946 09:43:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.946 09:43:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.946 ************************************ 00:04:55.946 END TEST default_locks_via_rpc 00:04:55.946 ************************************ 00:04:55.946 09:43:05 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:55.946 09:43:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.946 09:43:05 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.946 09:43:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:55.946 ************************************ 00:04:55.946 START TEST non_locking_app_on_locked_coremask 00:04:55.946 ************************************ 00:04:55.946 09:43:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:04:55.946 09:43:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=4077748 00:04:55.946 09:43:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 4077748 /var/tmp/spdk.sock 00:04:55.946 09:43:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:55.946 09:43:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 4077748 ']' 00:04:55.946 09:43:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.946 09:43:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:55.946 09:43:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.946 09:43:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:55.946 09:43:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:56.205 [2024-12-11 09:43:05.549989] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
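[Annotation] default_locks_via_rpc, which finished above, exercises the same lock but toggles it at runtime: the target starts with its core lock held, framework_disable_cpumask_locks drops it over the RPC socket, framework_enable_cpumask_locks takes it back, and locks_exist then re-checks the flock. The two RPCs as driven in the trace (rpc_cmd is the suite's wrapper around scripts/rpc.py; the full workspace path is shortened here):

    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks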
00:04:56.205 [2024-12-11 09:43:05.550032] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4077748 ] 00:04:56.205 [2024-12-11 09:43:05.629241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.205 [2024-12-11 09:43:05.668036] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.464 09:43:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:56.464 09:43:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:56.464 09:43:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=4077949 00:04:56.464 09:43:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 4077949 /var/tmp/spdk2.sock 00:04:56.464 09:43:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:56.464 09:43:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 4077949 ']' 00:04:56.464 09:43:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:56.464 09:43:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:56.464 09:43:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:56.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:56.464 09:43:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:56.464 09:43:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:56.464 [2024-12-11 09:43:05.948531] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:04:56.464 [2024-12-11 09:43:05.948580] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4077949 ] 00:04:56.722 [2024-12-11 09:43:06.041276] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
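[Annotation] non_locking_app_on_locked_coremask runs two targets side by side: the first claims core 0 normally, the second shares the same mask but passes --disable-cpumask-locks and its own RPC socket, so both come up and their reactors coexist on core 0. The launch pattern, condensed from the trace:

    # First instance claims core 0 (flock on /var/tmp/spdk_cpu_lock_000).
    ./build/bin/spdk_tgt -m 0x1 &
    # Second instance shares core 0 but skips the lock and listens on a
    # separate RPC socket so the two can be driven independently.
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &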
00:04:56.722 [2024-12-11 09:43:06.041311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.722 [2024-12-11 09:43:06.120106] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.290 09:43:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:57.290 09:43:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:57.290 09:43:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 4077748 00:04:57.290 09:43:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:57.290 09:43:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4077748 00:04:57.859 lslocks: write error 00:04:57.859 09:43:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 4077748 00:04:57.859 09:43:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 4077748 ']' 00:04:57.859 09:43:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 4077748 00:04:57.859 09:43:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:57.859 09:43:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:57.859 09:43:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4077748 00:04:57.859 09:43:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:57.859 09:43:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:57.859 09:43:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4077748' 00:04:57.859 killing process with pid 4077748 00:04:57.859 09:43:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 4077748 00:04:57.859 09:43:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 4077748 00:04:58.428 09:43:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 4077949 00:04:58.428 09:43:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 4077949 ']' 00:04:58.428 09:43:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 4077949 00:04:58.428 09:43:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:58.428 09:43:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:58.428 09:43:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4077949 00:04:58.428 09:43:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:58.428 09:43:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:58.428 09:43:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4077949' 00:04:58.428 
killing process with pid 4077949 00:04:58.428 09:43:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 4077949 00:04:58.428 09:43:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 4077949 00:04:58.997 00:04:58.997 real 0m2.777s 00:04:58.997 user 0m2.927s 00:04:58.997 sys 0m0.923s 00:04:58.997 09:43:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.997 09:43:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:58.997 ************************************ 00:04:58.997 END TEST non_locking_app_on_locked_coremask 00:04:58.997 ************************************ 00:04:58.997 09:43:08 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:58.997 09:43:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.997 09:43:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.997 09:43:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:58.997 ************************************ 00:04:58.997 START TEST locking_app_on_unlocked_coremask 00:04:58.997 ************************************ 00:04:58.997 09:43:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:04:58.997 09:43:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=4078305 00:04:58.997 09:43:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 4078305 /var/tmp/spdk.sock 00:04:58.997 09:43:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:58.997 09:43:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 4078305 ']' 00:04:58.997 09:43:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.997 09:43:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:58.997 09:43:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.997 09:43:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:58.997 09:43:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:58.997 [2024-12-11 09:43:08.397309] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:04:58.997 [2024-12-11 09:43:08.397352] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4078305 ] 00:04:58.997 [2024-12-11 09:43:08.477077] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:58.997 [2024-12-11 09:43:08.477104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.997 [2024-12-11 09:43:08.517225] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.935 09:43:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:59.935 09:43:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:59.935 09:43:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=4078451 00:04:59.935 09:43:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 4078451 /var/tmp/spdk2.sock 00:04:59.935 09:43:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:59.935 09:43:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 4078451 ']' 00:04:59.935 09:43:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:59.935 09:43:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:59.935 09:43:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:59.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:59.935 09:43:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:59.935 09:43:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:59.935 [2024-12-11 09:43:09.272237] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
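[Annotation] locking_app_on_unlocked_coremask inverts that setup: here the first target declines the lock, so a second, locking target can claim the very same core. Condensed from the trace:

    # First instance runs on core 0 without claiming it.
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &
    # Second instance locks core 0 successfully, since the lock file was
    # never taken by the first one.
    ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &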
00:04:59.935 [2024-12-11 09:43:09.272285] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4078451 ] 00:04:59.935 [2024-12-11 09:43:09.365189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.935 [2024-12-11 09:43:09.439817] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.873 09:43:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:00.873 09:43:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:00.873 09:43:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 4078451 00:05:00.873 09:43:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:00.873 09:43:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4078451 00:05:01.132 lslocks: write error 00:05:01.132 09:43:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 4078305 00:05:01.132 09:43:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 4078305 ']' 00:05:01.132 09:43:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 4078305 00:05:01.132 09:43:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:01.132 09:43:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:01.132 09:43:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4078305 00:05:01.132 09:43:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:01.132 09:43:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:01.132 09:43:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4078305' 00:05:01.132 killing process with pid 4078305 00:05:01.132 09:43:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 4078305 00:05:01.132 09:43:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 4078305 00:05:01.729 09:43:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 4078451 00:05:01.729 09:43:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 4078451 ']' 00:05:01.729 09:43:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 4078451 00:05:01.729 09:43:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:01.729 09:43:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:01.729 09:43:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4078451 00:05:01.988 09:43:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:01.988 09:43:11 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:01.988 09:43:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4078451' 00:05:01.988 killing process with pid 4078451 00:05:01.988 09:43:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 4078451 00:05:01.988 09:43:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 4078451 00:05:02.247 00:05:02.247 real 0m3.262s 00:05:02.247 user 0m3.571s 00:05:02.247 sys 0m0.947s 00:05:02.247 09:43:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.247 09:43:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:02.247 ************************************ 00:05:02.247 END TEST locking_app_on_unlocked_coremask 00:05:02.247 ************************************ 00:05:02.247 09:43:11 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:02.247 09:43:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.247 09:43:11 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.247 09:43:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:02.247 ************************************ 00:05:02.247 START TEST locking_app_on_locked_coremask 00:05:02.248 ************************************ 00:05:02.248 09:43:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:02.248 09:43:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=4078939 00:05:02.248 09:43:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 4078939 /var/tmp/spdk.sock 00:05:02.248 09:43:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:02.248 09:43:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 4078939 ']' 00:05:02.248 09:43:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.248 09:43:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.248 09:43:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.248 09:43:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.248 09:43:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:02.248 [2024-12-11 09:43:11.729068] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
00:05:02.248 [2024-12-11 09:43:11.729112] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4078939 ] 00:05:02.248 [2024-12-11 09:43:11.808368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.507 [2024-12-11 09:43:11.847583] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.507 09:43:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.507 09:43:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:02.507 09:43:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=4078947 00:05:02.507 09:43:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 4078947 /var/tmp/spdk2.sock 00:05:02.507 09:43:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:02.507 09:43:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:02.507 09:43:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 4078947 /var/tmp/spdk2.sock 00:05:02.507 09:43:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:02.507 09:43:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:02.507 09:43:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:02.507 09:43:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:02.507 09:43:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 4078947 /var/tmp/spdk2.sock 00:05:02.507 09:43:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 4078947 ']' 00:05:02.507 09:43:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:02.507 09:43:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.507 09:43:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:02.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:02.507 09:43:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.507 09:43:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:02.766 [2024-12-11 09:43:12.111448] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
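[Annotation] In locking_app_on_locked_coremask the failure is the point: the first target holds the core 0 lock, the second launch must abort with "Cannot create lock on core 0, probably process ... has claimed it", and NOT turns that expected abort into a pass. As an assertion, the shape is simply:

    # The first instance still holds /var/tmp/spdk_cpu_lock_000, so this
    # second launch is required to fail; NOT inverts the exit status.
    NOT ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock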
00:05:02.766 [2024-12-11 09:43:12.111490] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4078947 ] 00:05:02.766 [2024-12-11 09:43:12.206071] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 4078939 has claimed it. 00:05:02.766 [2024-12-11 09:43:12.206108] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:03.334 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (4078947) - No such process 00:05:03.334 ERROR: process (pid: 4078947) is no longer running 00:05:03.334 09:43:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.334 09:43:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:03.334 09:43:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:03.334 09:43:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:03.334 09:43:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:03.334 09:43:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:03.334 09:43:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 4078939 00:05:03.334 09:43:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4078939 00:05:03.334 09:43:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:03.958 lslocks: write error 00:05:03.958 09:43:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 4078939 00:05:03.958 09:43:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 4078939 ']' 00:05:03.958 09:43:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 4078939 00:05:03.958 09:43:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:03.958 09:43:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:03.958 09:43:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4078939 00:05:03.958 09:43:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:03.958 09:43:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:03.958 09:43:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4078939' 00:05:03.958 killing process with pid 4078939 00:05:03.958 09:43:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 4078939 00:05:03.958 09:43:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 4078939 00:05:04.244 00:05:04.244 real 0m1.866s 00:05:04.244 user 0m2.001s 00:05:04.244 sys 0m0.645s 00:05:04.244 09:43:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:05:04.244 09:43:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:04.244 ************************************ 00:05:04.244 END TEST locking_app_on_locked_coremask 00:05:04.244 ************************************ 00:05:04.244 09:43:13 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:04.244 09:43:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:04.244 09:43:13 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.244 09:43:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:04.244 ************************************ 00:05:04.244 START TEST locking_overlapped_coremask 00:05:04.244 ************************************ 00:05:04.244 09:43:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:04.244 09:43:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=4079208 00:05:04.244 09:43:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 4079208 /var/tmp/spdk.sock 00:05:04.244 09:43:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:04.244 09:43:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 4079208 ']' 00:05:04.244 09:43:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.244 09:43:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:04.244 09:43:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:04.244 09:43:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:04.244 09:43:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:04.244 [2024-12-11 09:43:13.664698] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
00:05:04.244 [2024-12-11 09:43:13.664743] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4079208 ] 00:05:04.244 [2024-12-11 09:43:13.724577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:04.244 [2024-12-11 09:43:13.765838] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.244 [2024-12-11 09:43:13.765945] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.244 [2024-12-11 09:43:13.765946] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:04.553 09:43:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:04.553 09:43:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:04.553 09:43:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=4079383 00:05:04.553 09:43:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 4079383 /var/tmp/spdk2.sock 00:05:04.553 09:43:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:04.553 09:43:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:04.553 09:43:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 4079383 /var/tmp/spdk2.sock 00:05:04.553 09:43:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:04.553 09:43:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:04.553 09:43:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:04.553 09:43:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:04.553 09:43:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 4079383 /var/tmp/spdk2.sock 00:05:04.553 09:43:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 4079383 ']' 00:05:04.553 09:43:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:04.553 09:43:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:04.553 09:43:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:04.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:04.553 09:43:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:04.553 09:43:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:04.553 [2024-12-11 09:43:14.038125] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
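[Annotation] locking_overlapped_coremask moves from single cores to masks: the first target takes -m 0x7 (cores 0-2), the challenger takes -m 0x1c (cores 2-4), and the only contested core is 2, which is exactly where the "Cannot create lock on core 2" abort lands below. The intersection is quick to check:

    # 0x7  = 0b00111 -> cores 0,1,2
    # 0x1c = 0b11100 -> cores 2,3,4
    echo $(( 0x7 & 0x1c ))  # prints 4 = bit 2, i.e. only core 2 overlaps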
00:05:04.553 [2024-12-11 09:43:14.038178] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4079383 ] 00:05:04.837 [2024-12-11 09:43:14.140332] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 4079208 has claimed it. 00:05:04.837 [2024-12-11 09:43:14.140375] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:05.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (4079383) - No such process 00:05:05.404 ERROR: process (pid: 4079383) is no longer running 00:05:05.404 09:43:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:05.404 09:43:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:05.404 09:43:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:05.404 09:43:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:05.404 09:43:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:05.404 09:43:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:05.404 09:43:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:05.404 09:43:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:05.404 09:43:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:05.404 09:43:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:05.404 09:43:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 4079208 00:05:05.404 09:43:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 4079208 ']' 00:05:05.404 09:43:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 4079208 00:05:05.404 09:43:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:05.404 09:43:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:05.404 09:43:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4079208 00:05:05.404 09:43:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:05.404 09:43:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:05.404 09:43:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4079208' 00:05:05.404 killing process with pid 4079208 00:05:05.404 09:43:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 4079208 00:05:05.404 09:43:14 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 4079208 00:05:05.668 00:05:05.668 real 0m1.428s 00:05:05.668 user 0m3.980s 00:05:05.668 sys 0m0.399s 00:05:05.668 09:43:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.668 09:43:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:05.668 ************************************ 00:05:05.668 END TEST locking_overlapped_coremask 00:05:05.668 ************************************ 00:05:05.668 09:43:15 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:05.668 09:43:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.668 09:43:15 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.668 09:43:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:05.668 ************************************ 00:05:05.668 START TEST locking_overlapped_coremask_via_rpc 00:05:05.668 ************************************ 00:05:05.668 09:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:05.668 09:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=4079490 00:05:05.668 09:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 4079490 /var/tmp/spdk.sock 00:05:05.668 09:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:05.668 09:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 4079490 ']' 00:05:05.668 09:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.668 09:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:05.668 09:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.668 09:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:05.668 09:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.668 [2024-12-11 09:43:15.162508] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:05:05.668 [2024-12-11 09:43:15.162552] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4079490 ] 00:05:05.668 [2024-12-11 09:43:15.242348] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
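The locking_overlapped_coremask run that just ended demonstrates SPDK's per-core lock files: the first target, started with -m 0x7 (cores 0-2), created /var/tmp/spdk_cpu_lock_000 through _002, and check_remaining_locks confirmed exactly those three files; the second target, started with -m 0x1c (cores 2-4), overlapped on core 2 and aborted with "Cannot create lock on core 2". A minimal sketch of the same conflict, assuming a built spdk_tgt in the working directory and no other target running (paths and masks here are illustrative):

    # first target claims cores 0-2 (mask 0x7 = 0b00111)
    ./build/bin/spdk_tgt -m 0x7 &
    sleep 2
    ls /var/tmp/spdk_cpu_lock_*     # expect _000 _001 _002
    # second target asks for cores 2-4 (mask 0x1c = 0b11100); core 2 overlaps,
    # so it should exit with "Cannot create lock on core 2"
    ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock || echo "lock conflict, as expected"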
00:05:05.668 [2024-12-11 09:43:15.242378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:05.926 [2024-12-11 09:43:15.285517] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:05.926 [2024-12-11 09:43:15.285625] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.926 [2024-12-11 09:43:15.285626] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:05.926 09:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:05.926 09:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:05.926 09:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=4079695 00:05:05.926 09:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 4079695 /var/tmp/spdk2.sock 00:05:05.926 09:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:05.926 09:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 4079695 ']' 00:05:05.926 09:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:05.926 09:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:05.926 09:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:05.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:05.926 09:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:05.926 09:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.185 [2024-12-11 09:43:15.544909] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:05:06.185 [2024-12-11 09:43:15.544961] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4079695 ] 00:05:06.185 [2024-12-11 09:43:15.647078] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
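Both locking_overlapped_coremask_via_rpc targets are launched with --disable-cpumask-locks, which is why each logs "CPU core locks deactivated": with locking off, the overlapping masks (0x7 = cores 0-2 and 0x1c = cores 2-4 share core 2) can come up side by side, and no /var/tmp/spdk_cpu_lock_* files are created at startup. A hedged sketch of that co-existence, with illustrative paths:

    ./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
    ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
    sleep 2
    ls /var/tmp/spdk_cpu_lock_* 2>/dev/null || echo "no lock files while locks are disabled"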
00:05:06.185 [2024-12-11 09:43:15.647109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:06.185 [2024-12-11 09:43:15.735622] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:06.185 [2024-12-11 09:43:15.735742] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:06.185 [2024-12-11 09:43:15.735743] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:05:07.120 09:43:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.120 09:43:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:07.120 09:43:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:07.120 09:43:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.120 09:43:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.120 09:43:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.120 09:43:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:07.120 09:43:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:07.120 09:43:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:07.120 09:43:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:07.120 09:43:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:07.120 09:43:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:07.120 09:43:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:07.120 09:43:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:07.120 09:43:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.120 09:43:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.120 [2024-12-11 09:43:16.404288] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 4079490 has claimed it. 
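With both targets up, the test enables locks on the primary over /var/tmp/spdk.sock (the rpc_cmd above returns 0, claiming cores 0-2 at runtime) and then expects the same RPC to fail on the secondary: enabling its locks would require claiming core 2, which the primary now owns, hence the claim_cpu_cores error just logged and the -32603 JSON-RPC response that follows. A minimal sketch of the RPC pair, assuming the two sockets from this run:

    ./scripts/rpc.py framework_enable_cpumask_locks                         # primary: succeeds, creates the lock files
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # secondary: expected to fail with
                                                                            # "Failed to claim CPU core: 2" (-32603)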
00:05:07.120 request: 00:05:07.120 { 00:05:07.120 "method": "framework_enable_cpumask_locks", 00:05:07.120 "req_id": 1 00:05:07.120 } 00:05:07.120 Got JSON-RPC error response 00:05:07.120 response: 00:05:07.120 { 00:05:07.120 "code": -32603, 00:05:07.120 "message": "Failed to claim CPU core: 2" 00:05:07.120 } 00:05:07.120 09:43:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:07.120 09:43:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:07.120 09:43:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:07.120 09:43:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:07.120 09:43:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:07.120 09:43:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 4079490 /var/tmp/spdk.sock 00:05:07.120 09:43:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 4079490 ']' 00:05:07.120 09:43:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.120 09:43:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.120 09:43:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.120 09:43:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.120 09:43:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.120 09:43:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.120 09:43:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:07.120 09:43:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 4079695 /var/tmp/spdk2.sock 00:05:07.120 09:43:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 4079695 ']' 00:05:07.120 09:43:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:07.120 09:43:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.120 09:43:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:07.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:07.120 09:43:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.120 09:43:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.378 09:43:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.378 09:43:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:07.378 09:43:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:07.378 09:43:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:07.378 09:43:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:07.378 09:43:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:07.378 00:05:07.378 real 0m1.744s 00:05:07.378 user 0m0.859s 00:05:07.378 sys 0m0.134s 00:05:07.378 09:43:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.379 09:43:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.379 ************************************ 00:05:07.379 END TEST locking_overlapped_coremask_via_rpc 00:05:07.379 ************************************ 00:05:07.379 09:43:16 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:07.379 09:43:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 4079490 ]] 00:05:07.379 09:43:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 4079490 00:05:07.379 09:43:16 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 4079490 ']' 00:05:07.379 09:43:16 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 4079490 00:05:07.379 09:43:16 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:07.379 09:43:16 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:07.379 09:43:16 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4079490 00:05:07.379 09:43:16 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:07.379 09:43:16 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:07.379 09:43:16 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4079490' 00:05:07.379 killing process with pid 4079490 00:05:07.379 09:43:16 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 4079490 00:05:07.379 09:43:16 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 4079490 00:05:07.946 09:43:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 4079695 ]] 00:05:07.946 09:43:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 4079695 00:05:07.946 09:43:17 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 4079695 ']' 00:05:07.946 09:43:17 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 4079695 00:05:07.946 09:43:17 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:07.946 09:43:17 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:05:07.946 09:43:17 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4079695 00:05:07.946 09:43:17 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:07.946 09:43:17 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:07.946 09:43:17 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4079695' 00:05:07.946 killing process with pid 4079695 00:05:07.946 09:43:17 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 4079695 00:05:07.946 09:43:17 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 4079695 00:05:08.205 09:43:17 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:08.205 09:43:17 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:08.205 09:43:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 4079490 ]] 00:05:08.205 09:43:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 4079490 00:05:08.205 09:43:17 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 4079490 ']' 00:05:08.205 09:43:17 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 4079490 00:05:08.205 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (4079490) - No such process 00:05:08.205 09:43:17 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 4079490 is not found' 00:05:08.205 Process with pid 4079490 is not found 00:05:08.205 09:43:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 4079695 ]] 00:05:08.205 09:43:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 4079695 00:05:08.205 09:43:17 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 4079695 ']' 00:05:08.205 09:43:17 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 4079695 00:05:08.205 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (4079695) - No such process 00:05:08.205 09:43:17 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 4079695 is not found' 00:05:08.205 Process with pid 4079695 is not found 00:05:08.205 09:43:17 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:08.205 00:05:08.205 real 0m15.146s 00:05:08.205 user 0m25.894s 00:05:08.205 sys 0m5.097s 00:05:08.205 09:43:17 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.205 09:43:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:08.205 ************************************ 00:05:08.205 END TEST cpu_locks 00:05:08.205 ************************************ 00:05:08.205 00:05:08.205 real 0m40.225s 00:05:08.205 user 1m16.551s 00:05:08.205 sys 0m8.573s 00:05:08.205 09:43:17 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.205 09:43:17 event -- common/autotest_common.sh@10 -- # set +x 00:05:08.205 ************************************ 00:05:08.205 END TEST event 00:05:08.205 ************************************ 00:05:08.205 09:43:17 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:08.205 09:43:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.205 09:43:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.205 09:43:17 -- common/autotest_common.sh@10 -- # set +x 00:05:08.205 ************************************ 00:05:08.205 START TEST thread 00:05:08.205 ************************************ 00:05:08.205 09:43:17 thread -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:08.464 * Looking for test storage... 00:05:08.464 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:08.464 09:43:17 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:08.464 09:43:17 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:05:08.464 09:43:17 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:08.464 09:43:17 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:08.464 09:43:17 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:08.464 09:43:17 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:08.464 09:43:17 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:08.464 09:43:17 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.464 09:43:17 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:08.464 09:43:17 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:08.464 09:43:17 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:08.464 09:43:17 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:08.464 09:43:17 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:08.464 09:43:17 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:08.464 09:43:17 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:08.464 09:43:17 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:08.464 09:43:17 thread -- scripts/common.sh@345 -- # : 1 00:05:08.464 09:43:17 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:08.464 09:43:17 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:08.464 09:43:17 thread -- scripts/common.sh@365 -- # decimal 1 00:05:08.464 09:43:17 thread -- scripts/common.sh@353 -- # local d=1 00:05:08.464 09:43:17 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.464 09:43:17 thread -- scripts/common.sh@355 -- # echo 1 00:05:08.464 09:43:17 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:08.464 09:43:17 thread -- scripts/common.sh@366 -- # decimal 2 00:05:08.464 09:43:17 thread -- scripts/common.sh@353 -- # local d=2 00:05:08.464 09:43:17 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.464 09:43:17 thread -- scripts/common.sh@355 -- # echo 2 00:05:08.464 09:43:17 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:08.464 09:43:17 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:08.464 09:43:17 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:08.464 09:43:17 thread -- scripts/common.sh@368 -- # return 0 00:05:08.464 09:43:17 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.464 09:43:17 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:08.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.464 --rc genhtml_branch_coverage=1 00:05:08.464 --rc genhtml_function_coverage=1 00:05:08.464 --rc genhtml_legend=1 00:05:08.464 --rc geninfo_all_blocks=1 00:05:08.464 --rc geninfo_unexecuted_blocks=1 00:05:08.464 00:05:08.464 ' 00:05:08.464 09:43:17 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:08.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.464 --rc genhtml_branch_coverage=1 00:05:08.464 --rc genhtml_function_coverage=1 00:05:08.464 --rc genhtml_legend=1 00:05:08.464 --rc geninfo_all_blocks=1 00:05:08.464 --rc geninfo_unexecuted_blocks=1 00:05:08.464 
00:05:08.464 ' 00:05:08.464 09:43:17 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:08.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.464 --rc genhtml_branch_coverage=1 00:05:08.464 --rc genhtml_function_coverage=1 00:05:08.464 --rc genhtml_legend=1 00:05:08.464 --rc geninfo_all_blocks=1 00:05:08.464 --rc geninfo_unexecuted_blocks=1 00:05:08.464 00:05:08.464 ' 00:05:08.464 09:43:17 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:08.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.464 --rc genhtml_branch_coverage=1 00:05:08.464 --rc genhtml_function_coverage=1 00:05:08.464 --rc genhtml_legend=1 00:05:08.464 --rc geninfo_all_blocks=1 00:05:08.464 --rc geninfo_unexecuted_blocks=1 00:05:08.464 00:05:08.464 ' 00:05:08.464 09:43:17 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:08.464 09:43:17 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:08.464 09:43:17 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.464 09:43:17 thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.464 ************************************ 00:05:08.464 START TEST thread_poller_perf 00:05:08.464 ************************************ 00:05:08.464 09:43:17 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:08.464 [2024-12-11 09:43:17.947336] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:05:08.464 [2024-12-11 09:43:17.947397] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4080108 ] 00:05:08.464 [2024-12-11 09:43:18.031849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.723 [2024-12-11 09:43:18.070819] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.723 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:09.659 [2024-12-11T08:43:19.234Z] ====================================== 00:05:09.659 [2024-12-11T08:43:19.234Z] busy:2107889182 (cyc) 00:05:09.659 [2024-12-11T08:43:19.234Z] total_run_count: 423000 00:05:09.659 [2024-12-11T08:43:19.234Z] tsc_hz: 2100000000 (cyc) 00:05:09.659 [2024-12-11T08:43:19.234Z] ====================================== 00:05:09.659 [2024-12-11T08:43:19.234Z] poller_cost: 4983 (cyc), 2372 (nsec) 00:05:09.659 00:05:09.659 real 0m1.187s 00:05:09.659 user 0m1.104s 00:05:09.659 sys 0m0.079s 00:05:09.659 09:43:19 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.659 09:43:19 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:09.659 ************************************ 00:05:09.659 END TEST thread_poller_perf 00:05:09.659 ************************************ 00:05:09.659 09:43:19 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:09.659 09:43:19 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:09.659 09:43:19 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.659 09:43:19 thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.659 ************************************ 00:05:09.659 START TEST thread_poller_perf 00:05:09.659 ************************************ 00:05:09.659 09:43:19 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:09.659 [2024-12-11 09:43:19.201416] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:05:09.659 [2024-12-11 09:43:19.201480] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4080296 ] 00:05:09.918 [2024-12-11 09:43:19.286187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.918 [2024-12-11 09:43:19.324726] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.918 Running 1000 pollers for 1 seconds with 0 microseconds period. 
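The summary above is plain arithmetic over the run's counters: poller_cost in cycles is busy divided by total_run_count (2107889182 / 423000 ≈ 4983 cyc), and the nanosecond figure divides that by the TSC rate (4983 cyc at 2100000000 Hz ≈ 2372 nsec). The second run announced just above polls with a 0-microsecond period, i.e. pure busy polling, so its per-poll cost should come out far lower. The same computation as a standalone sketch, using the values reported above:

    awk 'BEGIN { busy=2107889182; runs=423000; hz=2100000000;
                 cyc=busy/runs; printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, cyc*1e9/hz }'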
00:05:10.852 [2024-12-11T08:43:20.427Z] ====================================== 00:05:10.852 [2024-12-11T08:43:20.427Z] busy:2101646158 (cyc) 00:05:10.852 [2024-12-11T08:43:20.427Z] total_run_count: 5111000 00:05:10.852 [2024-12-11T08:43:20.427Z] tsc_hz: 2100000000 (cyc) 00:05:10.852 [2024-12-11T08:43:20.427Z] ====================================== 00:05:10.852 [2024-12-11T08:43:20.427Z] poller_cost: 411 (cyc), 195 (nsec) 00:05:10.852 00:05:10.852 real 0m1.186s 00:05:10.852 user 0m1.099s 00:05:10.852 sys 0m0.082s 00:05:10.852 09:43:20 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.852 09:43:20 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:10.852 ************************************ 00:05:10.852 END TEST thread_poller_perf 00:05:10.852 ************************************ 00:05:10.852 09:43:20 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:10.852 00:05:10.852 real 0m2.685s 00:05:10.852 user 0m2.358s 00:05:10.852 sys 0m0.339s 00:05:10.852 09:43:20 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.852 09:43:20 thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.852 ************************************ 00:05:10.852 END TEST thread 00:05:10.852 ************************************ 00:05:11.111 09:43:20 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:11.111 09:43:20 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:11.111 09:43:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.111 09:43:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.111 09:43:20 -- common/autotest_common.sh@10 -- # set +x 00:05:11.111 ************************************ 00:05:11.111 START TEST app_cmdline 00:05:11.111 ************************************ 00:05:11.111 09:43:20 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:11.111 * Looking for test storage... 
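The "Looking for test storage..." probe just above and the lcov version check that follows open every run_test suite in this log: scripts/common.sh splits two dotted versions on '.' and '-' (IFS=.-) and compares them field by field (here "lt 1.15 2" is true, so the branch/function coverage flags get exported). A standalone sketch of that field-wise comparison under the same conventions, with illustrative inputs:

    ver1=1.15; ver2=2
    IFS=.- read -ra a <<< "$ver1"
    IFS=.- read -ra b <<< "$ver2"
    res=equal
    for ((v = 0; v < ${#a[@]} || v < ${#b[@]}; v++)); do
        if   (( ${a[v]:-0} < ${b[v]:-0} )); then res=older; break
        elif (( ${a[v]:-0} > ${b[v]:-0} )); then res=newer; break
        fi
    done
    echo "$ver1 is $res vs $ver2"   # here: older, so the coverage flags get set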
00:05:11.111 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:11.111 09:43:20 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:11.111 09:43:20 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:05:11.111 09:43:20 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:11.111 09:43:20 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:11.111 09:43:20 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:11.111 09:43:20 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:11.111 09:43:20 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:11.111 09:43:20 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:11.111 09:43:20 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:11.111 09:43:20 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:11.111 09:43:20 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:11.111 09:43:20 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:11.111 09:43:20 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:11.111 09:43:20 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:11.111 09:43:20 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:11.111 09:43:20 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:11.111 09:43:20 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:11.111 09:43:20 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:11.111 09:43:20 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:11.111 09:43:20 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:11.111 09:43:20 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:11.111 09:43:20 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:11.111 09:43:20 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:11.111 09:43:20 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:11.111 09:43:20 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:11.111 09:43:20 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:11.111 09:43:20 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:11.111 09:43:20 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:11.111 09:43:20 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:11.111 09:43:20 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:11.111 09:43:20 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:11.111 09:43:20 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:11.111 09:43:20 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:11.111 09:43:20 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:11.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.111 --rc genhtml_branch_coverage=1 00:05:11.111 --rc genhtml_function_coverage=1 00:05:11.111 --rc genhtml_legend=1 00:05:11.111 --rc geninfo_all_blocks=1 00:05:11.111 --rc geninfo_unexecuted_blocks=1 00:05:11.111 00:05:11.111 ' 00:05:11.111 09:43:20 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:11.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.111 --rc genhtml_branch_coverage=1 00:05:11.111 --rc genhtml_function_coverage=1 00:05:11.111 --rc genhtml_legend=1 00:05:11.111 --rc geninfo_all_blocks=1 00:05:11.111 --rc geninfo_unexecuted_blocks=1 
00:05:11.111 00:05:11.111 ' 00:05:11.111 09:43:20 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:11.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.111 --rc genhtml_branch_coverage=1 00:05:11.111 --rc genhtml_function_coverage=1 00:05:11.111 --rc genhtml_legend=1 00:05:11.111 --rc geninfo_all_blocks=1 00:05:11.111 --rc geninfo_unexecuted_blocks=1 00:05:11.111 00:05:11.111 ' 00:05:11.111 09:43:20 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:11.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.111 --rc genhtml_branch_coverage=1 00:05:11.111 --rc genhtml_function_coverage=1 00:05:11.111 --rc genhtml_legend=1 00:05:11.111 --rc geninfo_all_blocks=1 00:05:11.111 --rc geninfo_unexecuted_blocks=1 00:05:11.111 00:05:11.111 ' 00:05:11.111 09:43:20 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:11.111 09:43:20 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=4080624 00:05:11.111 09:43:20 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 4080624 00:05:11.111 09:43:20 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:11.111 09:43:20 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 4080624 ']' 00:05:11.111 09:43:20 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.111 09:43:20 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.111 09:43:20 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.111 09:43:20 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.111 09:43:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:11.369 [2024-12-11 09:43:20.706144] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
00:05:11.369 [2024-12-11 09:43:20.706190] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4080624 ] 00:05:11.369 [2024-12-11 09:43:20.784613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.369 [2024-12-11 09:43:20.825105] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.628 09:43:21 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.628 09:43:21 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:11.628 09:43:21 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:11.886 { 00:05:11.886 "version": "SPDK v25.01-pre git sha1 7e2e68263", 00:05:11.886 "fields": { 00:05:11.886 "major": 25, 00:05:11.886 "minor": 1, 00:05:11.886 "patch": 0, 00:05:11.886 "suffix": "-pre", 00:05:11.886 "commit": "7e2e68263" 00:05:11.886 } 00:05:11.886 } 00:05:11.886 09:43:21 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:11.886 09:43:21 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:11.886 09:43:21 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:11.886 09:43:21 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:11.886 09:43:21 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:11.886 09:43:21 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.886 09:43:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:11.886 09:43:21 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:11.886 09:43:21 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:11.886 09:43:21 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.886 09:43:21 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:11.886 09:43:21 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:11.886 09:43:21 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:11.886 09:43:21 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:11.886 09:43:21 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:11.886 09:43:21 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:11.886 09:43:21 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:11.886 09:43:21 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:11.886 09:43:21 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:11.887 09:43:21 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:11.887 09:43:21 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:11.887 09:43:21 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:11.887 09:43:21 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:11.887 09:43:21 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:12.145 request: 00:05:12.145 { 00:05:12.145 "method": "env_dpdk_get_mem_stats", 00:05:12.145 "req_id": 1 00:05:12.145 } 00:05:12.145 Got JSON-RPC error response 00:05:12.145 response: 00:05:12.145 { 00:05:12.145 "code": -32601, 00:05:12.145 "message": "Method not found" 00:05:12.145 } 00:05:12.145 09:43:21 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:12.145 09:43:21 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:12.146 09:43:21 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:12.146 09:43:21 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:12.146 09:43:21 app_cmdline -- app/cmdline.sh@1 -- # killprocess 4080624 00:05:12.146 09:43:21 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 4080624 ']' 00:05:12.146 09:43:21 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 4080624 00:05:12.146 09:43:21 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:12.146 09:43:21 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:12.146 09:43:21 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4080624 00:05:12.146 09:43:21 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:12.146 09:43:21 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:12.146 09:43:21 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4080624' 00:05:12.146 killing process with pid 4080624 00:05:12.146 09:43:21 app_cmdline -- common/autotest_common.sh@973 -- # kill 4080624 00:05:12.146 09:43:21 app_cmdline -- common/autotest_common.sh@978 -- # wait 4080624 00:05:12.405 00:05:12.405 real 0m1.350s 00:05:12.405 user 0m1.596s 00:05:12.405 sys 0m0.434s 00:05:12.405 09:43:21 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.405 09:43:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:12.405 ************************************ 00:05:12.405 END TEST app_cmdline 00:05:12.405 ************************************ 00:05:12.405 09:43:21 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:12.405 09:43:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.405 09:43:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.405 09:43:21 -- common/autotest_common.sh@10 -- # set +x 00:05:12.405 ************************************ 00:05:12.405 START TEST version 00:05:12.405 ************************************ 00:05:12.405 09:43:21 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:12.405 * Looking for test storage... 
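The app_cmdline suite above started spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so the RPC allowlist admits exactly those two methods: spdk_get_version returned the version object shown earlier, rpc_get_methods listed the two names, and env_dpdk_get_mem_stats, although a real method, was rejected with -32601 "Method not found". A minimal sketch of the same probe, assuming a target launched with that allowlist on the default /var/tmp/spdk.sock:

    ./scripts/rpc.py spdk_get_version        # allowed: prints the version object
    ./scripts/rpc.py rpc_get_methods         # allowed: exactly the two permitted methods
    ./scripts/rpc.py env_dpdk_get_mem_stats  # blocked by --rpcs-allowed: JSON-RPC -32601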
00:05:12.665 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:12.665 09:43:21 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:12.665 09:43:21 version -- common/autotest_common.sh@1711 -- # lcov --version 00:05:12.665 09:43:21 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:12.665 09:43:22 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:12.665 09:43:22 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.665 09:43:22 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.665 09:43:22 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.665 09:43:22 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.665 09:43:22 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.665 09:43:22 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.665 09:43:22 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.665 09:43:22 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.665 09:43:22 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.665 09:43:22 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.665 09:43:22 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.665 09:43:22 version -- scripts/common.sh@344 -- # case "$op" in 00:05:12.665 09:43:22 version -- scripts/common.sh@345 -- # : 1 00:05:12.665 09:43:22 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.665 09:43:22 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:12.665 09:43:22 version -- scripts/common.sh@365 -- # decimal 1 00:05:12.665 09:43:22 version -- scripts/common.sh@353 -- # local d=1 00:05:12.665 09:43:22 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.665 09:43:22 version -- scripts/common.sh@355 -- # echo 1 00:05:12.665 09:43:22 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.665 09:43:22 version -- scripts/common.sh@366 -- # decimal 2 00:05:12.665 09:43:22 version -- scripts/common.sh@353 -- # local d=2 00:05:12.665 09:43:22 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.665 09:43:22 version -- scripts/common.sh@355 -- # echo 2 00:05:12.665 09:43:22 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.665 09:43:22 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.665 09:43:22 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.665 09:43:22 version -- scripts/common.sh@368 -- # return 0 00:05:12.665 09:43:22 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.665 09:43:22 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:12.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.665 --rc genhtml_branch_coverage=1 00:05:12.665 --rc genhtml_function_coverage=1 00:05:12.665 --rc genhtml_legend=1 00:05:12.665 --rc geninfo_all_blocks=1 00:05:12.665 --rc geninfo_unexecuted_blocks=1 00:05:12.665 00:05:12.665 ' 00:05:12.665 09:43:22 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:12.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.665 --rc genhtml_branch_coverage=1 00:05:12.665 --rc genhtml_function_coverage=1 00:05:12.665 --rc genhtml_legend=1 00:05:12.665 --rc geninfo_all_blocks=1 00:05:12.665 --rc geninfo_unexecuted_blocks=1 00:05:12.665 00:05:12.665 ' 00:05:12.665 09:43:22 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:12.665 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.665 --rc genhtml_branch_coverage=1 00:05:12.665 --rc genhtml_function_coverage=1 00:05:12.665 --rc genhtml_legend=1 00:05:12.665 --rc geninfo_all_blocks=1 00:05:12.665 --rc geninfo_unexecuted_blocks=1 00:05:12.665 00:05:12.665 ' 00:05:12.665 09:43:22 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:12.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.665 --rc genhtml_branch_coverage=1 00:05:12.665 --rc genhtml_function_coverage=1 00:05:12.665 --rc genhtml_legend=1 00:05:12.665 --rc geninfo_all_blocks=1 00:05:12.665 --rc geninfo_unexecuted_blocks=1 00:05:12.665 00:05:12.665 ' 00:05:12.665 09:43:22 version -- app/version.sh@17 -- # get_header_version major 00:05:12.665 09:43:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:12.665 09:43:22 version -- app/version.sh@14 -- # cut -f2 00:05:12.665 09:43:22 version -- app/version.sh@14 -- # tr -d '"' 00:05:12.665 09:43:22 version -- app/version.sh@17 -- # major=25 00:05:12.665 09:43:22 version -- app/version.sh@18 -- # get_header_version minor 00:05:12.665 09:43:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:12.665 09:43:22 version -- app/version.sh@14 -- # cut -f2 00:05:12.665 09:43:22 version -- app/version.sh@14 -- # tr -d '"' 00:05:12.665 09:43:22 version -- app/version.sh@18 -- # minor=1 00:05:12.665 09:43:22 version -- app/version.sh@19 -- # get_header_version patch 00:05:12.665 09:43:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:12.665 09:43:22 version -- app/version.sh@14 -- # cut -f2 00:05:12.665 09:43:22 version -- app/version.sh@14 -- # tr -d '"' 00:05:12.665 09:43:22 version -- app/version.sh@19 -- # patch=0 00:05:12.665 09:43:22 version -- app/version.sh@20 -- # get_header_version suffix 00:05:12.665 09:43:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:12.665 09:43:22 version -- app/version.sh@14 -- # cut -f2 00:05:12.665 09:43:22 version -- app/version.sh@14 -- # tr -d '"' 00:05:12.665 09:43:22 version -- app/version.sh@20 -- # suffix=-pre 00:05:12.665 09:43:22 version -- app/version.sh@22 -- # version=25.1 00:05:12.665 09:43:22 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:12.665 09:43:22 version -- app/version.sh@28 -- # version=25.1rc0 00:05:12.665 09:43:22 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:12.665 09:43:22 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:12.665 09:43:22 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:12.665 09:43:22 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:12.665 00:05:12.665 real 0m0.248s 00:05:12.665 user 0m0.147s 00:05:12.665 sys 0m0.145s 00:05:12.665 09:43:22 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.665 
09:43:22 version -- common/autotest_common.sh@10 -- # set +x 00:05:12.665 ************************************ 00:05:12.665 END TEST version 00:05:12.665 ************************************ 00:05:12.665 09:43:22 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:12.665 09:43:22 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:12.665 09:43:22 -- spdk/autotest.sh@194 -- # uname -s 00:05:12.665 09:43:22 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:12.665 09:43:22 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:12.665 09:43:22 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:12.665 09:43:22 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:12.665 09:43:22 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:12.665 09:43:22 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:12.665 09:43:22 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:12.665 09:43:22 -- common/autotest_common.sh@10 -- # set +x 00:05:12.665 09:43:22 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:12.665 09:43:22 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:05:12.665 09:43:22 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:05:12.665 09:43:22 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:05:12.665 09:43:22 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:05:12.665 09:43:22 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:05:12.665 09:43:22 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:12.665 09:43:22 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:12.665 09:43:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.665 09:43:22 -- common/autotest_common.sh@10 -- # set +x 00:05:12.924 ************************************ 00:05:12.924 START TEST nvmf_tcp 00:05:12.924 ************************************ 00:05:12.924 09:43:22 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:12.924 * Looking for test storage... 
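The version suite that ended above assembles the version string the way version.sh does: each component is grepped out of include/spdk/version.h, cut to its value, and stripped of quotes; major and minor join as 25.1, the zero patch is dropped, and the -pre suffix becomes rc0, giving 25.1rc0, which is then checked against python's spdk.__version__. A hedged sketch of one such extraction, assuming an SPDK checkout at ./spdk (the real script reads the same header with the same grep/cut/tr pipeline):

    hdr=./spdk/include/spdk/version.h
    major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    echo "${major}.${minor}"   # expected here: 25.1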
00:05:12.924 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:12.924 09:43:22 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:12.924 09:43:22 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:12.924 09:43:22 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:12.924 09:43:22 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:12.924 09:43:22 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.924 09:43:22 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.924 09:43:22 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.924 09:43:22 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.924 09:43:22 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.924 09:43:22 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.924 09:43:22 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.924 09:43:22 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.924 09:43:22 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.924 09:43:22 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.925 09:43:22 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.925 09:43:22 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:12.925 09:43:22 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:12.925 09:43:22 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.925 09:43:22 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:12.925 09:43:22 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:12.925 09:43:22 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:12.925 09:43:22 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.925 09:43:22 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:12.925 09:43:22 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.925 09:43:22 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:12.925 09:43:22 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:12.925 09:43:22 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.925 09:43:22 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:12.925 09:43:22 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.925 09:43:22 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.925 09:43:22 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.925 09:43:22 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:12.925 09:43:22 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.925 09:43:22 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:12.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.925 --rc genhtml_branch_coverage=1 00:05:12.925 --rc genhtml_function_coverage=1 00:05:12.925 --rc genhtml_legend=1 00:05:12.925 --rc geninfo_all_blocks=1 00:05:12.925 --rc geninfo_unexecuted_blocks=1 00:05:12.925 00:05:12.925 ' 00:05:12.925 09:43:22 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:12.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.925 --rc genhtml_branch_coverage=1 00:05:12.925 --rc genhtml_function_coverage=1 00:05:12.925 --rc genhtml_legend=1 00:05:12.925 --rc geninfo_all_blocks=1 00:05:12.925 --rc geninfo_unexecuted_blocks=1 00:05:12.925 00:05:12.925 ' 00:05:12.925 09:43:22 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:05:12.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.925 --rc genhtml_branch_coverage=1 00:05:12.925 --rc genhtml_function_coverage=1 00:05:12.925 --rc genhtml_legend=1 00:05:12.925 --rc geninfo_all_blocks=1 00:05:12.925 --rc geninfo_unexecuted_blocks=1 00:05:12.925 00:05:12.925 ' 00:05:12.925 09:43:22 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:12.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.925 --rc genhtml_branch_coverage=1 00:05:12.925 --rc genhtml_function_coverage=1 00:05:12.925 --rc genhtml_legend=1 00:05:12.925 --rc geninfo_all_blocks=1 00:05:12.925 --rc geninfo_unexecuted_blocks=1 00:05:12.925 00:05:12.925 ' 00:05:12.925 09:43:22 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:12.925 09:43:22 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:12.925 09:43:22 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:12.925 09:43:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:12.925 09:43:22 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.925 09:43:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:12.925 ************************************ 00:05:12.925 START TEST nvmf_target_core 00:05:12.925 ************************************ 00:05:12.925 09:43:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:13.185 * Looking for test storage... 00:05:13.185 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:13.185 09:43:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:13.185 09:43:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:05:13.185 09:43:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:13.185 09:43:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:13.185 09:43:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:13.185 09:43:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:13.185 09:43:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:13.185 09:43:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:13.185 09:43:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:13.185 09:43:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:13.185 09:43:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:13.185 09:43:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:13.185 09:43:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:13.185 09:43:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:13.185 09:43:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:13.185 09:43:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:13.185 09:43:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:13.185 09:43:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:13.185 09:43:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:13.185 09:43:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:13.185 09:43:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:13.185 09:43:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.185 09:43:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:13.185 09:43:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:13.185 09:43:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:13.185 09:43:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:13.185 09:43:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.185 09:43:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:13.185 09:43:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:13.185 09:43:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:13.185 09:43:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:13.185 09:43:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:13.185 09:43:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.185 09:43:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:13.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.185 --rc genhtml_branch_coverage=1 00:05:13.185 --rc genhtml_function_coverage=1 00:05:13.185 --rc genhtml_legend=1 00:05:13.185 --rc geninfo_all_blocks=1 00:05:13.185 --rc geninfo_unexecuted_blocks=1 00:05:13.185 00:05:13.185 ' 00:05:13.185 09:43:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:13.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.185 --rc genhtml_branch_coverage=1 00:05:13.185 --rc genhtml_function_coverage=1 00:05:13.185 --rc genhtml_legend=1 00:05:13.185 --rc geninfo_all_blocks=1 00:05:13.185 --rc geninfo_unexecuted_blocks=1 00:05:13.185 00:05:13.185 ' 00:05:13.185 09:43:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:13.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.185 --rc genhtml_branch_coverage=1 00:05:13.185 --rc genhtml_function_coverage=1 00:05:13.185 --rc genhtml_legend=1 00:05:13.185 --rc geninfo_all_blocks=1 00:05:13.185 --rc geninfo_unexecuted_blocks=1 00:05:13.185 00:05:13.185 ' 00:05:13.185 09:43:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:13.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.185 --rc genhtml_branch_coverage=1 00:05:13.185 --rc genhtml_function_coverage=1 00:05:13.185 --rc genhtml_legend=1 00:05:13.185 --rc geninfo_all_blocks=1 00:05:13.185 --rc geninfo_unexecuted_blocks=1 00:05:13.185 00:05:13.185 ' 00:05:13.185 09:43:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:13.185 09:43:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:13.186 09:43:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:13.186 09:43:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:13.186 09:43:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:13.186 09:43:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:13.186 09:43:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:13.186 09:43:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:13.186 09:43:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:13.186 09:43:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:13.186 09:43:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:13.186 09:43:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:13.186 09:43:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:13.186 09:43:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:13.186 09:43:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:05:13.186 09:43:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:05:13.186 09:43:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:13.186 09:43:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:13.186 09:43:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:13.186 09:43:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:13.186 09:43:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:13.186 09:43:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:13.186 09:43:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:13.186 09:43:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:13.186 09:43:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:13.186 09:43:22 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.186 09:43:22 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.186 09:43:22 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.186 09:43:22 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:13.186 09:43:22 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.186 09:43:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:13.186 09:43:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:13.186 09:43:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:13.186 09:43:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:13.186 09:43:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:13.186 09:43:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:13.186 09:43:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:13.186 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:13.186 09:43:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:13.186 09:43:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:13.186 09:43:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:13.186 09:43:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:13.186 09:43:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:13.186 09:43:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:13.186 09:43:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:13.186 09:43:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:13.186 09:43:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.186 09:43:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:13.186 
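The "common.sh: line 33: [: : integer expression expected" message captured above is benign but worth decoding: the trace shows the script running '[' '' -eq 1 ']', and `[` refuses to compare an empty string with an integer operator. A minimal sketch of the failure mode, with a defensive rewrite (the variable name FLAG and the ${FLAG:-0} guard are illustrations, not the upstream code or fix):

    # With FLAG unset or empty, `[` has no integer to compare and warns:
    FLAG=""
    [ "$FLAG" -eq 1 ] && echo "flag set"      # -> [: : integer expression expected
    # Defaulting the value first keeps the numeric test well-defined:
    [ "${FLAG:-0}" -eq 1 ] && echo "flag set"

The warning does not abort the run: `[` simply returns a non-zero status, the && short-circuits, and the trace continues, which is exactly what the log shows.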
************************************ 00:05:13.186 START TEST nvmf_abort 00:05:13.186 ************************************ 00:05:13.186 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:13.446 * Looking for test storage... 00:05:13.446 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:13.446 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:13.446 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:05:13.446 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:13.446 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:13.446 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:13.446 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:13.446 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:13.446 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:13.446 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:13.446 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:13.446 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:13.446 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:13.446 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:13.446 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:13.446 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:13.446 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:13.446 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:13.446 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:13.446 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:13.446 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:13.446 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:13.446 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.446 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:13.446 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:13.446 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:13.446 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:13.446 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.446 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:13.446 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:13.446 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:13.446 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:13.446 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:13.446 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.446 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:13.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.446 --rc genhtml_branch_coverage=1 00:05:13.446 --rc genhtml_function_coverage=1 00:05:13.446 --rc genhtml_legend=1 00:05:13.446 --rc geninfo_all_blocks=1 00:05:13.446 --rc geninfo_unexecuted_blocks=1 00:05:13.446 00:05:13.446 ' 00:05:13.446 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:13.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.446 --rc genhtml_branch_coverage=1 00:05:13.446 --rc genhtml_function_coverage=1 00:05:13.446 --rc genhtml_legend=1 00:05:13.446 --rc geninfo_all_blocks=1 00:05:13.446 --rc geninfo_unexecuted_blocks=1 00:05:13.446 00:05:13.446 ' 00:05:13.446 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:13.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.446 --rc genhtml_branch_coverage=1 00:05:13.446 --rc genhtml_function_coverage=1 00:05:13.446 --rc genhtml_legend=1 00:05:13.446 --rc geninfo_all_blocks=1 00:05:13.446 --rc geninfo_unexecuted_blocks=1 00:05:13.446 00:05:13.446 ' 00:05:13.446 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:13.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.446 --rc genhtml_branch_coverage=1 00:05:13.446 --rc genhtml_function_coverage=1 00:05:13.446 --rc genhtml_legend=1 00:05:13.446 --rc geninfo_all_blocks=1 00:05:13.446 --rc geninfo_unexecuted_blocks=1 00:05:13.446 00:05:13.446 ' 00:05:13.446 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:13.446 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:13.446 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:05:13.446 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:13.446 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:13.446 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:13.446 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:13.447 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:13.447 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:13.447 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:13.447 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:13.447 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:13.447 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:05:13.447 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:05:13.447 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:13.447 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:13.447 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:13.447 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:13.447 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:13.447 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:13.447 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:13.447 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:13.447 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:13.447 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.447 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.447 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.447 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:13.447 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.447 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:13.447 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:13.447 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:13.447 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:13.447 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:13.447 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:13.447 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:13.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:13.447 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:13.447 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:13.447 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:13.447 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:13.447 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:13.447 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
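Each test body above is preceded by the same `lt 1.15 2` / `cmp_versions` dance: the harness pulls the installed lcov version with awk '{print $NF}' and, when it is older than 2, keeps the legacy '--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' option names. A standalone sketch of that comparison under the conventions visible in the scripts/common.sh trace (fields split on '.', '-' and ':', compared numerically; numeric fields only, and version_lt is a hypothetical name, not the harness function):

    version_lt() {
        local IFS=.-:
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # greater: not less-than
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # smaller: less-than
        done
        return 1                                        # equal: not less-than
    }

    if version_lt 1.15 2; then
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi

With the arguments from the trace, 1 < 2 decides the result in the first field and the function returns 0, matching the `return 0` recorded above.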
00:05:13.447 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:13.447 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:13.447 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:13.447 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:13.447 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:13.447 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:13.447 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:13.447 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:13.447 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:13.447 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:13.447 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:13.447 09:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:20.018 09:43:29 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:05:20.018 Found 0000:af:00.0 (0x8086 - 0x159b) 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:05:20.018 Found 0000:af:00.1 (0x8086 - 0x159b) 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:20.018 09:43:29 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:05:20.018 Found net devices under 0000:af:00.0: cvl_0_0 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:05:20.018 Found net devices under 0000:af:00.1: cvl_0_1 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:20.018 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:20.018 09:43:29 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:20.019 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:20.019 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:20.019 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:20.019 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:20.019 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:20.019 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:20.019 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:20.278 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:20.278 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:20.278 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:20.278 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:20.278 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:20.278 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:20.278 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:20.278 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:20.278 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:20.278 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:05:20.278 00:05:20.278 --- 10.0.0.2 ping statistics --- 00:05:20.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:20.278 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:05:20.278 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:20.278 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:20.278 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:05:20.278 00:05:20.278 --- 10.0.0.1 ping statistics --- 00:05:20.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:20.278 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:05:20.278 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:20.278 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:20.278 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:20.278 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:20.278 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:20.278 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:20.278 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:20.278 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:20.278 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:20.278 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:20.278 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:20.278 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:20.278 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:20.278 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=4084743 00:05:20.278 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 4084743 00:05:20.278 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:20.278 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 4084743 ']' 00:05:20.278 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.278 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:20.278 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.278 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:20.278 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:20.537 [2024-12-11 09:43:29.861842] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
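Before nvmf_tgt comes up, nvmftestinit builds the two-sided TCP topology traced above: the first E810 port (cvl_0_0) is moved into a private network namespace to act as the target, the second (cvl_0_1) stays in the host namespace as the initiator, and a firewall rule opens the NVMe/TCP port. A condensed replay of those steps (run as root; the -m comment tag the harness attaches to the real iptables rule is dropped here for brevity):

    ip netns add cvl_0_0_ns_spdk                       # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move port 0 into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator IP, host side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener
    ping -c 1 10.0.0.2                                 # host -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target namespace -> host

The two sub-millisecond pings recorded above (0.264 ms and 0.207 ms) confirm the cross-namespace path before the target process is launched under `ip netns exec cvl_0_0_ns_spdk`.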
00:05:20.537 [2024-12-11 09:43:29.861885] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:20.537 [2024-12-11 09:43:29.944107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:20.537 [2024-12-11 09:43:29.984233] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:20.537 [2024-12-11 09:43:29.984269] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:20.537 [2024-12-11 09:43:29.984276] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:20.537 [2024-12-11 09:43:29.984281] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:20.537 [2024-12-11 09:43:29.984286] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:20.537 [2024-12-11 09:43:29.985673] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:20.537 [2024-12-11 09:43:29.985777] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.537 [2024-12-11 09:43:29.985778] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:21.472 09:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:21.473 09:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:05:21.473 09:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:21.473 09:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:21.473 09:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:21.473 09:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:21.473 09:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:21.473 09:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.473 09:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:21.473 [2024-12-11 09:43:30.750992] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:21.473 09:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.473 09:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:21.473 09:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.473 09:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:21.473 Malloc0 00:05:21.473 09:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.473 09:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:21.473 09:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.473 09:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:21.473 Delay0 
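With the target listening for RPCs, abort.sh stacks a delay bdev on top of a RAM disk: Malloc0 is 64 MiB with a 4096-byte block size (MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE from the trace), and Delay0 wraps it with 1,000,000 us of injected latency on every latency knob, so submitted IOs stay in flight long enough to be aborted. A sketch of the same bring-up via scripts/rpc.py, on the assumption that rpc_cmd above is the harness wrapper forwarding these arguments to rpc.py inside the target namespace; the subsystem and listener setup follows in the trace below:

    # Transport first, then the backing bdev, then the delay wrapper.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    ./scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
    # One second (1,000,000 us) on each of the four latency knobs,
    # reads and writes alike, so in-flight IOs linger and can be aborted.
    ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000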
00:05:21.473 09:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.473 09:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:21.473 09:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.473 09:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:21.473 09:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.473 09:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:21.473 09:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.473 09:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:21.473 09:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.473 09:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:21.473 09:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.473 09:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:21.473 [2024-12-11 09:43:30.837730] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:21.473 09:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.473 09:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:21.473 09:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.473 09:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:21.473 09:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.473 09:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:21.473 [2024-12-11 09:43:30.971441] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:24.007 Initializing NVMe Controllers 00:05:24.007 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:24.007 controller IO queue size 128 less than required 00:05:24.007 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:24.007 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:24.007 Initialization complete. Launching workers. 
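The run itself is SPDK's abort example pointed at the new subsystem. Reading the flags from the trace (a hedged interpretation, since the example's help text is not part of this log): -r carries the transport ID of the target, -c 0x1 pins a single worker to core 0, -t 1 runs traffic for one second, -l warning sets the log level, and -q 128 requests queue depth 128, which the "controller IO queue size 128 less than required" notice above records as more than the controller's queue can fully absorb.

    # Invocation as traced above (path relative to the spdk checkout):
    ./build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128

The NS/CTRLR counters that follow summarize the outcome: IOs completed on the namespace versus aborts submitted, succeeded, and failed on the controller.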
00:05:24.007 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 37372 00:05:24.007 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37437, failed to submit 62 00:05:24.007 success 37376, unsuccessful 61, failed 0 00:05:24.007 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:24.007 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.007 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:24.007 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.008 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:24.008 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:24.008 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:24.008 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:24.008 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:24.008 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:24.008 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:24.008 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:24.008 rmmod nvme_tcp 00:05:24.008 rmmod nvme_fabrics 00:05:24.008 rmmod nvme_keyring 00:05:24.008 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:24.008 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:24.008 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:24.008 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 4084743 ']' 00:05:24.008 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 4084743 00:05:24.008 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 4084743 ']' 00:05:24.008 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 4084743 00:05:24.008 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:05:24.008 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:24.008 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4084743 00:05:24.008 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:24.008 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:24.008 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4084743' 00:05:24.008 killing process with pid 4084743 00:05:24.008 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 4084743 00:05:24.008 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 4084743 00:05:24.008 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:24.008 09:43:33 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:24.008 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:24.008 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:24.008 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:24.008 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:24.008 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:24.008 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:24.008 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:24.008 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:24.008 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:24.008 09:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:25.913 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:25.913 00:05:25.913 real 0m12.726s 00:05:25.913 user 0m14.007s 00:05:25.913 sys 0m6.101s 00:05:25.913 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.913 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:25.913 ************************************ 00:05:25.913 END TEST nvmf_abort 00:05:25.913 ************************************ 00:05:26.173 09:43:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:26.173 09:43:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:26.173 09:43:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.173 09:43:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:26.173 ************************************ 00:05:26.173 START TEST nvmf_ns_hotplug_stress 00:05:26.173 ************************************ 00:05:26.173 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:26.173 * Looking for test storage... 
00:05:26.173 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:26.173 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:26.173 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:05:26.173 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:26.173 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:26.173 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:26.173 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:26.173 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:26.173 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.173 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:26.173 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:26.173 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:26.173 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:26.173 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:26.173 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:26.173 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:26.173 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:26.173 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:26.173 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:26.173 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:26.173 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:26.173 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:26.173 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.173 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:26.173 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:26.173 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:26.173 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:26.173 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.173 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:26.173 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:26.173 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:26.173 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:26.173 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:26.173 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.173 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:26.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.173 --rc genhtml_branch_coverage=1 00:05:26.173 --rc genhtml_function_coverage=1 00:05:26.173 --rc genhtml_legend=1 00:05:26.173 --rc geninfo_all_blocks=1 00:05:26.173 --rc geninfo_unexecuted_blocks=1 00:05:26.173 00:05:26.173 ' 00:05:26.173 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:26.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.173 --rc genhtml_branch_coverage=1 00:05:26.173 --rc genhtml_function_coverage=1 00:05:26.173 --rc genhtml_legend=1 00:05:26.173 --rc geninfo_all_blocks=1 00:05:26.173 --rc geninfo_unexecuted_blocks=1 00:05:26.173 00:05:26.173 ' 00:05:26.173 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:26.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.173 --rc genhtml_branch_coverage=1 00:05:26.173 --rc genhtml_function_coverage=1 00:05:26.173 --rc genhtml_legend=1 00:05:26.173 --rc geninfo_all_blocks=1 00:05:26.173 --rc geninfo_unexecuted_blocks=1 00:05:26.173 00:05:26.173 ' 00:05:26.173 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:26.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.173 --rc genhtml_branch_coverage=1 00:05:26.173 --rc genhtml_function_coverage=1 00:05:26.173 --rc genhtml_legend=1 00:05:26.173 --rc geninfo_all_blocks=1 00:05:26.173 --rc geninfo_unexecuted_blocks=1 00:05:26.173 00:05:26.173 ' 00:05:26.173 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:26.173 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:26.173 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:26.174 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:26.174 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:26.174 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:26.174 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:26.174 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:26.174 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:26.174 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:26.174 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:26.174 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:26.174 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:05:26.174 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:05:26.174 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:26.174 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:26.174 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:26.174 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:26.174 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:26.174 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:26.174 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:26.174 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:26.174 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:26.174 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.174 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.174 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.174 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:26.174 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.174 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:26.174 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:26.174 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:26.174 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:26.174 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:26.174 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:26.174 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:26.174 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:26.174 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:26.174 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:26.174 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:26.174 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:26.174 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:26.174 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:26.174 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:26.174 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:26.174 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:26.174 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:26.174 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:26.174 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:26.174 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:26.174 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:26.174 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:26.174 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:26.174 09:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:05:32.745 Found 0000:af:00.0 (0x8086 - 0x159b) 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:32.745 
09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:05:32.745 Found 0000:af:00.1 (0x8086 - 0x159b) 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:05:32.745 Found net devices under 0000:af:00.0: cvl_0_0 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:32.745 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:32.746 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:32.746 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:05:32.746 Found net devices under 0000:af:00.1: cvl_0_1 00:05:32.746 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:32.746 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:32.746 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:32.746 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:32.746 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:32.746 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:32.746 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:32.746 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:32.746 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:32.746 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:32.746 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:32.746 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:32.746 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:32.746 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:32.746 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:32.746 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:32.746 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:32.746 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:32.746 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:32.746 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:32.746 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:33.005 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:33.005 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:33.005 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:33.005 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:33.005 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:33.005 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:33.005 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:33.005 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:33.005 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:33.005 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.347 ms 00:05:33.005 00:05:33.005 --- 10.0.0.2 ping statistics --- 00:05:33.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:33.005 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:05:33.005 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:33.005 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:33.005 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:05:33.005 00:05:33.005 --- 10.0.0.1 ping statistics --- 00:05:33.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:33.005 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:05:33.005 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:33.005 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:33.005 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:33.005 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:33.005 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:33.005 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:33.005 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:33.005 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:33.005 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:33.005 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:33.005 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:33.005 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:33.005 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:33.264 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=4089245 00:05:33.264 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 4089245 00:05:33.264 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:33.264 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
4089245 ']' 00:05:33.264 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.264 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:33.264 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.264 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:33.264 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:33.264 [2024-12-11 09:43:42.636583] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:05:33.264 [2024-12-11 09:43:42.636633] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:33.264 [2024-12-11 09:43:42.722281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:33.264 [2024-12-11 09:43:42.762394] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:33.264 [2024-12-11 09:43:42.762429] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:33.264 [2024-12-11 09:43:42.762435] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:33.264 [2024-12-11 09:43:42.762442] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:33.264 [2024-12-11 09:43:42.762447] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
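
At this point nvmf/common.sh has finished wiring the two e810 ports for a single-host NVMe/TCP run: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), an iptables rule admits traffic to port 4420, both directions are ping-verified, and nvmf_tgt is then launched inside the namespace while waitforlisten polls the RPC socket /var/tmp/spdk.sock. A condensed recap of that wiring, using the device names and addresses from this run (a sketch for reproducing by hand, not the verbatim helper):

  ip netns add cvl_0_0_ns_spdk                  # namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move one e810 port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator port stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP
  ping -c 1 10.0.0.2                            # root ns -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The earlier "line 33: [: : integer expression expected" complaint from nvmf/common.sh comes from testing an empty expansion with -eq ('[' '' -eq 1 ']'); giving the flag a default, e.g. [ "${SOME_FLAG:-0}" -eq 1 ] with SOME_FLAG standing in for whatever variable line 33 actually tests, would avoid the message without changing the branch taken.
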
00:05:33.264 [2024-12-11 09:43:42.763841] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:33.264 [2024-12-11 09:43:42.763946] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.264 [2024-12-11 09:43:42.763947] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:34.200 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.200 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:05:34.200 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:34.200 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:34.200 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:34.200 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:34.200 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:34.200 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:34.200 [2024-12-11 09:43:43.670868] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:34.200 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:34.459 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:34.718 [2024-12-11 09:43:44.060272] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:34.718 09:43:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:34.718 09:43:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:34.976 Malloc0 00:05:34.976 09:43:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:35.235 Delay0 00:05:35.235 09:43:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:35.494 09:43:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:35.494 NULL1 00:05:35.755 09:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:35.755 09:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=4089730 00:05:35.755 09:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:35.755 09:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4089730 00:05:35.755 09:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.015 09:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.273 09:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:36.273 09:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:36.532 true 00:05:36.532 09:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4089730 00:05:36.532 09:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.532 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.790 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:36.790 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:37.048 true 00:05:37.048 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4089730 00:05:37.048 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.423 Read completed with error (sct=0, sc=11) 00:05:38.423 09:43:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.423 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:38.423 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:38.423 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:38.423 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:38.423 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:38.423 09:43:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:38.423 09:43:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:38.682 true 00:05:38.682 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4089730 00:05:38.682 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.618 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:39.618 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.618 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:39.618 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:39.618 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:39.876 true 00:05:39.876 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4089730 00:05:39.876 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.134 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:40.134 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:40.134 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:40.393 true 00:05:40.393 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4089730 00:05:40.393 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.770 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.770 09:43:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.770 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.770 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.770 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.770 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.770 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.770 09:43:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:41.770 09:43:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:42.029 true 00:05:42.029 09:43:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4089730 00:05:42.029 09:43:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.965 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:42.965 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:42.965 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:43.223 true 00:05:43.223 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4089730 00:05:43.223 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.482 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.482 09:43:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:43.482 09:43:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:43.785 true 00:05:43.785 09:43:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4089730 00:05:43.785 09:43:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.771 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:45.029 09:43:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.029 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:45.030 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:45.030 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:45.030 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:45.030 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:45.030 09:43:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:45.030 09:43:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:45.288 true 00:05:45.288 09:43:54 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4089730 00:05:45.288 09:43:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.223 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:46.223 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:46.223 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:46.482 true 00:05:46.482 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4089730 00:05:46.482 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.741 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.000 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:47.000 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:47.000 true 00:05:47.000 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4089730 00:05:47.000 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.377 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:48.377 09:43:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.377 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:48.377 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:48.377 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:48.377 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:48.377 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:48.377 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:48.377 09:43:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:48.377 09:43:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:48.635 true 00:05:48.635 09:43:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4089730 00:05:48.636 09:43:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.571 09:43:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.571 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:49.571 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:49.830 true 00:05:49.830 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4089730 00:05:49.830 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.088 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.347 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:50.347 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:50.347 true 00:05:50.347 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4089730 00:05:50.347 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.723 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:51.723 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.723 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:51.723 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:51.724 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:51.724 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:51.724 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:51.724 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:51.724 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:51.982 true 00:05:51.982 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4089730 00:05:51.982 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.917 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:05:52.917 09:44:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.917 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:52.917 09:44:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:52.917 09:44:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:53.175 true 00:05:53.175 09:44:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4089730 00:05:53.175 09:44:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.434 09:44:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.434 09:44:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:53.434 09:44:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:53.692 true 00:05:53.692 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4089730 00:05:53.692 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.068 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.068 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:55.068 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:55.068 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:55.068 true 00:05:55.068 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4089730 00:05:55.068 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.327 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.585 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:55.585 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 
1019 00:05:55.843 true 00:05:55.843 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4089730 00:05:55.843 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.220 09:44:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.220 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:57.220 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:57.220 09:44:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:57.220 09:44:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:57.220 true 00:05:57.220 09:44:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4089730 00:05:57.220 09:44:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.478 09:44:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.737 09:44:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:57.737 09:44:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:57.995 true 00:05:57.995 09:44:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4089730 00:05:57.995 09:44:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.931 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:58.931 09:44:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:58.931 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:59.190 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:59.190 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:59.190 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:59.190 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:59.190 09:44:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:59.190 09:44:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:59.449 true 00:05:59.449 09:44:08 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4089730 00:05:59.449 09:44:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.384 09:44:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:00.384 09:44:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:00.384 09:44:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:00.643 true 00:06:00.643 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4089730 00:06:00.643 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.901 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:01.159 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:01.159 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:01.159 true 00:06:01.159 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4089730 00:06:01.159 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.536 09:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:02.536 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:02.536 09:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:02.536 09:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:02.536 true 00:06:02.795 09:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4089730 00:06:02.795 09:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.795 09:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:03.054 09:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1026 00:06:03.054 09:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:03.312 true 00:06:03.312 09:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4089730 00:06:03.312 09:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.249 09:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:04.507 09:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:04.507 09:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:04.507 true 00:06:04.766 09:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4089730 00:06:04.766 09:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.766 09:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:05.025 09:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:05.025 09:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:05.284 true 00:06:05.284 09:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4089730 00:06:05.284 09:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.220 Initializing NVMe Controllers 00:06:06.220 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:06.220 Controller IO queue size 128, less than required. 00:06:06.220 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:06.220 Controller IO queue size 128, less than required. 00:06:06.220 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:06.220 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:06.220 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:06:06.220 Initialization complete. Launching workers. 
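The sh@44-@50 entries repeating above are the first phase of the stress test: while the I/O generator (PID 4089730 in this run) is still alive, the script hot-removes namespace 1, re-attaches the Delay0 bdev, and grows the NULL1 bdev by one block each pass (null_size 1023, 1024, 1025, ... in the trace). A minimal bash reconstruction from the trace alone; rpc_py and perf_pid are assumed names, the actual script being test/nvmf/target/ns_hotplug_stress.sh in the SPDK tree:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # path as traced
    # null_size is initialised earlier in the script; the trace shows it counting up by one
    while kill -0 "$perf_pid"; do                                        # sh@44: loop while the I/O load runs
        "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # sh@45: hot-remove NSID 1
        "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # sh@46: re-attach the Delay0 bdev
        null_size=$((null_size + 1))                                       # sh@49
        "$rpc_py" bdev_null_resize NULL1 "$null_size"                      # sh@50: prints "true" on success
    done

The suppressed "Read completed with error (sct=0, sc=11)" messages above are the intended fallout of this loop: reads that race the namespace hot-remove fail fast instead of hanging, which is exactly what the test is exercising. The per-namespace summary the I/O generator prints on exit follows.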
00:06:06.220 ========================================================
00:06:06.220                                                                                 Latency(us)
00:06:06.220 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:06:06.220 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1545.13       0.75   52899.30    2751.91 1012277.11
00:06:06.220 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   16936.70       8.27    7535.54    2359.71  368596.37
00:06:06.220 ========================================================
00:06:06.220 Total                                                                  :   18481.83       9.02   11328.08    2359.71 1012277.11
00:06:06.220
00:06:06.220 09:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:06.479 09:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:06.479 09:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:06.738 true 00:06:06.738 09:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4089730 00:06:06.738 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (4089730) - No such process 00:06:06.738 09:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 4089730 00:06:06.738 09:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.996 09:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:06.996 09:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:06:06.996 09:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:06:06.996 09:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:06.996 09:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:06.996 09:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:07.254 null0 00:06:07.255 09:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:07.255 09:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:07.255 09:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:07.513 null1 00:06:07.513 09:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:07.513 09:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:07.513 09:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:07.772 null2 00:06:07.772 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:07.772 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:07.772 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:07.772 null3 00:06:07.772 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:07.772 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:07.772 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:08.030 null4 00:06:08.030 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:08.030 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:08.030 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:08.289 null5 00:06:08.289 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:08.289 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:08.289 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:08.547 null6 00:06:08.547 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:08.547 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:08.547 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:08.547 null7 00:06:08.547 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:08.547 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:08.547 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:08.547 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:08.547 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:08.547 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
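With the load generator gone (the "No such process" line above), the script removes both remaining namespaces and sets up the second phase: eight null bdevs, one per concurrent hotplug worker. The sh@58-@60 trace corresponds to a loop of roughly this shape, inferred from the (( i = 0 )) / (( i < nthreads )) / (( ++i )) arithmetic entries and reusing the rpc_py variable from the sketch above:

    nthreads=8                                   # sh@58
    pids=()                                      # sh@58: PIDs of the workers launched later
    for ((i = 0; i < nthreads; ++i)); do         # sh@59
        # sh@60: create null0..null7, 100 MB each with a 4096-byte block size;
        # rpc.py echoes the new bdev name (the bare "null0", "null1", ... lines)
        "$rpc_py" bdev_null_create "null$i" 100 4096
    done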
00:06:08.547 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:08.547 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:08.547 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:08.547 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.547 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:08.547 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
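The sh@14-@18 entries that begin here are the body of the add_remove helper, one instance per worker; its shape can be read straight off the trace ("add_remove 1 null0" followed by "local nsid=1 bdev=null0"). A sketch:

    add_remove() {
        local nsid=$1 bdev=$2                    # sh@14
        for ((i = 0; i < 10; ++i)); do           # sh@16: ten add/remove cycles per worker
            # sh@17: attach the bdev under a fixed namespace ID ...
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            # sh@18: ... then immediately detach it again
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }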
00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
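The sh@62-@64 entries interleaved through this stretch launch one backgrounded add_remove per null bdev, pairing namespace ID i+1 with null<i> (the trace shows "add_remove 4 null3", "add_remove 5 null4", and so on), and sh@66 then blocks on all eight workers, which is the "wait 4095265 4095267 ..." entry just below. A sketch of that launcher:

    for ((i = 0; i < nthreads; ++i)); do         # sh@62
        add_remove $((i + 1)) "null$i" &         # sh@63: e.g. "add_remove 1 null0"
        pids+=($!)                               # sh@64: collect the worker PID
    done
    wait "${pids[@]}"                            # sh@66: "wait 4095265 4095267 4095268 ..." in the trace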
00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 4095265 4095267 4095268 4095270 4095272 4095274 4095276 4095278 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:08.807 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:09.066 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.066 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.066 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:09.066 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.066 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.066 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:09.066 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.066 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.066 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:09.066 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.066 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.066 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:09.066 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.066 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.066 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:09.066 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.066 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.066 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:09.066 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:06:09.066 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.066 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:09.066 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.066 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.066 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:09.325 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:09.325 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:09.325 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.325 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:09.325 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:09.325 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:09.325 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:09.325 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:09.584 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.584 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.584 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:09.584 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.584 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.584 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.584 09:44:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.584 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:09.584 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:09.584 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.584 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.584 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:09.584 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.584 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.584 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:09.584 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.584 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.584 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:09.584 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.584 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.584 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:09.584 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.584 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.584 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:09.842 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:09.842 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:09.842 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.842 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:09.842 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:09.842 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:09.842 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:09.842 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:09.842 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.842 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.842 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:09.842 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.842 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.843 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:09.843 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.843 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.843 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:09.843 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.843 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.843 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:09.843 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.843 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.843 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:09.843 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.843 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.843 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:09.843 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.843 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.843 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:09.843 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.843 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.843 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:10.102 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:10.102 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:10.102 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:10.102 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.102 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:10.102 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:10.102 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:10.102 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:10.361 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.361 09:44:19 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.361 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:10.361 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.361 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.361 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:10.361 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.361 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.361 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:10.361 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.361 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.361 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:10.361 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.361 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.361 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.361 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:10.361 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.361 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:10.361 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.361 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.361 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:10.361 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.361 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.361 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:10.620 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:10.620 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.620 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:10.621 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:10.621 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:10.621 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:10.621 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:10.621 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:10.879 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.879 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.879 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:10.879 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.879 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.879 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:10.879 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.879 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.879 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:10.879 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.879 09:44:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.879 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.879 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:10.879 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.879 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:10.879 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.879 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.879 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.879 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:10.879 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.879 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:10.879 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.879 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.879 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:10.879 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:10.880 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:10.880 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:10.880 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:10.880 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:10.880 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.880 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:10.880 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:11.138 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.138 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.138 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:11.138 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.138 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.139 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:11.139 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.139 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.139 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:11.139 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.139 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.139 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:11.139 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.139 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.139 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:11.139 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.139 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.139 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:11.139 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:06:11.139 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.139 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:11.139 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.139 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.139 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:11.398 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:11.398 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:11.398 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:11.398 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:11.398 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.398 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:11.398 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:11.398 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:11.664 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.664 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.664 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:11.664 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.664 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.664 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:11.664 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.664 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.664 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:11.664 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.664 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.664 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:11.664 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.664 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.664 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:11.664 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.665 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.665 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:11.665 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.665 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.665 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.665 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:11.665 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.665 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:11.924 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:11.924 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:11.924 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:11.924 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:11.924 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:11.925 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.925 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:11.925 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:11.925 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.925 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.925 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:11.925 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.925 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.925 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:11.925 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.925 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.925 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:11.925 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.925 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.925 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:11.925 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.925 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.925 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.925 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:11.925 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.925 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:11.925 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.925 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.925 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:11.925 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.925 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.925 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:12.184 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:12.184 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:12.184 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:12.184 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:12.184 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.184 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:12.184 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:12.184 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:12.443 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
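The churn above and below is the core of the hot-plug stress: ns_hotplug_stress.sh lines 16-18 drive a ten-pass loop that attaches the eight null bdevs to nqn.2016-06.io.spdk:cnode1 as namespaces 1-8 and then detaches them, and the out-of-order nsids show the order is randomized per pass. A minimal sketch of that loop, assuming shuffled ordering and sequential rpc.py calls (the shipped script may parallelize these, which would explain the interleaved trace):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    for ((i = 0; i < 10; ++i)); do
        for n in $(shuf -e {1..8}); do
            # null0..null7 were created earlier in the test
            "$rpc_py" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
        done
        for n in $(shuf -e {1..8}); do
            "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$n"
        done
    done

The remaining passes of the same loop continue below.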
00:06:12.443 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.443 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:12.443 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.443 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.443 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:12.443 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.443 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.443 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:12.443 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.443 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.443 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:12.443 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.443 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.443 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:12.443 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.444 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.444 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:12.444 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.444 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.444 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:12.444 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.444 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.444 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:12.703 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:12.703 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:12.703 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:12.703 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:12.703 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.703 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:12.703 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:12.703 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:12.962 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.962 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.962 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.962 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.962 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.962 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.962 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.962 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.962 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.962 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.962 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.962 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.962 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.962 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.962 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.962 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.962 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:12.962 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:12.962 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:12.962 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:12.962 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:12.962 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:12.962 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:12.962 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:12.962 rmmod nvme_tcp 00:06:12.962 rmmod nvme_fabrics 00:06:12.962 rmmod nvme_keyring 00:06:12.962 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:12.962 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:12.962 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:12.962 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 4089245 ']' 00:06:12.962 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 4089245 00:06:12.962 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 4089245 ']' 00:06:12.962 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 4089245 00:06:12.962 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:06:12.962 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:12.962 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4089245 00:06:12.962 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:12.962 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:12.962 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4089245' 00:06:12.962 killing process with pid 4089245 00:06:12.962 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 4089245 00:06:12.962 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 4089245 00:06:13.221 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:13.221 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:13.221 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:13.222 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:13.222 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:06:13.222 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:06:13.222 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:13.222 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:13.222 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:13.222 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:13.222 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:13.222 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:15.126 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:15.126 00:06:15.126 real 0m49.166s 00:06:15.126 user 3m16.396s 00:06:15.126 sys 0m16.270s 00:06:15.126 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.126 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:15.126 ************************************ 00:06:15.126 END TEST nvmf_ns_hotplug_stress 00:06:15.126 ************************************ 00:06:15.385 09:44:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:15.385 09:44:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:15.385 09:44:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.385 09:44:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:15.385 ************************************ 00:06:15.385 START TEST nvmf_delete_subsystem 00:06:15.385 ************************************ 00:06:15.385 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:15.385 * Looking for test storage... 
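Between the END and START banners just above, nvmftestfini tore the previous target down in a fixed order: unload the host-side NVMe kernel modules, kill the nvmf_tgt reactor process, strip the firewall rules the suite added, and remove the test network namespace. The iptables step works because every rule the suite installs is tagged with an SPDK_NVMF comment, so cleanup is a filter-and-reload round trip. A sketch of the sequence as logged (the body of _remove_spdk_ns is not shown in this log; the netns delete is an assumption):

    sync
    modprobe -v -r nvme-tcp nvme-fabrics                   # host modules, as logged above
    kill "$nvmfpid" && wait "$nvmfpid"                     # nvmf_tgt (pid 4089245 above)
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the tagged rules
    ip netns delete cvl_0_0_ns_spdk                        # assumed content of _remove_spdk_ns
    ip -4 addr flush cvl_0_1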
00:06:15.385 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:15.385 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:15.385 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:06:15.385 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:15.385 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:15.385 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:15.385 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:15.385 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:15.385 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.385 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:15.385 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:15.385 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:15.385 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:15.385 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:15.385 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:15.386 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:15.386 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:15.386 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:15.386 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:15.386 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:15.386 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:15.386 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:15.386 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.386 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:15.386 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:15.386 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:15.386 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:15.386 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.386 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:15.386 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:15.386 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:15.386 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:15.386 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:15.386 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.386 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:15.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.386 --rc genhtml_branch_coverage=1 00:06:15.386 --rc genhtml_function_coverage=1 00:06:15.386 --rc genhtml_legend=1 00:06:15.386 --rc geninfo_all_blocks=1 00:06:15.386 --rc geninfo_unexecuted_blocks=1 00:06:15.386 00:06:15.386 ' 00:06:15.386 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:15.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.386 --rc genhtml_branch_coverage=1 00:06:15.386 --rc genhtml_function_coverage=1 00:06:15.386 --rc genhtml_legend=1 00:06:15.386 --rc geninfo_all_blocks=1 00:06:15.386 --rc geninfo_unexecuted_blocks=1 00:06:15.386 00:06:15.386 ' 00:06:15.386 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:15.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.386 --rc genhtml_branch_coverage=1 00:06:15.386 --rc genhtml_function_coverage=1 00:06:15.386 --rc genhtml_legend=1 00:06:15.386 --rc geninfo_all_blocks=1 00:06:15.386 --rc geninfo_unexecuted_blocks=1 00:06:15.386 00:06:15.386 ' 00:06:15.386 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:15.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.386 --rc genhtml_branch_coverage=1 00:06:15.386 --rc genhtml_function_coverage=1 00:06:15.386 --rc genhtml_legend=1 00:06:15.386 --rc geninfo_all_blocks=1 00:06:15.386 --rc geninfo_unexecuted_blocks=1 00:06:15.386 00:06:15.386 ' 00:06:15.386 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:15.386 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:15.386 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:15.386 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:15.386 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:15.386 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:15.386 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:15.386 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:15.386 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:15.386 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:15.386 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:15.386 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:15.651 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:06:15.651 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:06:15.651 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:15.651 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:15.651 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:15.651 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:15.651 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:15.651 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:15.651 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:15.651 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:15.651 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:15.651 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.651 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.651 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.651 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:15.651 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.651 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:15.651 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:15.651 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:15.651 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:15.651 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:15.651 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:15.651 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:15.651 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:15.651 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:15.651 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:15.651 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:15.651 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:15.651 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:15.651 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:15.651 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:15.651 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:15.651 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:15.651 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:15.651 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:15.651 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:15.651 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:15.651 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:15.651 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:15.651 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:22.226 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:22.226 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:22.226 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:22.226 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:22.226 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:22.226 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:22.226 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:22.226 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:22.226 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:22.226 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:22.226 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:22.226 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:22.226 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:06:22.226 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:22.226 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:22.226 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:22.226 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:22.226 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:22.226 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:22.226 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:22.226 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:22.226 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:22.226 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:22.226 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:22.226 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:22.226 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:22.226 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:22.226 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:22.226 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:22.226 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:22.226 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:22.226 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:22.227 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:22.227 
09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:22.227 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:22.227 Found net devices under 0000:af:00.0: cvl_0_0 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:22.227 Found net devices under 0000:af:00.1: cvl_0_1 
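With both E810 ports discovered (cvl_0_0 and cvl_0_1), the nvmf_tcp_init steps below split them across network namespaces so target and initiator exchange traffic over a real link on a single machine: the target port moves into cvl_0_0_ns_spdk and gets 10.0.0.2/24, the initiator port stays in the root namespace with 10.0.0.1/24, and one ping in each direction confirms the path before any NVMe traffic flows. Condensed from the commands logged below:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port enters the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    ping -c 1 10.0.0.2                                 # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> initiator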
00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:22.227 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:22.227 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:06:22.227 00:06:22.227 --- 10.0.0.2 ping statistics --- 00:06:22.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:22.227 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:22.227 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:22.227 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:06:22.227 00:06:22.227 --- 10.0.0.1 ping statistics --- 00:06:22.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:22.227 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:22.227 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:22.487 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:22.487 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=4100132 00:06:22.487 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 4100132 00:06:22.487 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:22.487 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 4100132 ']' 00:06:22.487 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.487 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.487 09:44:31 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.487 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.487 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:22.487 [2024-12-11 09:44:31.852971] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:06:22.487 [2024-12-11 09:44:31.853013] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:22.487 [2024-12-11 09:44:31.934326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:22.487 [2024-12-11 09:44:31.971960] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:22.487 [2024-12-11 09:44:31.971998] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:22.487 [2024-12-11 09:44:31.972005] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:22.487 [2024-12-11 09:44:31.972011] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:22.487 [2024-12-11 09:44:31.972016] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:22.487 [2024-12-11 09:44:31.973159] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.487 [2024-12-11 09:44:31.973160] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.746 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.746 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:06:22.746 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:22.746 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:22.746 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:22.746 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:22.746 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:22.746 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.746 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:22.746 [2024-12-11 09:44:32.117293] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:22.746 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.746 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:22.746 09:44:32 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.746 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:22.746 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.746 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:22.746 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.746 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:22.746 [2024-12-11 09:44:32.137508] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:22.746 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.746 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:22.746 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.746 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:22.746 NULL1 00:06:22.746 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.746 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:22.746 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.746 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:22.746 Delay0 00:06:22.746 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.746 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.746 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.746 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:22.746 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.746 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=4100154 00:06:22.746 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:22.746 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:22.746 [2024-12-11 09:44:32.248408] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
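Everything is now staged for the actual check: a five-second spdk_nvme_perf run (queue depth 128, 70/30 random read/write, 512-byte I/O on cores 2-3) is launched against the subsystem, and two seconds in, the subsystem is deleted out from under it. The flood of "completed with error (sct=0, sc=8)" and "starting I/O failed: -6" lines that follows is the expected outcome rather than a failure: in-flight commands are aborted and new submissions are rejected once nqn.2016-06.io.spdk:cnode1 disappears. The shape of the test, paraphrased from the delete_subsystem.sh line tags visible in this log (backgrounding of perf is an assumption; only the pid capture at line 28 is shown):

    perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$perf" -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
            -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &            # line 26
    perf_pid=$!                                                  # line 28
    sleep 2                                                      # line 30: let I/O ramp up
    "$rpc_py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # line 32: delete mid-flight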
00:06:24.650 09:44:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:24.650 09:44:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.650 09:44:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:24.910 Write completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 starting I/O failed: -6 00:06:24.910 Write completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Write completed with error (sct=0, sc=8) 00:06:24.910 starting I/O failed: -6 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Write completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Write completed with error (sct=0, sc=8) 00:06:24.910 starting I/O failed: -6 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 starting I/O failed: -6 00:06:24.910 Write completed with error (sct=0, sc=8) 00:06:24.910 Write completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Write completed with error (sct=0, sc=8) 00:06:24.910 starting I/O failed: -6 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 starting I/O failed: -6 00:06:24.910 Write completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Write completed with error (sct=0, sc=8) 00:06:24.910 starting I/O failed: -6 00:06:24.910 Write completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 starting I/O failed: -6 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Write completed with error (sct=0, sc=8) 00:06:24.910 starting I/O failed: -6 00:06:24.910 Write completed with error (sct=0, sc=8) 00:06:24.910 Write completed with error (sct=0, sc=8) 00:06:24.910 Write completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 starting I/O failed: -6 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 starting I/O failed: -6 00:06:24.910 Write completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Write completed with error (sct=0, sc=8) 00:06:24.910 starting I/O failed: -6 00:06:24.910 Write completed with error (sct=0, sc=8) 00:06:24.910 [2024-12-11 09:44:34.365943] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd67780 is same with 
the state(6) to be set 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Write completed with error (sct=0, sc=8) 00:06:24.910 Write completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Write completed with error (sct=0, sc=8) 00:06:24.910 Write completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Write completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Write completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Write completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Write completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Write completed with error (sct=0, sc=8) 00:06:24.910 Write completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Write completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Write completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Write completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Write completed with error (sct=0, sc=8) 00:06:24.910 Write completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Write completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Write completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Write completed with error (sct=0, sc=8) 00:06:24.910 Write completed with error (sct=0, sc=8) 00:06:24.910 starting I/O failed: -6 00:06:24.910 Write completed with error (sct=0, sc=8) 00:06:24.910 Read completed 
with error (sct=0, sc=8) 00:06:24.910 Write completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 starting I/O failed: -6 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 starting I/O failed: -6 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Write completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Write completed with error (sct=0, sc=8) 00:06:24.910 starting I/O failed: -6 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Write completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 starting I/O failed: -6 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Write completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Write completed with error (sct=0, sc=8) 00:06:24.910 starting I/O failed: -6 00:06:24.910 Write completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Write completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 starting I/O failed: -6 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Write completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 starting I/O failed: -6 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Write completed with error (sct=0, sc=8) 00:06:24.910 Read completed with error (sct=0, sc=8) 00:06:24.910 Write completed with error (sct=0, sc=8) 00:06:24.911 starting I/O failed: -6 00:06:24.911 Read completed with error (sct=0, sc=8) 00:06:24.911 Read completed with error (sct=0, sc=8) 00:06:24.911 Read completed with error (sct=0, sc=8) 00:06:24.911 starting I/O failed: -6 00:06:24.911 starting I/O failed: -6 00:06:24.911 starting I/O failed: -6 00:06:24.911 starting I/O failed: -6 00:06:24.911 starting I/O failed: -6 00:06:24.911 starting I/O failed: -6 00:06:24.911 starting I/O failed: -6 00:06:24.911 starting I/O failed: -6 00:06:24.911 Read completed with error (sct=0, sc=8) 00:06:24.911 Read completed with error (sct=0, sc=8) 00:06:24.911 starting I/O failed: -6 00:06:24.911 Read completed with error (sct=0, sc=8) 00:06:24.911 Read completed with error (sct=0, sc=8) 00:06:24.911 starting I/O failed: -6 00:06:24.911 Read completed with error (sct=0, sc=8) 00:06:24.911 Read completed with error (sct=0, sc=8) 00:06:24.911 starting I/O failed: -6 00:06:24.911 Write completed with error (sct=0, sc=8) 00:06:24.911 Read completed with error (sct=0, sc=8) 00:06:24.911 starting I/O failed: -6 00:06:24.911 Write completed with error (sct=0, sc=8) 00:06:24.911 Write completed with error (sct=0, sc=8) 00:06:24.911 starting I/O failed: -6 00:06:24.911 Read completed with error (sct=0, sc=8) 00:06:24.911 Read completed with error (sct=0, sc=8) 00:06:24.911 starting I/O failed: -6 00:06:24.911 Read completed with error (sct=0, sc=8) 00:06:24.911 Write completed with error (sct=0, sc=8) 00:06:24.911 starting I/O failed: -6 00:06:24.911 Write completed with error (sct=0, sc=8) 00:06:24.911 Read completed with error (sct=0, sc=8) 00:06:24.911 starting I/O failed: -6 00:06:24.911 Read completed 
with error (sct=0, sc=8) 00:06:24.911 Read completed with error (sct=0, sc=8) 00:06:24.911 starting I/O failed: -6 00:06:24.911 Write completed with error (sct=0, sc=8) 00:06:24.911 Read completed with error (sct=0, sc=8) 00:06:24.911 starting I/O failed: -6 00:06:24.911 Write completed with error (sct=0, sc=8) 00:06:24.911 Read completed with error (sct=0, sc=8) 00:06:24.911 starting I/O failed: -6 00:06:24.911 Read completed with error (sct=0, sc=8) 00:06:24.911 Write completed with error (sct=0, sc=8) 00:06:24.911 starting I/O failed: -6 00:06:24.911 Read completed with error (sct=0, sc=8) 00:06:24.911 Read completed with error (sct=0, sc=8) 00:06:24.911 starting I/O failed: -6 00:06:24.911 Write completed with error (sct=0, sc=8) 00:06:24.911 Read completed with error (sct=0, sc=8) 00:06:24.911 starting I/O failed: -6 00:06:24.911 Read completed with error (sct=0, sc=8) 00:06:24.911 Write completed with error (sct=0, sc=8) 00:06:24.911 starting I/O failed: -6 00:06:24.911 Read completed with error (sct=0, sc=8) 00:06:24.911 Write completed with error (sct=0, sc=8) 00:06:24.911 starting I/O failed: -6 00:06:24.911 Read completed with error (sct=0, sc=8) 00:06:24.911 Read completed with error (sct=0, sc=8) 00:06:24.911 starting I/O failed: -6 00:06:24.911 Write completed with error (sct=0, sc=8) 00:06:24.911 Read completed with error (sct=0, sc=8) 00:06:24.911 starting I/O failed: -6 00:06:24.911 Write completed with error (sct=0, sc=8) 00:06:24.911 Write completed with error (sct=0, sc=8) 00:06:24.911 starting I/O failed: -6 00:06:24.911 Read completed with error (sct=0, sc=8) 00:06:24.911 Read completed with error (sct=0, sc=8) 00:06:24.911 starting I/O failed: -6 00:06:24.911 Write completed with error (sct=0, sc=8) 00:06:24.911 Read completed with error (sct=0, sc=8) 00:06:24.911 starting I/O failed: -6 00:06:24.911 Read completed with error (sct=0, sc=8) 00:06:24.911 Read completed with error (sct=0, sc=8) 00:06:24.911 starting I/O failed: -6 00:06:24.911 Read completed with error (sct=0, sc=8) 00:06:24.911 Write completed with error (sct=0, sc=8) 00:06:24.911 starting I/O failed: -6 00:06:24.911 Read completed with error (sct=0, sc=8) 00:06:24.911 Write completed with error (sct=0, sc=8) 00:06:24.911 starting I/O failed: -6 00:06:24.911 Write completed with error (sct=0, sc=8) 00:06:24.911 Read completed with error (sct=0, sc=8) 00:06:24.911 starting I/O failed: -6 00:06:24.911 Write completed with error (sct=0, sc=8) 00:06:24.911 Write completed with error (sct=0, sc=8) 00:06:24.911 starting I/O failed: -6 00:06:24.911 Read completed with error (sct=0, sc=8) 00:06:24.911 Read completed with error (sct=0, sc=8) 00:06:24.911 starting I/O failed: -6 00:06:24.911 Read completed with error (sct=0, sc=8) 00:06:24.911 Read completed with error (sct=0, sc=8) 00:06:24.911 starting I/O failed: -6 00:06:24.911 Read completed with error (sct=0, sc=8) 00:06:24.911 [2024-12-11 09:44:34.368795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f855c000c80 is same with the state(6) to be set 00:06:25.848 [2024-12-11 09:44:35.344110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd689b0 is same with the state(6) to be set 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, 
sc=8) 00:06:25.848 Write completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Write completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Write completed with error (sct=0, sc=8) 00:06:25.848 Write completed with error (sct=0, sc=8) 00:06:25.848 Write completed with error (sct=0, sc=8) 00:06:25.848 Write completed with error (sct=0, sc=8) 00:06:25.848 Write completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Write completed with error (sct=0, sc=8) 00:06:25.848 Write completed with error (sct=0, sc=8) 00:06:25.848 Write completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Write completed with error (sct=0, sc=8) 00:06:25.848 [2024-12-11 09:44:35.369316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd672c0 is same with the state(6) to be set 00:06:25.848 Write completed with error (sct=0, sc=8) 00:06:25.848 Write completed with error (sct=0, sc=8) 00:06:25.848 Write completed with error (sct=0, sc=8) 00:06:25.848 Write completed with error (sct=0, sc=8) 00:06:25.848 Write completed with error (sct=0, sc=8) 00:06:25.848 Write completed with error (sct=0, sc=8) 00:06:25.848 Write completed with error (sct=0, sc=8) 00:06:25.848 Write completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Write completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Write completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Write completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 [2024-12-11 09:44:35.369802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd67b40 is same with the state(6) to be set 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Write completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Write completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Write completed with error (sct=0, sc=8) 00:06:25.848 Write completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 
00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Write completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Write completed with error (sct=0, sc=8) 00:06:25.848 Write completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Write completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Write completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Write completed with error (sct=0, sc=8) 00:06:25.848 Write completed with error (sct=0, sc=8) 00:06:25.848 Write completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Write completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Write completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 [2024-12-11 09:44:35.370409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f855c00d6c0 is same with the state(6) to be set 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Write completed with error (sct=0, sc=8) 00:06:25.848 Write completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Write completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Write completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed 
with error (sct=0, sc=8) 00:06:25.848 Write completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Write completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Write completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 Read completed with error (sct=0, sc=8) 00:06:25.848 [2024-12-11 09:44:35.371067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f855c00d060 is same with the state(6) to be set 00:06:25.848 Initializing NVMe Controllers 00:06:25.848 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:25.848 Controller IO queue size 128, less than required. 00:06:25.848 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:25.848 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:25.848 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:25.848 Initialization complete. Launching workers. 00:06:25.848 ======================================================== 00:06:25.848 Latency(us) 00:06:25.848 Device Information : IOPS MiB/s Average min max 00:06:25.848 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 176.53 0.09 879740.17 306.04 1008554.79 00:06:25.848 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 181.50 0.09 932604.75 331.91 2000782.42 00:06:25.848 ======================================================== 00:06:25.848 Total : 358.03 0.17 906539.57 306.04 2000782.42 00:06:25.848 00:06:25.848 [2024-12-11 09:44:35.371602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd689b0 (9): Bad file descriptor 00:06:25.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:25.848 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.848 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:25.848 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4100154 00:06:25.848 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:26.416 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:26.416 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4100154 00:06:26.417 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (4100154) - No such process 00:06:26.417 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 4100154 00:06:26.417 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:06:26.417 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 4100154 00:06:26.417 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:06:26.417 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.417 
09:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:06:26.417 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.417 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 4100154 00:06:26.417 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:06:26.417 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:26.417 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:26.417 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:26.417 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:26.417 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.417 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:26.417 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.417 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:26.417 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.417 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:26.417 [2024-12-11 09:44:35.901809] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:26.417 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.417 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.417 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.417 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:26.417 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.417 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=4100839 00:06:26.417 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:26.417 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:26.417 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4100839 00:06:26.417 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:26.417 [2024-12-11 09:44:35.990492] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on 
TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:06:26.984 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:26.985 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4100839 00:06:26.985 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:27.552 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:27.552 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4100839 00:06:27.552 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:28.119 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:28.119 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4100839 00:06:28.119 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:28.378 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:28.378 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4100839 00:06:28.378 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:28.946 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:28.946 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4100839 00:06:28.946 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:29.514 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:29.514 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4100839 00:06:29.514 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:29.514 Initializing NVMe Controllers 00:06:29.514 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:29.514 Controller IO queue size 128, less than required. 00:06:29.514 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:29.514 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:29.514 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:29.514 Initialization complete. Launching workers. 
00:06:29.514 ======================================================== 00:06:29.514 Latency(us) 00:06:29.514 Device Information : IOPS MiB/s Average min max 00:06:29.514 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002974.20 1000149.96 1041886.38 00:06:29.514 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003813.07 1000159.41 1011582.05 00:06:29.514 ======================================================== 00:06:29.514 Total : 256.00 0.12 1003393.64 1000149.96 1041886.38 00:06:29.514 00:06:30.082 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:30.082 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4100839 00:06:30.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (4100839) - No such process 00:06:30.082 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 4100839 00:06:30.082 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:30.082 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:30.082 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:30.082 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:30.082 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:30.082 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:30.082 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:30.082 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:30.082 rmmod nvme_tcp 00:06:30.082 rmmod nvme_fabrics 00:06:30.082 rmmod nvme_keyring 00:06:30.082 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:30.082 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:30.082 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:30.082 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 4100132 ']' 00:06:30.082 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 4100132 00:06:30.082 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 4100132 ']' 00:06:30.082 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 4100132 00:06:30.082 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:06:30.082 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:30.082 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4100132 00:06:30.082 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:30.082 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:06:30.082 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4100132' 00:06:30.082 killing process with pid 4100132 00:06:30.082 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 4100132 00:06:30.082 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 4100132 00:06:30.342 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:30.342 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:30.342 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:30.342 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:30.342 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:30.342 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:30.342 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:30.342 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:30.342 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:30.342 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:30.342 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:30.342 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:32.323 09:44:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:32.323 00:06:32.323 real 0m17.034s 00:06:32.323 user 0m29.447s 00:06:32.323 sys 0m6.052s 00:06:32.323 09:44:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:32.323 09:44:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:32.323 ************************************ 00:06:32.323 END TEST nvmf_delete_subsystem 00:06:32.323 ************************************ 00:06:32.323 09:44:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:32.323 09:44:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:32.323 09:44:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.323 09:44:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:32.323 ************************************ 00:06:32.323 START TEST nvmf_host_management 00:06:32.323 ************************************ 00:06:32.323 09:44:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:32.633 * Looking for test storage... 
00:06:32.633 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:32.633 09:44:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:32.633 09:44:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:06:32.633 09:44:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:32.633 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:32.633 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:32.633 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:32.633 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:32.633 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:32.633 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:32.633 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:32.633 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:32.633 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:32.633 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:32.633 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:32.633 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:32.633 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:32.633 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:32.633 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:32.633 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:32.633 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:32.633 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:32.633 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:32.633 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:32.633 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:32.633 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:32.633 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:32.633 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:32.633 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:32.633 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:32.633 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:32.633 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:32.633 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:32.633 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:32.633 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:32.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.633 --rc genhtml_branch_coverage=1 00:06:32.633 --rc genhtml_function_coverage=1 00:06:32.633 --rc genhtml_legend=1 00:06:32.633 --rc geninfo_all_blocks=1 00:06:32.633 --rc geninfo_unexecuted_blocks=1 00:06:32.633 00:06:32.633 ' 00:06:32.633 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:32.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.633 --rc genhtml_branch_coverage=1 00:06:32.633 --rc genhtml_function_coverage=1 00:06:32.633 --rc genhtml_legend=1 00:06:32.634 --rc geninfo_all_blocks=1 00:06:32.634 --rc geninfo_unexecuted_blocks=1 00:06:32.634 00:06:32.634 ' 00:06:32.634 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:32.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.634 --rc genhtml_branch_coverage=1 00:06:32.634 --rc genhtml_function_coverage=1 00:06:32.634 --rc genhtml_legend=1 00:06:32.634 --rc geninfo_all_blocks=1 00:06:32.634 --rc geninfo_unexecuted_blocks=1 00:06:32.634 00:06:32.634 ' 00:06:32.634 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:32.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.634 --rc genhtml_branch_coverage=1 00:06:32.634 --rc genhtml_function_coverage=1 00:06:32.634 --rc genhtml_legend=1 00:06:32.634 --rc geninfo_all_blocks=1 00:06:32.634 --rc geninfo_unexecuted_blocks=1 00:06:32.634 00:06:32.634 ' 00:06:32.634 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:32.634 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:32.634 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:32.634 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:32.634 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:32.634 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:32.634 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:32.634 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:32.634 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:32.634 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:32.634 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:32.634 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:32.634 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:06:32.634 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:06:32.634 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:32.634 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:32.634 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:32.634 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:32.634 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:32.634 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:32.634 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:32.634 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:32.634 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:32.634 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.634 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.634 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.634 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:32.634 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.634 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:32.634 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:32.634 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:32.634 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:32.634 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:32.634 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:32.634 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:06:32.634 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:32.634 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:32.634 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:32.634 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:32.634 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:32.634 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:32.634 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:32.634 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:32.634 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:32.634 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:32.634 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:32.634 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:32.634 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:32.634 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:32.634 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:32.634 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:32.634 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:32.634 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:32.634 09:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:39.212 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:39.212 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:39.212 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:39.212 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:39.212 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:39.212 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:39.212 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:39.212 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:39.212 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:39.212 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:39.212 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:06:39.212 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:39.212 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:39.212 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:39.212 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:39.212 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:39.212 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:39.212 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:39.212 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:39.212 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:39.212 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:39.212 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:39.212 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:39.212 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:39.212 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:39.212 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:39.212 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:39.212 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:39.212 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:39.212 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:39.212 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:39.212 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:39.212 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:39.212 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:39.212 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:39.212 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:39.212 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:39.212 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:39.212 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:39.212 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:39.212 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:39.212 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:39.212 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:39.212 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:39.212 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:39.212 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:39.212 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:39.212 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:39.212 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:39.212 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:39.212 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:39.212 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:39.212 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:39.212 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:39.212 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:39.212 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:39.212 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:39.212 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:39.212 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:39.213 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:39.213 Found net devices under 0000:af:00.0: cvl_0_0 00:06:39.213 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:39.213 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:39.213 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:39.213 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:39.213 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:39.213 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:39.213 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:39.213 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:39.213 09:44:48 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:39.213 Found net devices under 0000:af:00.1: cvl_0_1 00:06:39.213 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:39.213 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:39.213 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:39.213 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:39.213 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:39.213 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:39.213 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:39.213 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:39.213 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:39.213 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:39.213 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:39.213 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:39.213 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:39.213 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:39.213 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:39.213 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:39.213 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:39.213 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:39.213 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:39.213 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:39.213 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:39.213 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:39.213 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:39.213 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:39.213 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:39.471 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:39.471 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:39.471 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:39.471 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:39.471 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:39.471 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.314 ms 00:06:39.471 00:06:39.471 --- 10.0.0.2 ping statistics --- 00:06:39.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:39.471 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:06:39.471 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:39.471 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:39.471 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:06:39.471 00:06:39.471 --- 10.0.0.1 ping statistics --- 00:06:39.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:39.471 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:06:39.471 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:39.471 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:39.471 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:39.471 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:39.471 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:39.471 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:39.471 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:39.471 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:39.471 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:39.472 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:39.472 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:39.472 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:39.472 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:39.472 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:39.472 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:39.472 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=4105321 00:06:39.472 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 4105321 00:06:39.472 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:39.472 09:44:48 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 4105321 ']' 00:06:39.472 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.472 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:39.472 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.472 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:39.472 09:44:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:39.472 [2024-12-11 09:44:49.042885] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:06:39.472 [2024-12-11 09:44:49.042936] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:39.730 [2024-12-11 09:44:49.128128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:39.730 [2024-12-11 09:44:49.170149] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:39.730 [2024-12-11 09:44:49.170183] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:39.730 [2024-12-11 09:44:49.170190] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:39.730 [2024-12-11 09:44:49.170196] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:39.730 [2024-12-11 09:44:49.170201] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
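The nvmf_tcp_init trace above splits the two-port E810 NIC into a target/initiator pair on a single host: one port (cvl_0_0) is moved into a private network namespace for the target, while the other (cvl_0_1) stays in the default namespace for the initiator. A minimal standalone sketch of the same setup, using only the interface names, addresses, and firewall rule shown in the trace:

# move the target port into its own namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# address both ends of the 10.0.0.0/24 link and bring them up
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# open the NVMe/TCP port and verify reachability in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Because the target lives in the namespace, nvmf_tgt itself is launched through ip netns exec cvl_0_0_ns_spdk (the NVMF_TARGET_NS_CMD prefix above), so its port-4420 listener binds on cvl_0_0 while bdevperf later connects from the default namespace over cvl_0_1.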
00:06:39.730 [2024-12-11 09:44:49.171729] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.730 [2024-12-11 09:44:49.171818] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:39.730 [2024-12-11 09:44:49.171925] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.730 [2024-12-11 09:44:49.171927] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:06:39.730 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.730 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:39.730 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:39.730 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:39.730 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:39.990 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:39.990 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:39.990 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.990 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:39.990 [2024-12-11 09:44:49.320737] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:39.990 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.990 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:39.990 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:39.990 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:39.990 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:39.990 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:39.990 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:39.990 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.990 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:39.990 Malloc0 00:06:39.990 [2024-12-11 09:44:49.390040] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:39.990 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.990 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:39.990 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:39.990 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:39.990 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=4105581 00:06:39.990 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 4105581 /var/tmp/bdevperf.sock 00:06:39.990 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 4105581 ']' 00:06:39.990 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:39.990 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:39.990 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:39.990 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:39.990 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:39.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:39.990 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:39.990 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:39.990 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:39.990 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:39.990 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:39.990 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:39.990 { 00:06:39.990 "params": { 00:06:39.990 "name": "Nvme$subsystem", 00:06:39.990 "trtype": "$TEST_TRANSPORT", 00:06:39.990 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:39.990 "adrfam": "ipv4", 00:06:39.990 "trsvcid": "$NVMF_PORT", 00:06:39.990 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:39.990 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:39.990 "hdgst": ${hdgst:-false}, 00:06:39.990 "ddgst": ${ddgst:-false} 00:06:39.990 }, 00:06:39.990 "method": "bdev_nvme_attach_controller" 00:06:39.990 } 00:06:39.990 EOF 00:06:39.990 )") 00:06:39.990 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:39.990 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:39.990 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:39.990 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:39.990 "params": { 00:06:39.990 "name": "Nvme0", 00:06:39.990 "trtype": "tcp", 00:06:39.990 "traddr": "10.0.0.2", 00:06:39.990 "adrfam": "ipv4", 00:06:39.990 "trsvcid": "4420", 00:06:39.990 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:39.990 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:39.990 "hdgst": false, 00:06:39.990 "ddgst": false 00:06:39.990 }, 00:06:39.990 "method": "bdev_nvme_attach_controller" 00:06:39.990 }' 00:06:39.990 [2024-12-11 09:44:49.487031] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
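Two details of the launch above are easy to miss: gen_nvmf_target_json assembles the bdev_nvme_attach_controller stanza from a heredoc template (the printf output shown), and bdevperf receives it as --json /dev/fd/63, i.e. through process substitution rather than a config file on disk. A reduced sketch of that pattern follows; the outer "subsystems" wrapper is an assumption here, since the trace only shows the final params object:

# assemble the attach-controller config for the listener created earlier
gen_json() {
cat <<EOF
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": false, "ddgst": false
      }
    }]
  }]
}
EOF
}

# <(gen_json) expands to /dev/fd/NN, so no temporary config file is left behind
build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_json) \
    -q 64 -o 65536 -w verify -t 10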
00:06:39.990 [2024-12-11 09:44:49.487075] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4105581 ] 00:06:40.249 [2024-12-11 09:44:49.565375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.249 [2024-12-11 09:44:49.604920] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.507 Running I/O for 10 seconds... 00:06:40.765 09:44:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.765 09:44:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:40.765 09:44:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:40.765 09:44:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.766 09:44:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:41.025 09:44:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.025 09:44:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:41.025 09:44:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:41.025 09:44:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:41.025 09:44:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:41.025 09:44:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:41.025 09:44:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:41.025 09:44:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:41.025 09:44:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:41.025 09:44:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:41.025 09:44:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:41.025 09:44:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.025 09:44:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:41.025 09:44:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.025 09:44:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=835 00:06:41.025 09:44:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 835 -ge 100 ']' 00:06:41.025 09:44:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:41.025 09:44:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:41.025 09:44:50 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:41.025 09:44:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:41.025 09:44:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.025 09:44:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:41.025 [2024-12-11 09:44:50.413404] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1134810 is same with the state(6) to be set 00:06:41.025 [2024-12-11 09:44:50.413451] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1134810 is same with the state(6) to be set 00:06:41.025 [2024-12-11 09:44:50.413459] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1134810 is same with the state(6) to be set 00:06:41.025 [2024-12-11 09:44:50.413467] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1134810 is same with the state(6) to be set 00:06:41.025 [2024-12-11 09:44:50.413473] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1134810 is same with the state(6) to be set 00:06:41.025 09:44:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.025 09:44:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:41.025 09:44:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.025 09:44:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:41.025 [2024-12-11 09:44:50.420745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:41.025 [2024-12-11 09:44:50.420776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.025 [2024-12-11 09:44:50.420786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:06:41.025 [2024-12-11 09:44:50.420793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.026 [2024-12-11 09:44:50.420801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:06:41.026 [2024-12-11 09:44:50.420808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.026 [2024-12-11 09:44:50.420815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:06:41.026 [2024-12-11 09:44:50.420822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.026 [2024-12-11 09:44:50.420828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x253db20 is same with the state(6) to be set 00:06:41.026 [2024-12-11 09:44:50.420863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:122880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.026 [2024-12-11 09:44:50.420872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.026 [2024-12-11 09:44:50.420885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:123008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.026 [2024-12-11 09:44:50.420892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.026 [2024-12-11 09:44:50.420900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:123136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.026 [2024-12-11 09:44:50.420907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.026 [2024-12-11 09:44:50.420915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:123264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.026 [2024-12-11 09:44:50.420921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.026 [2024-12-11 09:44:50.420929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:123392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.026 [2024-12-11 09:44:50.420936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.026 [2024-12-11 09:44:50.420943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:123520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.026 [2024-12-11 09:44:50.420950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.026 [2024-12-11 09:44:50.420958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:123648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.026 [2024-12-11 09:44:50.420965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.026 [2024-12-11 09:44:50.420973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:123776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.026 [2024-12-11 09:44:50.420979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.026 [2024-12-11 09:44:50.420987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:123904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.026 [2024-12-11 09:44:50.420998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.026 [2024-12-11 09:44:50.421007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:124032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.026 [2024-12-11 09:44:50.421013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.026 [2024-12-11 09:44:50.421021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 
nsid:1 lba:124160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.026 [2024-12-11 09:44:50.421027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.026 [2024-12-11 09:44:50.421035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:124288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.026 [2024-12-11 09:44:50.421041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.026 [2024-12-11 09:44:50.421049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:124416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.026 [2024-12-11 09:44:50.421056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.026 [2024-12-11 09:44:50.421064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:124544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.026 [2024-12-11 09:44:50.421070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.026 [2024-12-11 09:44:50.421081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:124672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.026 [2024-12-11 09:44:50.421088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.026 [2024-12-11 09:44:50.421096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:124800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.026 [2024-12-11 09:44:50.421102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.026 [2024-12-11 09:44:50.421110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:124928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.026 [2024-12-11 09:44:50.421116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.026 [2024-12-11 09:44:50.421124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:125056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.026 [2024-12-11 09:44:50.421130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.026 [2024-12-11 09:44:50.421138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:125184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.026 [2024-12-11 09:44:50.421144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.026 [2024-12-11 09:44:50.421152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:125312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.026 [2024-12-11 09:44:50.421158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.026 [2024-12-11 09:44:50.421166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 
nsid:1 lba:125440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.026 [2024-12-11 09:44:50.421174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.026 [2024-12-11 09:44:50.421182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:125568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.026 [2024-12-11 09:44:50.421189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.026 [2024-12-11 09:44:50.421197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:125696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.026 [2024-12-11 09:44:50.421203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.026 [2024-12-11 09:44:50.421211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:125824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.026 [2024-12-11 09:44:50.421224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.026 [2024-12-11 09:44:50.421233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:125952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.026 [2024-12-11 09:44:50.421239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.026 [2024-12-11 09:44:50.421247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:126080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.026 [2024-12-11 09:44:50.421253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.026 [2024-12-11 09:44:50.421262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:126208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.026 [2024-12-11 09:44:50.421269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.026 [2024-12-11 09:44:50.421276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:126336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.026 [2024-12-11 09:44:50.421283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.026 [2024-12-11 09:44:50.421291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:126464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.026 [2024-12-11 09:44:50.421297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.026 [2024-12-11 09:44:50.421305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:126592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.026 [2024-12-11 09:44:50.421312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.026 [2024-12-11 09:44:50.421321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 
nsid:1 lba:126720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.026 [2024-12-11 09:44:50.421327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.026 [2024-12-11 09:44:50.421335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:126848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.026 [2024-12-11 09:44:50.421342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.026 [2024-12-11 09:44:50.421349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:126976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.026 [2024-12-11 09:44:50.421356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.026 [2024-12-11 09:44:50.421365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:127104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.026 [2024-12-11 09:44:50.421372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.026 [2024-12-11 09:44:50.421381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:127232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.026 [2024-12-11 09:44:50.421387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.026 [2024-12-11 09:44:50.421395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:127360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.026 [2024-12-11 09:44:50.421401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.026 [2024-12-11 09:44:50.421409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:127488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.026 [2024-12-11 09:44:50.421417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.026 [2024-12-11 09:44:50.421425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:127616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.027 [2024-12-11 09:44:50.421432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.027 [2024-12-11 09:44:50.421440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:127744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.027 [2024-12-11 09:44:50.421446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.027 [2024-12-11 09:44:50.421454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:127872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.027 [2024-12-11 09:44:50.421460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.027 [2024-12-11 09:44:50.421468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 
nsid:1 lba:128000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.027 [2024-12-11 09:44:50.421474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.027 [2024-12-11 09:44:50.421482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:128128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.027 [2024-12-11 09:44:50.421489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.027 [2024-12-11 09:44:50.421496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:128256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.027 [2024-12-11 09:44:50.421503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.027 [2024-12-11 09:44:50.421511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:128384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.027 [2024-12-11 09:44:50.421517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.027 [2024-12-11 09:44:50.421525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:128512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.027 [2024-12-11 09:44:50.421531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.027 [2024-12-11 09:44:50.421539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:128640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.027 [2024-12-11 09:44:50.421547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.027 [2024-12-11 09:44:50.421556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:128768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.027 [2024-12-11 09:44:50.421563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.027 [2024-12-11 09:44:50.421571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:128896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.027 [2024-12-11 09:44:50.421577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.027 [2024-12-11 09:44:50.421585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:129024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.027 [2024-12-11 09:44:50.421591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.027 [2024-12-11 09:44:50.421599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:129152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.027 [2024-12-11 09:44:50.421606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.027 [2024-12-11 09:44:50.421616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 
nsid:1 lba:129280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.027 [2024-12-11 09:44:50.421622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.027 [2024-12-11 09:44:50.421630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:129408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.027 [2024-12-11 09:44:50.421637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.027 [2024-12-11 09:44:50.421645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:129536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.027 [2024-12-11 09:44:50.421651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.027 [2024-12-11 09:44:50.421659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:129664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.027 [2024-12-11 09:44:50.421666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.027 [2024-12-11 09:44:50.421674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:129792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.027 [2024-12-11 09:44:50.421680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.027 [2024-12-11 09:44:50.421688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:129920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.027 [2024-12-11 09:44:50.421695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.027 [2024-12-11 09:44:50.421703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:130048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.027 [2024-12-11 09:44:50.421710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.027 [2024-12-11 09:44:50.421718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:130176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.027 [2024-12-11 09:44:50.421725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.027 [2024-12-11 09:44:50.421735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:130304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.027 [2024-12-11 09:44:50.421742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.027 [2024-12-11 09:44:50.421750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:130432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.027 [2024-12-11 09:44:50.421757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.027 [2024-12-11 09:44:50.421765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 
nsid:1 lba:130560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:41.027 [2024-12-11 09:44:50.421772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:41.027 [2024-12-11 09:44:50.421779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:130688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:41.027 [2024-12-11 09:44:50.421789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:41.027 [2024-12-11 09:44:50.421800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:130816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:41.027 [2024-12-11 09:44:50.421807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:41.027 [2024-12-11 09:44:50.421815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:130944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:41.027 [2024-12-11 09:44:50.421821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:41.027 [2024-12-11 09:44:50.422745] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:06:41.027 task offset: 122880 on job bdev=Nvme0n1 fails
00:06:41.027
00:06:41.027 Latency(us)
00:06:41.027 [2024-12-11T08:44:50.602Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:41.027 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:06:41.027 Job: Nvme0n1 ended in about 0.49 seconds with error
00:06:41.027 Verification LBA range: start 0x0 length 0x400
00:06:41.027 Nvme0n1 : 0.49 1950.76 121.92 130.05 0.00 30044.00 1583.79 26838.55
00:06:41.027 [2024-12-11T08:44:50.602Z] ===================================================================================================================
00:06:41.027 [2024-12-11T08:44:50.602Z] Total : 1950.76 121.92 130.05 0.00 30044.00 1583.79 26838.55
00:06:41.027 [2024-12-11 09:44:50.425153] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:06:41.027 [2024-12-11 09:44:50.425173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x253db20 (9): Bad file descriptor
00:06:41.027 09:44:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:41.027 09:44:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
[2024-12-11 09:44:50.519369] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
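The flood of WRITE ... ABORTED - SQ DELETION completions above is the point of the test, not a malfunction: with 64 writes in flight, host_management.sh revoked the host's access to the subsystem, the target tore down the TCP qpair (aborting every queued command), and once access was restored the initiator's reset path reconnected, as the final "Resetting controller successful" line confirms. The RPC pair driving it, sketched with SPDK's stock rpc.py (the test issues the same calls through its rpc_cmd wrapper):

# revoke and immediately restore the host's access while I/O is running
scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
sleep 1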
00:06:41.968 09:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 4105581 00:06:41.968 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (4105581) - No such process 00:06:41.968 09:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:41.968 09:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:41.968 09:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:41.968 09:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:41.968 09:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:41.968 09:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:41.968 09:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:41.968 09:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:41.968 { 00:06:41.968 "params": { 00:06:41.968 "name": "Nvme$subsystem", 00:06:41.968 "trtype": "$TEST_TRANSPORT", 00:06:41.968 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:41.968 "adrfam": "ipv4", 00:06:41.968 "trsvcid": "$NVMF_PORT", 00:06:41.968 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:41.968 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:41.968 "hdgst": ${hdgst:-false}, 00:06:41.968 "ddgst": ${ddgst:-false} 00:06:41.968 }, 00:06:41.968 "method": "bdev_nvme_attach_controller" 00:06:41.968 } 00:06:41.968 EOF 00:06:41.968 )") 00:06:41.968 09:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:41.968 09:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:41.968 09:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:41.968 09:44:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:41.968 "params": { 00:06:41.968 "name": "Nvme0", 00:06:41.968 "trtype": "tcp", 00:06:41.968 "traddr": "10.0.0.2", 00:06:41.968 "adrfam": "ipv4", 00:06:41.968 "trsvcid": "4420", 00:06:41.968 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:41.968 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:41.968 "hdgst": false, 00:06:41.968 "ddgst": false 00:06:41.968 }, 00:06:41.968 "method": "bdev_nvme_attach_controller" 00:06:41.968 }' 00:06:41.968 [2024-12-11 09:44:51.483156] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:06:41.968 [2024-12-11 09:44:51.483200] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4105830 ] 00:06:42.227 [2024-12-11 09:44:51.562765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.227 [2024-12-11 09:44:51.602231] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.485 Running I/O for 1 seconds... 
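While a timed run like this one is in flight, the harness does not just sleep: the waitforio helper traced during the first run polls bdevperf's own RPC socket with bdev_get_iostat and parses the counter with jq, breaking out once enough reads have completed. A condensed sketch, assuming the same socket path and bdev name (the retry delay is invented for illustration):

# wait until the Nvme0n1 bdev has serviced at least 100 reads
for i in {1..10}; do
    ops=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
          | jq -r '.bdevs[0].num_read_ops')
    [ "${ops:-0}" -ge 100 ] && break
    sleep 0.25
done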
00:06:43.420 2048.00 IOPS, 128.00 MiB/s
00:06:43.421 Latency(us)
[2024-12-11T08:44:52.996Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:43.421 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:06:43.421 Verification LBA range: start 0x0 length 0x400
00:06:43.421 Nvme0n1 : 1.02 2061.92 128.87 0.00 0.00 30558.55 4837.18 26838.55
00:06:43.421 [2024-12-11T08:44:52.996Z] ===================================================================================================================
00:06:43.421 [2024-12-11T08:44:52.996Z] Total : 2061.92 128.87 0.00 0.00 30558.55 4837.18 26838.55
00:06:43.421 09:44:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:06:43.421 09:44:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:06:43.421 09:44:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:06:43.421 09:44:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:06:43.679 09:44:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:06:43.679 09:44:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:06:43.679 09:44:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:06:43.679 09:44:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:06:43.679 09:44:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:06:43.679 09:44:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:06:43.679 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:06:43.679 rmmod nvme_tcp
00:06:43.679 rmmod nvme_fabrics
00:06:43.679 rmmod nvme_keyring
00:06:43.679 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:06:43.679 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:06:43.679 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:06:43.679 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 4105321 ']'
00:06:43.679 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 4105321
00:06:43.679 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 4105321 ']'
00:06:43.679 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 4105321
00:06:43.679 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname
00:06:43.679 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:43.679 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4105321
00:06:43.679 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:06:43.679 09:44:53
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:43.679 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4105321' 00:06:43.679 killing process with pid 4105321 00:06:43.679 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 4105321 00:06:43.679 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 4105321 00:06:43.939 [2024-12-11 09:44:53.269275] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:43.939 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:43.939 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:43.939 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:43.939 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:43.939 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:43.939 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:43.939 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:43.939 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:43.939 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:43.939 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:43.939 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:43.939 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:45.843 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:45.843 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:45.843 00:06:45.843 real 0m13.491s 00:06:45.843 user 0m20.948s 00:06:45.843 sys 0m6.300s 00:06:45.843 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.843 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:45.843 ************************************ 00:06:45.843 END TEST nvmf_host_management 00:06:45.843 ************************************ 00:06:45.843 09:44:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:45.843 09:44:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:45.843 09:44:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.843 09:44:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:46.103 ************************************ 00:06:46.103 START TEST nvmf_lvol 00:06:46.103 ************************************ 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:46.103 * Looking for test storage... 00:06:46.103 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:46.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.103 --rc genhtml_branch_coverage=1 00:06:46.103 --rc genhtml_function_coverage=1 00:06:46.103 --rc genhtml_legend=1 00:06:46.103 --rc geninfo_all_blocks=1 00:06:46.103 --rc geninfo_unexecuted_blocks=1 00:06:46.103 00:06:46.103 ' 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:46.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.103 --rc genhtml_branch_coverage=1 00:06:46.103 --rc genhtml_function_coverage=1 00:06:46.103 --rc genhtml_legend=1 00:06:46.103 --rc geninfo_all_blocks=1 00:06:46.103 --rc geninfo_unexecuted_blocks=1 00:06:46.103 00:06:46.103 ' 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:46.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.103 --rc genhtml_branch_coverage=1 00:06:46.103 --rc genhtml_function_coverage=1 00:06:46.103 --rc genhtml_legend=1 00:06:46.103 --rc geninfo_all_blocks=1 00:06:46.103 --rc geninfo_unexecuted_blocks=1 00:06:46.103 00:06:46.103 ' 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:46.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.103 --rc genhtml_branch_coverage=1 00:06:46.103 --rc genhtml_function_coverage=1 00:06:46.103 --rc genhtml_legend=1 00:06:46.103 --rc geninfo_all_blocks=1 00:06:46.103 --rc geninfo_unexecuted_blocks=1 00:06:46.103 00:06:46.103 ' 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
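The scripts/common.sh trace above is the lcov version gate: lt 1.15 2 calls cmp_versions, which splits both version strings on '.', '-' and ':' and compares them field by field, so 1.15 sorts below 2 and the branch/function coverage flags get exported. A hypothetical condensed reimplementation of that comparison (numeric fields only; the real helper additionally validates each field with its decimal check):

# Compare two dotted version strings field by field, as in the trace above.
cmp_versions() {
    local op=$2 v d1 d2
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    # Walk the longer of the two arrays; missing fields count as 0.
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        d1=${ver1[v]:-0} d2=${ver2[v]:-0}
        ((d1 > d2)) && { [[ $op == *'>'* ]]; return; }
        ((d1 < d2)) && { [[ $op == *'<'* ]]; return; }
    done
    [[ $op == *=* ]]   # all fields equal: only <=, >= and == succeed
}

cmp_versions 1.15 '<' 2 && echo "lcov 1.15 predates 2.x"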
00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:46.103 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:46.104 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:46.104 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:46.104 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:46.104 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:46.104 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:46.104 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:46.104 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:06:46.104 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:46.104 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:46.104 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:46.104 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:46.104 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:46.104 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:46.104 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:46.104 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:46.104 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:46.104 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:46.104 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:46.104 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:46.104 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:46.104 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:46.104 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:52.676 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:52.676 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:52.676 09:45:02 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:52.676 Found net devices under 0000:af:00.0: cvl_0_0 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:52.676 Found net devices under 0000:af:00.1: cvl_0_1 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:52.676 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:52.936 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:52.936 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:52.936 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:52.936 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:52.936 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:52.936 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:52.936 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:52.936 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:52.936 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:52.936 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.368 ms 00:06:52.936 00:06:52.936 --- 10.0.0.2 ping statistics --- 00:06:52.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:52.936 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:06:52.936 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:52.936 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:52.936 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:06:52.936 00:06:52.936 --- 10.0.0.1 ping statistics --- 00:06:52.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:52.936 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:06:52.936 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:52.936 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:06:52.936 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:52.936 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:52.936 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:52.936 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:52.936 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:52.936 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:52.936 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:53.195 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:53.195 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:53.195 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:53.195 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:53.195 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=4110206 00:06:53.195 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:53.195 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 4110206 00:06:53.195 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 4110206 ']' 00:06:53.195 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.195 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.195 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.195 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.195 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:53.195 [2024-12-11 09:45:02.577282] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
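Everything from gather_supported_nvmf_pci_devs down to the two pings above builds a loopback NVMe/TCP topology out of the two e810 ports this node exposes: cvl_0_0 is moved into a private network namespace to act as the target, cvl_0_1 stays in the root namespace as the initiator, so the TCP traffic crosses a real link. Condensed from the trace (interface names are the ones discovered in this run; the nvmf_tgt path is shortened):

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator
# The target application then runs inside the namespace:
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
# ...and the test waits for /var/tmp/spdk.sock before issuing RPCs.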
00:06:53.195 [2024-12-11 09:45:02.577331] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:53.195 [2024-12-11 09:45:02.662854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:53.195 [2024-12-11 09:45:02.703336] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:53.195 [2024-12-11 09:45:02.703372] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:53.195 [2024-12-11 09:45:02.703378] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:53.195 [2024-12-11 09:45:02.703384] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:53.195 [2024-12-11 09:45:02.703389] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:53.195 [2024-12-11 09:45:02.704615] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.195 [2024-12-11 09:45:02.704722] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.195 [2024-12-11 09:45:02.704723] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:53.454 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.454 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:06:53.454 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:53.454 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:53.454 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:53.454 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:53.454 09:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:53.454 [2024-12-11 09:45:03.018095] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:53.713 09:45:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:53.971 09:45:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:53.971 09:45:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:53.971 09:45:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:53.971 09:45:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:54.230 09:45:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:54.489 09:45:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=54ceaff8-f88d-4ec5-9d20-27e78fd97999 00:06:54.489 09:45:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 54ceaff8-f88d-4ec5-9d20-27e78fd97999 lvol 20 00:06:54.747 09:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=c349d4e4-04b6-46a2-807f-6479af271391 00:06:54.747 09:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:55.005 09:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c349d4e4-04b6-46a2-807f-6479af271391 00:06:55.005 09:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:55.264 [2024-12-11 09:45:04.722851] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:55.264 09:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:55.522 09:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=4110689 00:06:55.522 09:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:55.522 09:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:56.456 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot c349d4e4-04b6-46a2-807f-6479af271391 MY_SNAPSHOT 00:06:56.715 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=81140f3a-2b5a-42b5-9ad0-d6ade0da76c7 00:06:56.715 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize c349d4e4-04b6-46a2-807f-6479af271391 30 00:06:56.974 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 81140f3a-2b5a-42b5-9ad0-d6ade0da76c7 MY_CLONE 00:06:57.232 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=c6caf8fb-cc44-4640-8dc1-a359cfbd1fd0 00:06:57.232 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate c6caf8fb-cc44-4640-8dc1-a359cfbd1fd0 00:06:57.799 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 4110689 00:07:05.916 Initializing NVMe Controllers 00:07:05.916 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:05.916 Controller IO queue size 128, less than required. 00:07:05.916 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
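The calls interleaved with the perf output here are the heart of the lvol test: two malloc bdevs become a raid0, a logical-volume store and a 20 MiB volume are carved from it and exported over NVMe/TCP, and the volume is then snapshotted, resized, cloned and inflated while spdk_nvme_perf keeps 128 queued random writes in flight. Condensed to the bare rpc.py calls (full paths shortened; UUIDs captured from stdout exactly as the test does):

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512                                  # Malloc0
rpc.py bdev_malloc_create 64 512                                  # Malloc1
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)
lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)                 # 20 MiB volume
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
perf_pid=$!
sleep 1
# Online lvol operations while I/O is in flight:
snapshot=$(rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
rpc.py bdev_lvol_resize "$lvol" 30                                # grow 20 -> 30 MiB
clone=$(rpc.py bdev_lvol_clone "$snapshot" MY_CLONE)
rpc.py bdev_lvol_inflate "$clone"                                 # decouple clone from snapshot
wait "$perf_pid"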
00:07:05.916 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:05.916 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:05.916 Initialization complete. Launching workers. 00:07:05.916 ======================================================== 00:07:05.916 Latency(us) 00:07:05.916 Device Information : IOPS MiB/s Average min max 00:07:05.916 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11840.40 46.25 10817.38 1591.01 65347.75 00:07:05.916 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12005.40 46.90 10668.58 3550.63 63784.73 00:07:05.916 ======================================================== 00:07:05.916 Total : 23845.80 93.15 10742.47 1591.01 65347.75 00:07:05.916 00:07:05.916 09:45:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:06.175 09:45:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c349d4e4-04b6-46a2-807f-6479af271391 00:07:06.433 09:45:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 54ceaff8-f88d-4ec5-9d20-27e78fd97999 00:07:06.433 09:45:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:06.433 09:45:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:06.433 09:45:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:06.433 09:45:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:06.433 09:45:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:06.433 09:45:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:06.433 09:45:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:06.433 09:45:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:06.433 09:45:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:06.433 rmmod nvme_tcp 00:07:06.692 rmmod nvme_fabrics 00:07:06.692 rmmod nvme_keyring 00:07:06.692 09:45:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:06.692 09:45:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:06.692 09:45:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:06.692 09:45:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 4110206 ']' 00:07:06.692 09:45:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 4110206 00:07:06.692 09:45:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 4110206 ']' 00:07:06.692 09:45:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 4110206 00:07:06.692 09:45:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:07:06.692 09:45:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:06.692 09:45:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4110206 00:07:06.692 09:45:16 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:06.692 09:45:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:06.692 09:45:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4110206' 00:07:06.692 killing process with pid 4110206 00:07:06.692 09:45:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 4110206 00:07:06.692 09:45:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 4110206 00:07:06.952 09:45:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:06.952 09:45:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:06.952 09:45:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:06.952 09:45:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:06.952 09:45:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:06.952 09:45:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:06.952 09:45:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:06.952 09:45:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:06.952 09:45:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:06.952 09:45:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:06.952 09:45:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:06.952 09:45:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:08.856 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:08.856 00:07:08.856 real 0m22.962s 00:07:08.856 user 1m3.889s 00:07:08.856 sys 0m8.278s 00:07:08.856 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.856 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:08.856 ************************************ 00:07:08.856 END TEST nvmf_lvol 00:07:08.856 ************************************ 00:07:09.115 09:45:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:09.115 09:45:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:09.115 09:45:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.115 09:45:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:09.115 ************************************ 00:07:09.115 START TEST nvmf_lvs_grow 00:07:09.115 ************************************ 00:07:09.115 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:09.115 * Looking for test storage... 
00:07:09.115 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:09.115 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:09.115 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:07:09.115 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:09.115 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:09.115 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:09.115 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:09.115 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:09.115 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:09.115 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:09.115 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:09.115 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:09.115 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:09.115 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:09.115 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:09.115 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:09.115 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:09.115 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:09.115 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:09.115 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:09.115 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:09.115 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:09.115 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:09.115 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:09.115 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:09.115 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:09.115 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:09.115 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:09.115 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:09.115 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:09.115 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:09.115 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:09.115 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:09.115 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:09.115 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:09.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.115 --rc genhtml_branch_coverage=1 00:07:09.115 --rc genhtml_function_coverage=1 00:07:09.116 --rc genhtml_legend=1 00:07:09.116 --rc geninfo_all_blocks=1 00:07:09.116 --rc geninfo_unexecuted_blocks=1 00:07:09.116 00:07:09.116 ' 00:07:09.116 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:09.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.116 --rc genhtml_branch_coverage=1 00:07:09.116 --rc genhtml_function_coverage=1 00:07:09.116 --rc genhtml_legend=1 00:07:09.116 --rc geninfo_all_blocks=1 00:07:09.116 --rc geninfo_unexecuted_blocks=1 00:07:09.116 00:07:09.116 ' 00:07:09.116 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:09.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.116 --rc genhtml_branch_coverage=1 00:07:09.116 --rc genhtml_function_coverage=1 00:07:09.116 --rc genhtml_legend=1 00:07:09.116 --rc geninfo_all_blocks=1 00:07:09.116 --rc geninfo_unexecuted_blocks=1 00:07:09.116 00:07:09.116 ' 00:07:09.116 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:09.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.116 --rc genhtml_branch_coverage=1 00:07:09.116 --rc genhtml_function_coverage=1 00:07:09.116 --rc genhtml_legend=1 00:07:09.116 --rc geninfo_all_blocks=1 00:07:09.116 --rc geninfo_unexecuted_blocks=1 00:07:09.116 00:07:09.116 ' 00:07:09.116 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:09.116 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:09.116 09:45:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:09.116 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:09.116 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:09.116 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:09.116 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:09.116 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:09.116 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:09.116 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:09.116 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:09.116 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:09.116 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:07:09.116 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:07:09.116 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:09.116 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:09.116 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:09.116 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:09.116 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:09.116 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:09.116 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:09.116 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:09.116 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:09.116 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.116 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.116 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.116 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:09.116 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.116 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:09.116 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:09.116 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:09.116 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:09.116 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:09.116 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:09.116 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:09.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:09.116 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:09.116 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:09.116 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:09.116 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:09.116 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:09.116 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:09.116 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:09.116 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:09.116 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:09.116 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:09.116 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:09.116 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:09.116 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:09.116 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:09.375 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:09.375 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:09.375 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:09.375 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:15.943 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:15.943 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:15.943 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:15.943 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:15.943 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:15.943 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:15.943 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:15.943 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:15.943 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:15.943 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:15.943 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:15.943 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:15.943 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:15.943 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:15.943 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:07:15.943 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:15.943 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:15.943 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:15.943 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:15.943 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:15.943 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:15.943 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:15.943 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:15.943 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:15.943 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:15.943 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:15.943 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:15.943 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:15.943 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:15.943 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:15.943 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:15.943 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:15.943 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:15.943 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:15.944 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:15.944 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:15.944 09:45:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:15.944 Found net devices under 0000:af:00.0: cvl_0_0 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:15.944 Found net devices under 0000:af:00.1: cvl_0_1 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:15.944 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:15.944 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms 00:07:15.944 00:07:15.944 --- 10.0.0.2 ping statistics --- 00:07:15.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:15.944 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:15.944 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:15.944 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:07:15.944 00:07:15.944 --- 10.0.0.1 ping statistics --- 00:07:15.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:15.944 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:15.944 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:16.203 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=4116905 00:07:16.203 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:16.203 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 4116905 00:07:16.203 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 4116905 ']' 00:07:16.203 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.203 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:16.203 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.203 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:16.203 09:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:16.203 [2024-12-11 09:45:25.567764] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
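The trace above is nvmf/common.sh's nvmf_tcp_init on a phy (physical-NIC) host: both detected E810 ports are flushed, the target port cvl_0_0 is moved into the private namespace cvl_0_0_ns_spdk, the two ends of the link get 10.0.0.2/24 (target) and 10.0.0.1/24 (initiator), TCP port 4420 is opened in iptables, and reachability is proven in both directions before nvmf_tgt launches inside the namespace. A minimal standalone sketch of that topology, using the interface names from this run (they are host-specific; the run's iptables rule also tags itself with an SPDK_NVMF comment, omitted here):

TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"        # target port now lives in the namespace
ip addr add 10.0.0.1/24 dev "$INI_IF"    # initiator side stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                       # root namespace -> target
ip netns exec "$NS" ping -c 1 10.0.0.1   # target namespace -> initiator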
00:07:16.203 [2024-12-11 09:45:25.567807] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:16.203 [2024-12-11 09:45:25.652786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.203 [2024-12-11 09:45:25.692183] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:16.203 [2024-12-11 09:45:25.692222] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:16.203 [2024-12-11 09:45:25.692230] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:16.203 [2024-12-11 09:45:25.692236] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:16.203 [2024-12-11 09:45:25.692257] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:16.203 [2024-12-11 09:45:25.692804] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.138 09:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:17.138 09:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:17.138 09:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:17.138 09:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:17.138 09:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:17.138 09:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:17.138 09:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:17.138 [2024-12-11 09:45:26.602003] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:17.138 09:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:17.138 09:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:17.138 09:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.138 09:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:17.138 ************************************ 00:07:17.138 START TEST lvs_grow_clean 00:07:17.138 ************************************ 00:07:17.138 09:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:17.138 09:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:17.139 09:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:17.139 09:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:17.139 09:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:17.139 09:45:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:17.139 09:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:17.139 09:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:17.139 09:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:17.139 09:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:17.396 09:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:17.396 09:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:17.655 09:45:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=dd4e5fcd-78e9-4390-b714-c4b0055d67b0 00:07:17.655 09:45:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd4e5fcd-78e9-4390-b714-c4b0055d67b0 00:07:17.655 09:45:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:17.914 09:45:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:17.914 09:45:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:17.914 09:45:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u dd4e5fcd-78e9-4390-b714-c4b0055d67b0 lvol 150 00:07:17.914 09:45:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=7d33d0a4-4dc4-450b-9e64-75e89cc28284 00:07:17.914 09:45:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:17.914 09:45:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:18.172 [2024-12-11 09:45:27.651165] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:18.172 [2024-12-11 09:45:27.651216] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:18.172 true 00:07:18.172 09:45:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
dd4e5fcd-78e9-4390-b714-c4b0055d67b0 00:07:18.172 09:45:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:18.431 09:45:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:18.431 09:45:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:18.690 09:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7d33d0a4-4dc4-450b-9e64-75e89cc28284 00:07:18.690 09:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:18.949 [2024-12-11 09:45:28.401424] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:18.949 09:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:19.208 09:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4117413 00:07:19.208 09:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:19.208 09:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:19.208 09:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4117413 /var/tmp/bdevperf.sock 00:07:19.208 09:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 4117413 ']' 00:07:19.208 09:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:19.208 09:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:19.208 09:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:19.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:19.208 09:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:19.208 09:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:19.208 [2024-12-11 09:45:28.636953] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
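The lvs_grow_clean pass above provisions its target entirely over JSON-RPC before bdevperf opens it: a 200 MiB file-backed AIO bdev with 4 KiB blocks, an lvstore with 4 MiB clusters (49 data clusters at that size), a 150 MiB lvol, and an NVMe-oF subsystem exporting the lvol on the namespaced TCP listener. A condensed replay of that call sequence, assuming $rpc points at scripts/rpc.py and shortening the run's workspace paths (the backing file is called aio_file here; the test uses test/nvmf/target/aio_bdev):

rpc=scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192         # done once at nvmfappstart
truncate -s 200M aio_file
$rpc bdev_aio_create aio_file aio_bdev 4096          # 4 KiB logical blocks
lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs) # prints the lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)     # 150 MiB volume
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
     -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0   # shows up as Nvme0n1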
00:07:19.208 [2024-12-11 09:45:28.636999] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4117413 ] 00:07:19.208 [2024-12-11 09:45:28.716489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.208 [2024-12-11 09:45:28.756692] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:20.144 09:45:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:20.144 09:45:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:20.144 09:45:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:20.403 Nvme0n1 00:07:20.403 09:45:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:20.403 [ 00:07:20.403 { 00:07:20.403 "name": "Nvme0n1", 00:07:20.403 "aliases": [ 00:07:20.403 "7d33d0a4-4dc4-450b-9e64-75e89cc28284" 00:07:20.403 ], 00:07:20.403 "product_name": "NVMe disk", 00:07:20.403 "block_size": 4096, 00:07:20.403 "num_blocks": 38912, 00:07:20.403 "uuid": "7d33d0a4-4dc4-450b-9e64-75e89cc28284", 00:07:20.403 "numa_id": 1, 00:07:20.403 "assigned_rate_limits": { 00:07:20.403 "rw_ios_per_sec": 0, 00:07:20.403 "rw_mbytes_per_sec": 0, 00:07:20.403 "r_mbytes_per_sec": 0, 00:07:20.403 "w_mbytes_per_sec": 0 00:07:20.403 }, 00:07:20.403 "claimed": false, 00:07:20.403 "zoned": false, 00:07:20.403 "supported_io_types": { 00:07:20.403 "read": true, 00:07:20.403 "write": true, 00:07:20.403 "unmap": true, 00:07:20.403 "flush": true, 00:07:20.403 "reset": true, 00:07:20.403 "nvme_admin": true, 00:07:20.403 "nvme_io": true, 00:07:20.403 "nvme_io_md": false, 00:07:20.403 "write_zeroes": true, 00:07:20.403 "zcopy": false, 00:07:20.403 "get_zone_info": false, 00:07:20.403 "zone_management": false, 00:07:20.403 "zone_append": false, 00:07:20.403 "compare": true, 00:07:20.403 "compare_and_write": true, 00:07:20.403 "abort": true, 00:07:20.403 "seek_hole": false, 00:07:20.403 "seek_data": false, 00:07:20.403 "copy": true, 00:07:20.403 "nvme_iov_md": false 00:07:20.403 }, 00:07:20.403 "memory_domains": [ 00:07:20.403 { 00:07:20.403 "dma_device_id": "system", 00:07:20.403 "dma_device_type": 1 00:07:20.403 } 00:07:20.403 ], 00:07:20.403 "driver_specific": { 00:07:20.403 "nvme": [ 00:07:20.403 { 00:07:20.403 "trid": { 00:07:20.403 "trtype": "TCP", 00:07:20.403 "adrfam": "IPv4", 00:07:20.403 "traddr": "10.0.0.2", 00:07:20.403 "trsvcid": "4420", 00:07:20.403 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:20.403 }, 00:07:20.403 "ctrlr_data": { 00:07:20.403 "cntlid": 1, 00:07:20.403 "vendor_id": "0x8086", 00:07:20.403 "model_number": "SPDK bdev Controller", 00:07:20.403 "serial_number": "SPDK0", 00:07:20.403 "firmware_revision": "25.01", 00:07:20.403 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:20.403 "oacs": { 00:07:20.403 "security": 0, 00:07:20.403 "format": 0, 00:07:20.403 "firmware": 0, 00:07:20.403 "ns_manage": 0 00:07:20.403 }, 00:07:20.403 "multi_ctrlr": true, 00:07:20.403 
"ana_reporting": false 00:07:20.403 }, 00:07:20.403 "vs": { 00:07:20.403 "nvme_version": "1.3" 00:07:20.403 }, 00:07:20.403 "ns_data": { 00:07:20.403 "id": 1, 00:07:20.403 "can_share": true 00:07:20.403 } 00:07:20.403 } 00:07:20.403 ], 00:07:20.403 "mp_policy": "active_passive" 00:07:20.403 } 00:07:20.403 } 00:07:20.403 ] 00:07:20.403 09:45:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4117645 00:07:20.403 09:45:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:20.403 09:45:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:20.662 Running I/O for 10 seconds... 00:07:21.598 Latency(us) 00:07:21.598 [2024-12-11T08:45:31.173Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:21.598 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:21.598 Nvme0n1 : 1.00 23567.00 92.06 0.00 0.00 0.00 0.00 0.00 00:07:21.598 [2024-12-11T08:45:31.173Z] =================================================================================================================== 00:07:21.598 [2024-12-11T08:45:31.173Z] Total : 23567.00 92.06 0.00 0.00 0.00 0.00 0.00 00:07:21.598 00:07:22.534 09:45:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u dd4e5fcd-78e9-4390-b714-c4b0055d67b0 00:07:22.534 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:22.534 Nvme0n1 : 2.00 23713.00 92.63 0.00 0.00 0.00 0.00 0.00 00:07:22.535 [2024-12-11T08:45:32.110Z] =================================================================================================================== 00:07:22.535 [2024-12-11T08:45:32.110Z] Total : 23713.00 92.63 0.00 0.00 0.00 0.00 0.00 00:07:22.535 00:07:22.793 true 00:07:22.793 09:45:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd4e5fcd-78e9-4390-b714-c4b0055d67b0 00:07:22.793 09:45:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:23.052 09:45:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:23.052 09:45:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:23.052 09:45:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 4117645 00:07:23.619 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:23.619 Nvme0n1 : 3.00 23743.67 92.75 0.00 0.00 0.00 0.00 0.00 00:07:23.619 [2024-12-11T08:45:33.194Z] =================================================================================================================== 00:07:23.619 [2024-12-11T08:45:33.194Z] Total : 23743.67 92.75 0.00 0.00 0.00 0.00 0.00 00:07:23.619 00:07:24.555 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:24.555 Nvme0n1 : 4.00 23798.75 92.96 0.00 0.00 0.00 0.00 0.00 00:07:24.555 [2024-12-11T08:45:34.130Z] 
=================================================================================================================== 00:07:24.555 [2024-12-11T08:45:34.130Z] Total : 23798.75 92.96 0.00 0.00 0.00 0.00 0.00 00:07:24.555 00:07:25.932 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:25.932 Nvme0n1 : 5.00 23849.00 93.16 0.00 0.00 0.00 0.00 0.00 00:07:25.932 [2024-12-11T08:45:35.507Z] =================================================================================================================== 00:07:25.932 [2024-12-11T08:45:35.507Z] Total : 23849.00 93.16 0.00 0.00 0.00 0.00 0.00 00:07:25.932 00:07:26.868 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:26.868 Nvme0n1 : 6.00 23883.83 93.30 0.00 0.00 0.00 0.00 0.00 00:07:26.868 [2024-12-11T08:45:36.443Z] =================================================================================================================== 00:07:26.868 [2024-12-11T08:45:36.443Z] Total : 23883.83 93.30 0.00 0.00 0.00 0.00 0.00 00:07:26.868 00:07:27.804 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:27.804 Nvme0n1 : 7.00 23919.43 93.44 0.00 0.00 0.00 0.00 0.00 00:07:27.804 [2024-12-11T08:45:37.379Z] =================================================================================================================== 00:07:27.804 [2024-12-11T08:45:37.379Z] Total : 23919.43 93.44 0.00 0.00 0.00 0.00 0.00 00:07:27.804 00:07:28.891 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:28.891 Nvme0n1 : 8.00 23924.88 93.46 0.00 0.00 0.00 0.00 0.00 00:07:28.891 [2024-12-11T08:45:38.466Z] =================================================================================================================== 00:07:28.891 [2024-12-11T08:45:38.466Z] Total : 23924.88 93.46 0.00 0.00 0.00 0.00 0.00 00:07:28.891 00:07:29.828 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:29.828 Nvme0n1 : 9.00 23941.11 93.52 0.00 0.00 0.00 0.00 0.00 00:07:29.828 [2024-12-11T08:45:39.403Z] =================================================================================================================== 00:07:29.828 [2024-12-11T08:45:39.403Z] Total : 23941.11 93.52 0.00 0.00 0.00 0.00 0.00 00:07:29.828 00:07:30.763 00:07:30.763 Latency(us) 00:07:30.763 [2024-12-11T08:45:40.339Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:30.764 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:30.764 Nvme0n1 : 10.00 23935.63 93.50 0.00 0.00 5344.37 3167.57 10173.68 00:07:30.764 [2024-12-11T08:45:40.339Z] =================================================================================================================== 00:07:30.764 [2024-12-11T08:45:40.339Z] Total : 23935.63 93.50 0.00 0.00 5344.37 3167.57 10173.68 00:07:30.764 { 00:07:30.764 "results": [ 00:07:30.764 { 00:07:30.764 "job": "Nvme0n1", 00:07:30.764 "core_mask": "0x2", 00:07:30.764 "workload": "randwrite", 00:07:30.764 "status": "finished", 00:07:30.764 "queue_depth": 128, 00:07:30.764 "io_size": 4096, 00:07:30.764 "runtime": 10.001991, 00:07:30.764 "iops": 23935.634415187935, 00:07:30.764 "mibps": 93.49857193432787, 00:07:30.764 "io_failed": 0, 00:07:30.764 "io_timeout": 0, 00:07:30.764 "avg_latency_us": 5344.3712757792955, 00:07:30.764 "min_latency_us": 3167.5733333333333, 00:07:30.764 "max_latency_us": 10173.683809523809 00:07:30.764 } 00:07:30.764 ], 00:07:30.764 "core_count": 1 00:07:30.764 } 00:07:30.764 09:45:40 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4117413 00:07:30.764 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 4117413 ']' 00:07:30.764 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 4117413 00:07:30.764 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:07:30.764 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:30.764 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4117413 00:07:30.764 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:30.764 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:30.764 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4117413' 00:07:30.764 killing process with pid 4117413 00:07:30.764 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 4117413 00:07:30.764 Received shutdown signal, test time was about 10.000000 seconds 00:07:30.764 00:07:30.764 Latency(us) 00:07:30.764 [2024-12-11T08:45:40.339Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:30.764 [2024-12-11T08:45:40.339Z] =================================================================================================================== 00:07:30.764 [2024-12-11T08:45:40.339Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:30.764 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 4117413 00:07:30.764 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:31.022 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:31.281 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd4e5fcd-78e9-4390-b714-c4b0055d67b0 00:07:31.281 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:31.541 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:31.541 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:31.541 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:31.541 [2024-12-11 09:45:41.060823] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:31.541 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # 
NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd4e5fcd-78e9-4390-b714-c4b0055d67b0 00:07:31.541 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:31.541 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd4e5fcd-78e9-4390-b714-c4b0055d67b0 00:07:31.541 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:31.541 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:31.541 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:31.541 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:31.541 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:31.541 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:31.541 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:31.541 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:31.541 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd4e5fcd-78e9-4390-b714-c4b0055d67b0 00:07:31.799 request: 00:07:31.799 { 00:07:31.799 "uuid": "dd4e5fcd-78e9-4390-b714-c4b0055d67b0", 00:07:31.799 "method": "bdev_lvol_get_lvstores", 00:07:31.799 "req_id": 1 00:07:31.799 } 00:07:31.799 Got JSON-RPC error response 00:07:31.800 response: 00:07:31.800 { 00:07:31.800 "code": -19, 00:07:31.800 "message": "No such device" 00:07:31.800 } 00:07:31.800 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:31.800 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:31.800 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:31.800 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:31.800 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:32.058 aio_bdev 00:07:32.058 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 7d33d0a4-4dc4-450b-9e64-75e89cc28284 00:07:32.058 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@903 -- # local bdev_name=7d33d0a4-4dc4-450b-9e64-75e89cc28284 00:07:32.058 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:32.058 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:32.058 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:32.058 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:32.058 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:32.318 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7d33d0a4-4dc4-450b-9e64-75e89cc28284 -t 2000 00:07:32.318 [ 00:07:32.318 { 00:07:32.318 "name": "7d33d0a4-4dc4-450b-9e64-75e89cc28284", 00:07:32.318 "aliases": [ 00:07:32.318 "lvs/lvol" 00:07:32.318 ], 00:07:32.318 "product_name": "Logical Volume", 00:07:32.318 "block_size": 4096, 00:07:32.318 "num_blocks": 38912, 00:07:32.318 "uuid": "7d33d0a4-4dc4-450b-9e64-75e89cc28284", 00:07:32.318 "assigned_rate_limits": { 00:07:32.318 "rw_ios_per_sec": 0, 00:07:32.318 "rw_mbytes_per_sec": 0, 00:07:32.318 "r_mbytes_per_sec": 0, 00:07:32.318 "w_mbytes_per_sec": 0 00:07:32.318 }, 00:07:32.318 "claimed": false, 00:07:32.318 "zoned": false, 00:07:32.318 "supported_io_types": { 00:07:32.318 "read": true, 00:07:32.318 "write": true, 00:07:32.318 "unmap": true, 00:07:32.318 "flush": false, 00:07:32.318 "reset": true, 00:07:32.318 "nvme_admin": false, 00:07:32.318 "nvme_io": false, 00:07:32.318 "nvme_io_md": false, 00:07:32.318 "write_zeroes": true, 00:07:32.318 "zcopy": false, 00:07:32.318 "get_zone_info": false, 00:07:32.318 "zone_management": false, 00:07:32.318 "zone_append": false, 00:07:32.318 "compare": false, 00:07:32.318 "compare_and_write": false, 00:07:32.318 "abort": false, 00:07:32.318 "seek_hole": true, 00:07:32.318 "seek_data": true, 00:07:32.318 "copy": false, 00:07:32.318 "nvme_iov_md": false 00:07:32.318 }, 00:07:32.318 "driver_specific": { 00:07:32.318 "lvol": { 00:07:32.318 "lvol_store_uuid": "dd4e5fcd-78e9-4390-b714-c4b0055d67b0", 00:07:32.318 "base_bdev": "aio_bdev", 00:07:32.318 "thin_provision": false, 00:07:32.318 "num_allocated_clusters": 38, 00:07:32.318 "snapshot": false, 00:07:32.318 "clone": false, 00:07:32.318 "esnap_clone": false 00:07:32.318 } 00:07:32.318 } 00:07:32.318 } 00:07:32.318 ] 00:07:32.318 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:32.318 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd4e5fcd-78e9-4390-b714-c4b0055d67b0 00:07:32.318 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:32.577 09:45:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:32.577 09:45:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u dd4e5fcd-78e9-4390-b714-c4b0055d67b0 00:07:32.577 09:45:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:32.836 09:45:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:32.836 09:45:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7d33d0a4-4dc4-450b-9e64-75e89cc28284 00:07:32.836 09:45:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u dd4e5fcd-78e9-4390-b714-c4b0055d67b0 00:07:33.094 09:45:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:33.354 09:45:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:33.354 00:07:33.354 real 0m16.138s 00:07:33.354 user 0m15.858s 00:07:33.354 sys 0m1.482s 00:07:33.354 09:45:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.354 09:45:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:33.354 ************************************ 00:07:33.354 END TEST lvs_grow_clean 00:07:33.354 ************************************ 00:07:33.354 09:45:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:33.354 09:45:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:33.354 09:45:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.354 09:45:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:33.354 ************************************ 00:07:33.354 START TEST lvs_grow_dirty 00:07:33.354 ************************************ 00:07:33.354 09:45:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:33.354 09:45:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:33.354 09:45:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:33.354 09:45:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:33.354 09:45:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:33.354 09:45:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:33.354 09:45:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:33.354 09:45:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:33.354 09:45:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:33.354 09:45:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:33.613 09:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:33.613 09:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:33.872 09:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=d24a01f2-8fae-4b70-bd50-fbc8be2c5f9b 00:07:33.872 09:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d24a01f2-8fae-4b70-bd50-fbc8be2c5f9b 00:07:33.872 09:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:34.131 09:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:34.131 09:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:34.131 09:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d24a01f2-8fae-4b70-bd50-fbc8be2c5f9b lvol 150 00:07:34.131 09:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=31b57730-3ca4-4334-9acf-3b24a22941f1 00:07:34.131 09:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:34.131 09:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:34.389 [2024-12-11 09:45:43.857185] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:34.389 [2024-12-11 09:45:43.857257] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:34.389 true 00:07:34.389 09:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d24a01f2-8fae-4b70-bd50-fbc8be2c5f9b 00:07:34.389 09:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:34.647 09:45:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:34.647 09:45:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:34.905 09:45:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 31b57730-3ca4-4334-9acf-3b24a22941f1 00:07:34.905 09:45:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:35.165 [2024-12-11 09:45:44.627468] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:35.165 09:45:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:35.424 09:45:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:35.424 09:45:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4120222 00:07:35.424 09:45:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:35.425 09:45:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4120222 /var/tmp/bdevperf.sock 00:07:35.425 09:45:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 4120222 ']' 00:07:35.425 09:45:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:35.425 09:45:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:35.425 09:45:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:35.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:35.425 09:45:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:35.425 09:45:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:35.425 [2024-12-11 09:45:44.858511] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
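The dirty pass starting here re-runs the same flow; the step under test in both passes is growing the lvstore while bdevperf keeps 128 queued random writes in flight (-q 128 -w randwrite). In the clean pass above that took the store from 49 to 99 total data clusters with io_failed: 0, and after teardown the test also confirmed that querying the deleted lvstore fails with -19 "No such device". A sketch of the grow sequence itself, under the same $rpc/$lvs/aio_file naming assumptions as the previous sketch:

truncate -s 400M aio_file              # double the backing file: 51200 -> 102400 blocks
$rpc bdev_aio_rescan aio_bdev          # AIO bdev picks up the new size
$rpc bdev_lvol_grow_lvstore -u "$lvs"  # lvstore claims the new clusters; I/O keeps running
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 -> 99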
00:07:35.425 [2024-12-11 09:45:44.858556] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4120222 ] 00:07:35.425 [2024-12-11 09:45:44.935890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.425 [2024-12-11 09:45:44.974791] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.689 09:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:35.689 09:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:35.689 09:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:35.947 Nvme0n1 00:07:35.947 09:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:36.205 [ 00:07:36.205 { 00:07:36.205 "name": "Nvme0n1", 00:07:36.205 "aliases": [ 00:07:36.205 "31b57730-3ca4-4334-9acf-3b24a22941f1" 00:07:36.205 ], 00:07:36.205 "product_name": "NVMe disk", 00:07:36.205 "block_size": 4096, 00:07:36.205 "num_blocks": 38912, 00:07:36.205 "uuid": "31b57730-3ca4-4334-9acf-3b24a22941f1", 00:07:36.205 "numa_id": 1, 00:07:36.205 "assigned_rate_limits": { 00:07:36.205 "rw_ios_per_sec": 0, 00:07:36.205 "rw_mbytes_per_sec": 0, 00:07:36.205 "r_mbytes_per_sec": 0, 00:07:36.205 "w_mbytes_per_sec": 0 00:07:36.205 }, 00:07:36.205 "claimed": false, 00:07:36.205 "zoned": false, 00:07:36.205 "supported_io_types": { 00:07:36.205 "read": true, 00:07:36.205 "write": true, 00:07:36.205 "unmap": true, 00:07:36.205 "flush": true, 00:07:36.205 "reset": true, 00:07:36.205 "nvme_admin": true, 00:07:36.205 "nvme_io": true, 00:07:36.205 "nvme_io_md": false, 00:07:36.205 "write_zeroes": true, 00:07:36.205 "zcopy": false, 00:07:36.205 "get_zone_info": false, 00:07:36.205 "zone_management": false, 00:07:36.205 "zone_append": false, 00:07:36.205 "compare": true, 00:07:36.205 "compare_and_write": true, 00:07:36.205 "abort": true, 00:07:36.205 "seek_hole": false, 00:07:36.205 "seek_data": false, 00:07:36.205 "copy": true, 00:07:36.205 "nvme_iov_md": false 00:07:36.205 }, 00:07:36.205 "memory_domains": [ 00:07:36.205 { 00:07:36.205 "dma_device_id": "system", 00:07:36.205 "dma_device_type": 1 00:07:36.205 } 00:07:36.205 ], 00:07:36.205 "driver_specific": { 00:07:36.205 "nvme": [ 00:07:36.205 { 00:07:36.205 "trid": { 00:07:36.205 "trtype": "TCP", 00:07:36.205 "adrfam": "IPv4", 00:07:36.205 "traddr": "10.0.0.2", 00:07:36.205 "trsvcid": "4420", 00:07:36.205 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:36.205 }, 00:07:36.205 "ctrlr_data": { 00:07:36.205 "cntlid": 1, 00:07:36.205 "vendor_id": "0x8086", 00:07:36.205 "model_number": "SPDK bdev Controller", 00:07:36.205 "serial_number": "SPDK0", 00:07:36.205 "firmware_revision": "25.01", 00:07:36.205 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:36.205 "oacs": { 00:07:36.205 "security": 0, 00:07:36.205 "format": 0, 00:07:36.205 "firmware": 0, 00:07:36.205 "ns_manage": 0 00:07:36.205 }, 00:07:36.205 "multi_ctrlr": true, 00:07:36.205 
"ana_reporting": false 00:07:36.205 }, 00:07:36.205 "vs": { 00:07:36.205 "nvme_version": "1.3" 00:07:36.205 }, 00:07:36.205 "ns_data": { 00:07:36.205 "id": 1, 00:07:36.205 "can_share": true 00:07:36.205 } 00:07:36.205 } 00:07:36.205 ], 00:07:36.205 "mp_policy": "active_passive" 00:07:36.205 } 00:07:36.206 } 00:07:36.206 ] 00:07:36.206 09:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4120241 00:07:36.206 09:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:36.206 09:45:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:36.206 Running I/O for 10 seconds... 00:07:37.581 Latency(us) 00:07:37.581 [2024-12-11T08:45:47.156Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:37.581 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:37.581 Nvme0n1 : 1.00 23636.00 92.33 0.00 0.00 0.00 0.00 0.00 00:07:37.581 [2024-12-11T08:45:47.156Z] =================================================================================================================== 00:07:37.581 [2024-12-11T08:45:47.156Z] Total : 23636.00 92.33 0.00 0.00 0.00 0.00 0.00 00:07:37.581 00:07:38.148 09:45:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d24a01f2-8fae-4b70-bd50-fbc8be2c5f9b 00:07:38.406 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:38.406 Nvme0n1 : 2.00 23730.50 92.70 0.00 0.00 0.00 0.00 0.00 00:07:38.406 [2024-12-11T08:45:47.981Z] =================================================================================================================== 00:07:38.406 [2024-12-11T08:45:47.981Z] Total : 23730.50 92.70 0.00 0.00 0.00 0.00 0.00 00:07:38.406 00:07:38.406 true 00:07:38.406 09:45:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d24a01f2-8fae-4b70-bd50-fbc8be2c5f9b 00:07:38.406 09:45:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:38.665 09:45:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:38.665 09:45:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:38.665 09:45:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 4120241 00:07:39.232 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:39.232 Nvme0n1 : 3.00 23782.33 92.90 0.00 0.00 0.00 0.00 0.00 00:07:39.232 [2024-12-11T08:45:48.807Z] =================================================================================================================== 00:07:39.232 [2024-12-11T08:45:48.807Z] Total : 23782.33 92.90 0.00 0.00 0.00 0.00 0.00 00:07:39.232 00:07:40.605 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:40.605 Nvme0n1 : 4.00 23821.00 93.05 0.00 0.00 0.00 0.00 0.00 00:07:40.605 [2024-12-11T08:45:50.180Z] 
=================================================================================================================== 00:07:40.605 [2024-12-11T08:45:50.180Z] Total : 23821.00 93.05 0.00 0.00 0.00 0.00 0.00 00:07:40.605 00:07:41.540 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:41.540 Nvme0n1 : 5.00 23780.60 92.89 0.00 0.00 0.00 0.00 0.00 00:07:41.540 [2024-12-11T08:45:51.115Z] =================================================================================================================== 00:07:41.540 [2024-12-11T08:45:51.115Z] Total : 23780.60 92.89 0.00 0.00 0.00 0.00 0.00 00:07:41.540 00:07:42.475 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:42.475 Nvme0n1 : 6.00 23811.17 93.01 0.00 0.00 0.00 0.00 0.00 00:07:42.475 [2024-12-11T08:45:52.050Z] =================================================================================================================== 00:07:42.475 [2024-12-11T08:45:52.050Z] Total : 23811.17 93.01 0.00 0.00 0.00 0.00 0.00 00:07:42.475 00:07:43.412 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:43.412 Nvme0n1 : 7.00 23859.86 93.20 0.00 0.00 0.00 0.00 0.00 00:07:43.412 [2024-12-11T08:45:52.987Z] =================================================================================================================== 00:07:43.412 [2024-12-11T08:45:52.987Z] Total : 23859.86 93.20 0.00 0.00 0.00 0.00 0.00 00:07:43.412 00:07:44.348 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:44.348 Nvme0n1 : 8.00 23881.50 93.29 0.00 0.00 0.00 0.00 0.00 00:07:44.348 [2024-12-11T08:45:53.923Z] =================================================================================================================== 00:07:44.348 [2024-12-11T08:45:53.923Z] Total : 23881.50 93.29 0.00 0.00 0.00 0.00 0.00 00:07:44.348 00:07:45.285 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:45.285 Nvme0n1 : 9.00 23903.44 93.37 0.00 0.00 0.00 0.00 0.00 00:07:45.285 [2024-12-11T08:45:54.860Z] =================================================================================================================== 00:07:45.285 [2024-12-11T08:45:54.860Z] Total : 23903.44 93.37 0.00 0.00 0.00 0.00 0.00 00:07:45.285 00:07:46.230 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:46.230 Nvme0n1 : 10.00 23920.20 93.44 0.00 0.00 0.00 0.00 0.00 00:07:46.230 [2024-12-11T08:45:55.805Z] =================================================================================================================== 00:07:46.230 [2024-12-11T08:45:55.805Z] Total : 23920.20 93.44 0.00 0.00 0.00 0.00 0.00 00:07:46.230 00:07:46.230 00:07:46.230 Latency(us) 00:07:46.230 [2024-12-11T08:45:55.805Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:46.230 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:46.230 Nvme0n1 : 10.00 23922.61 93.45 0.00 0.00 5347.70 3151.97 10610.59 00:07:46.230 [2024-12-11T08:45:55.805Z] =================================================================================================================== 00:07:46.230 [2024-12-11T08:45:55.805Z] Total : 23922.61 93.45 0.00 0.00 5347.70 3151.97 10610.59 00:07:46.230 { 00:07:46.230 "results": [ 00:07:46.230 { 00:07:46.230 "job": "Nvme0n1", 00:07:46.230 "core_mask": "0x2", 00:07:46.230 "workload": "randwrite", 00:07:46.230 "status": "finished", 00:07:46.230 "queue_depth": 128, 00:07:46.230 "io_size": 4096, 00:07:46.230 
"runtime": 10.004345, 00:07:46.230 "iops": 23922.605627854697, 00:07:46.230 "mibps": 93.44767823380741, 00:07:46.230 "io_failed": 0, 00:07:46.230 "io_timeout": 0, 00:07:46.230 "avg_latency_us": 5347.698442151004, 00:07:46.230 "min_latency_us": 3151.9695238095237, 00:07:46.230 "max_latency_us": 10610.590476190477 00:07:46.230 } 00:07:46.230 ], 00:07:46.230 "core_count": 1 00:07:46.231 } 00:07:46.231 09:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4120222 00:07:46.231 09:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 4120222 ']' 00:07:46.231 09:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 4120222 00:07:46.231 09:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:07:46.231 09:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:46.231 09:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4120222 00:07:46.489 09:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:46.489 09:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:46.489 09:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4120222' 00:07:46.489 killing process with pid 4120222 00:07:46.489 09:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 4120222 00:07:46.489 Received shutdown signal, test time was about 10.000000 seconds 00:07:46.489 00:07:46.489 Latency(us) 00:07:46.489 [2024-12-11T08:45:56.064Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:46.489 [2024-12-11T08:45:56.064Z] =================================================================================================================== 00:07:46.489 [2024-12-11T08:45:56.064Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:46.489 09:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 4120222 00:07:46.489 09:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:46.747 09:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:47.008 09:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d24a01f2-8fae-4b70-bd50-fbc8be2c5f9b 00:07:47.008 09:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:47.267 09:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:47.267 09:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:47.267 09:45:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 4116905 00:07:47.267 09:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 4116905 00:07:47.267 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 4116905 Killed "${NVMF_APP[@]}" "$@" 00:07:47.267 09:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:47.267 09:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:47.267 09:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:47.267 09:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:47.267 09:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:47.267 09:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=4122061 00:07:47.267 09:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 4122061 00:07:47.267 09:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:47.267 09:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 4122061 ']' 00:07:47.267 09:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.267 09:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:47.267 09:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.267 09:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:47.267 09:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:47.267 [2024-12-11 09:45:56.680914] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:07:47.267 [2024-12-11 09:45:56.680960] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:47.267 [2024-12-11 09:45:56.762746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.267 [2024-12-11 09:45:56.802695] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:47.267 [2024-12-11 09:45:56.802729] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:47.267 [2024-12-11 09:45:56.802736] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:47.267 [2024-12-11 09:45:56.802742] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
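This is the "dirty" half of the test: the old target was killed with SIGKILL while bdevperf was still writing, so the lvstore on disk was never cleanly unloaded, and the freshly started nvmf_tgt has to recover it. As soon as the backing file is re-registered below, blobstore detects the dirty shutdown and replays its metadata ("Performing recovery on blobstore", "Recover: blob 0x0/0x1"). Reduced to the essential RPCs, the recovery check is roughly the following sketch (same illustrative placeholders as before):

# Re-register the same backing file in the restarted target;
# bdev examine triggers blobstore recovery of the dirty lvstore
scripts/rpc.py bdev_aio_create /tmp/aio_bdev aio_bdev 4096
scripts/rpc.py bdev_wait_for_examine
# The lvol comes back under its old UUID...
scripts/rpc.py bdev_get_bdevs -b <lvol-uuid> -t 2000
# ...and the cluster accounting survives: 99 data clusters, 38 allocated, 61 free
scripts/rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].free_clusters'        # -> 61
scripts/rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].total_data_clusters'  # -> 99

The test then deletes aio_bdev out from under the lvstore and asserts that bdev_lvol_get_lvstores fails with -19 "No such device", before re-creating the AIO bdev, recovering once more, and finally tearing down the lvol, the lvstore, and the backing file.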
00:07:47.267 [2024-12-11 09:45:56.802747] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:47.267 [2024-12-11 09:45:56.803286] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.525 09:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:47.525 09:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:47.525 09:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:47.525 09:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:47.525 09:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:47.525 09:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:47.525 09:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:47.784 [2024-12-11 09:45:57.110167] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:47.784 [2024-12-11 09:45:57.110259] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:47.784 [2024-12-11 09:45:57.110285] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:47.784 09:45:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:47.784 09:45:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 31b57730-3ca4-4334-9acf-3b24a22941f1 00:07:47.784 09:45:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=31b57730-3ca4-4334-9acf-3b24a22941f1 00:07:47.784 09:45:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:47.784 09:45:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:47.784 09:45:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:47.784 09:45:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:47.784 09:45:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:47.784 09:45:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 31b57730-3ca4-4334-9acf-3b24a22941f1 -t 2000 00:07:48.043 [ 00:07:48.043 { 00:07:48.043 "name": "31b57730-3ca4-4334-9acf-3b24a22941f1", 00:07:48.043 "aliases": [ 00:07:48.043 "lvs/lvol" 00:07:48.043 ], 00:07:48.043 "product_name": "Logical Volume", 00:07:48.043 "block_size": 4096, 00:07:48.043 "num_blocks": 38912, 00:07:48.043 "uuid": "31b57730-3ca4-4334-9acf-3b24a22941f1", 00:07:48.043 "assigned_rate_limits": { 00:07:48.043 "rw_ios_per_sec": 0, 00:07:48.043 "rw_mbytes_per_sec": 0, 
00:07:48.043 "r_mbytes_per_sec": 0, 00:07:48.043 "w_mbytes_per_sec": 0 00:07:48.043 }, 00:07:48.043 "claimed": false, 00:07:48.043 "zoned": false, 00:07:48.043 "supported_io_types": { 00:07:48.043 "read": true, 00:07:48.043 "write": true, 00:07:48.043 "unmap": true, 00:07:48.043 "flush": false, 00:07:48.043 "reset": true, 00:07:48.043 "nvme_admin": false, 00:07:48.043 "nvme_io": false, 00:07:48.043 "nvme_io_md": false, 00:07:48.043 "write_zeroes": true, 00:07:48.043 "zcopy": false, 00:07:48.043 "get_zone_info": false, 00:07:48.043 "zone_management": false, 00:07:48.043 "zone_append": false, 00:07:48.043 "compare": false, 00:07:48.043 "compare_and_write": false, 00:07:48.043 "abort": false, 00:07:48.043 "seek_hole": true, 00:07:48.043 "seek_data": true, 00:07:48.043 "copy": false, 00:07:48.043 "nvme_iov_md": false 00:07:48.043 }, 00:07:48.043 "driver_specific": { 00:07:48.043 "lvol": { 00:07:48.043 "lvol_store_uuid": "d24a01f2-8fae-4b70-bd50-fbc8be2c5f9b", 00:07:48.043 "base_bdev": "aio_bdev", 00:07:48.043 "thin_provision": false, 00:07:48.043 "num_allocated_clusters": 38, 00:07:48.043 "snapshot": false, 00:07:48.043 "clone": false, 00:07:48.043 "esnap_clone": false 00:07:48.043 } 00:07:48.043 } 00:07:48.043 } 00:07:48.043 ] 00:07:48.043 09:45:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:48.043 09:45:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d24a01f2-8fae-4b70-bd50-fbc8be2c5f9b 00:07:48.043 09:45:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:48.302 09:45:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:48.302 09:45:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d24a01f2-8fae-4b70-bd50-fbc8be2c5f9b 00:07:48.302 09:45:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:48.302 09:45:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:48.302 09:45:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:48.561 [2024-12-11 09:45:58.042907] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:48.562 09:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d24a01f2-8fae-4b70-bd50-fbc8be2c5f9b 00:07:48.562 09:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:07:48.562 09:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d24a01f2-8fae-4b70-bd50-fbc8be2c5f9b 00:07:48.562 09:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:48.562 09:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:48.562 09:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:48.562 09:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:48.562 09:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:48.562 09:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:48.562 09:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:48.562 09:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:48.562 09:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d24a01f2-8fae-4b70-bd50-fbc8be2c5f9b 00:07:48.819 request: 00:07:48.819 { 00:07:48.819 "uuid": "d24a01f2-8fae-4b70-bd50-fbc8be2c5f9b", 00:07:48.819 "method": "bdev_lvol_get_lvstores", 00:07:48.819 "req_id": 1 00:07:48.819 } 00:07:48.819 Got JSON-RPC error response 00:07:48.819 response: 00:07:48.819 { 00:07:48.819 "code": -19, 00:07:48.819 "message": "No such device" 00:07:48.819 } 00:07:48.819 09:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:07:48.819 09:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:48.819 09:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:48.819 09:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:48.819 09:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:49.077 aio_bdev 00:07:49.077 09:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 31b57730-3ca4-4334-9acf-3b24a22941f1 00:07:49.077 09:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=31b57730-3ca4-4334-9acf-3b24a22941f1 00:07:49.077 09:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:49.077 09:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:49.077 09:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:49.077 09:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:49.077 09:45:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:49.077 09:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 31b57730-3ca4-4334-9acf-3b24a22941f1 -t 2000 00:07:49.335 [ 00:07:49.335 { 00:07:49.335 "name": "31b57730-3ca4-4334-9acf-3b24a22941f1", 00:07:49.335 "aliases": [ 00:07:49.335 "lvs/lvol" 00:07:49.335 ], 00:07:49.335 "product_name": "Logical Volume", 00:07:49.335 "block_size": 4096, 00:07:49.335 "num_blocks": 38912, 00:07:49.335 "uuid": "31b57730-3ca4-4334-9acf-3b24a22941f1", 00:07:49.335 "assigned_rate_limits": { 00:07:49.335 "rw_ios_per_sec": 0, 00:07:49.335 "rw_mbytes_per_sec": 0, 00:07:49.335 "r_mbytes_per_sec": 0, 00:07:49.335 "w_mbytes_per_sec": 0 00:07:49.335 }, 00:07:49.335 "claimed": false, 00:07:49.335 "zoned": false, 00:07:49.335 "supported_io_types": { 00:07:49.335 "read": true, 00:07:49.335 "write": true, 00:07:49.335 "unmap": true, 00:07:49.335 "flush": false, 00:07:49.335 "reset": true, 00:07:49.335 "nvme_admin": false, 00:07:49.335 "nvme_io": false, 00:07:49.335 "nvme_io_md": false, 00:07:49.335 "write_zeroes": true, 00:07:49.335 "zcopy": false, 00:07:49.335 "get_zone_info": false, 00:07:49.335 "zone_management": false, 00:07:49.335 "zone_append": false, 00:07:49.335 "compare": false, 00:07:49.335 "compare_and_write": false, 00:07:49.335 "abort": false, 00:07:49.335 "seek_hole": true, 00:07:49.335 "seek_data": true, 00:07:49.335 "copy": false, 00:07:49.335 "nvme_iov_md": false 00:07:49.335 }, 00:07:49.335 "driver_specific": { 00:07:49.335 "lvol": { 00:07:49.335 "lvol_store_uuid": "d24a01f2-8fae-4b70-bd50-fbc8be2c5f9b", 00:07:49.335 "base_bdev": "aio_bdev", 00:07:49.335 "thin_provision": false, 00:07:49.335 "num_allocated_clusters": 38, 00:07:49.335 "snapshot": false, 00:07:49.335 "clone": false, 00:07:49.335 "esnap_clone": false 00:07:49.335 } 00:07:49.335 } 00:07:49.335 } 00:07:49.335 ] 00:07:49.335 09:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:49.335 09:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d24a01f2-8fae-4b70-bd50-fbc8be2c5f9b 00:07:49.335 09:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:49.593 09:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:49.593 09:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d24a01f2-8fae-4b70-bd50-fbc8be2c5f9b 00:07:49.593 09:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:49.852 09:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:49.852 09:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 31b57730-3ca4-4334-9acf-3b24a22941f1 00:07:49.852 09:45:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d24a01f2-8fae-4b70-bd50-fbc8be2c5f9b 00:07:50.110 09:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:50.368 09:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:50.368 00:07:50.368 real 0m16.957s 00:07:50.368 user 0m45.093s 00:07:50.368 sys 0m3.640s 00:07:50.368 09:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.368 09:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:50.368 ************************************ 00:07:50.368 END TEST lvs_grow_dirty 00:07:50.368 ************************************ 00:07:50.368 09:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:50.368 09:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:07:50.368 09:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:07:50.368 09:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:07:50.368 09:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:50.368 09:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:07:50.368 09:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:07:50.368 09:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:07:50.368 09:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:50.368 nvmf_trace.0 00:07:50.368 09:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:07:50.368 09:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:50.368 09:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:50.368 09:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:50.368 09:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:50.368 09:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:50.368 09:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:50.368 09:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:50.368 rmmod nvme_tcp 00:07:50.627 rmmod nvme_fabrics 00:07:50.627 rmmod nvme_keyring 00:07:50.627 09:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:50.627 09:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:50.627 09:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:50.627 
09:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 4122061 ']' 00:07:50.627 09:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 4122061 00:07:50.627 09:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 4122061 ']' 00:07:50.627 09:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 4122061 00:07:50.627 09:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:07:50.627 09:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:50.627 09:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4122061 00:07:50.627 09:46:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:50.627 09:46:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:50.627 09:46:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4122061' 00:07:50.627 killing process with pid 4122061 00:07:50.627 09:46:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 4122061 00:07:50.627 09:46:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 4122061 00:07:50.627 09:46:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:50.627 09:46:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:50.627 09:46:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:50.627 09:46:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:50.884 09:46:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:50.884 09:46:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:50.884 09:46:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:50.884 09:46:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:50.884 09:46:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:50.884 09:46:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:50.884 09:46:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:50.884 09:46:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:52.787 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:52.787 00:07:52.787 real 0m43.798s 00:07:52.787 user 1m6.942s 00:07:52.787 sys 0m10.693s 00:07:52.787 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.787 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:52.787 ************************************ 00:07:52.787 END TEST nvmf_lvs_grow 00:07:52.787 ************************************ 00:07:52.787 09:46:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:52.787 09:46:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:52.787 09:46:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.787 09:46:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:52.787 ************************************ 00:07:52.787 START TEST nvmf_bdev_io_wait 00:07:52.787 ************************************ 00:07:52.787 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:53.046 * Looking for test storage... 00:07:53.046 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:53.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.046 --rc genhtml_branch_coverage=1 00:07:53.046 --rc genhtml_function_coverage=1 00:07:53.046 --rc genhtml_legend=1 00:07:53.046 --rc geninfo_all_blocks=1 00:07:53.046 --rc geninfo_unexecuted_blocks=1 00:07:53.046 00:07:53.046 ' 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:53.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.046 --rc genhtml_branch_coverage=1 00:07:53.046 --rc genhtml_function_coverage=1 00:07:53.046 --rc genhtml_legend=1 00:07:53.046 --rc geninfo_all_blocks=1 00:07:53.046 --rc geninfo_unexecuted_blocks=1 00:07:53.046 00:07:53.046 ' 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:53.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.046 --rc genhtml_branch_coverage=1 00:07:53.046 --rc genhtml_function_coverage=1 00:07:53.046 --rc genhtml_legend=1 00:07:53.046 --rc geninfo_all_blocks=1 00:07:53.046 --rc geninfo_unexecuted_blocks=1 00:07:53.046 00:07:53.046 ' 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:53.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.046 --rc genhtml_branch_coverage=1 00:07:53.046 --rc genhtml_function_coverage=1 00:07:53.046 --rc genhtml_legend=1 00:07:53.046 --rc geninfo_all_blocks=1 00:07:53.046 --rc geninfo_unexecuted_blocks=1 00:07:53.046 00:07:53.046 ' 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:53.046 09:46:02 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.046 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.047 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.047 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:53.047 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.047 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:53.047 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:53.047 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:53.047 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:53.047 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:53.047 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:53.047 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:53.047 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:53.047 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:53.047 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:53.047 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:53.047 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:53.047 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:07:53.047 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:53.047 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:53.047 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:53.047 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:53.047 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:53.047 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:53.047 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.047 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:53.047 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.047 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:53.047 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:53.047 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:53.047 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:59.616 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:59.616 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:07:59.616 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:59.616 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:59.616 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:59.616 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:59.616 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:59.616 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:59.617 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:59.617 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:59.617 09:46:09 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:59.617 Found net devices under 0000:af:00.0: cvl_0_0 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:59.617 Found net devices under 0000:af:00.1: cvl_0_1 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:59.617 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:59.877 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:59.877 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:59.877 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:59.877 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:59.877 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:59.877 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:59.877 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:59.877 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:59.877 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:59.877 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:07:59.877 00:07:59.877 --- 10.0.0.2 ping statistics --- 00:07:59.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.877 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:07:59.877 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:59.877 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:59.877 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:07:59.877 00:07:59.877 --- 10.0.0.1 ping statistics --- 00:07:59.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.877 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:07:59.877 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:59.877 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:07:59.877 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:59.877 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:59.877 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:59.877 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:59.877 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:59.877 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:59.877 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:59.877 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:59.877 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:59.877 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:59.877 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:59.877 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=4126644 00:07:59.877 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 4126644 00:07:59.877 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:59.877 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 4126644 ']' 00:07:59.877 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.877 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:59.877 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.877 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:59.877 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:59.877 [2024-12-11 09:46:09.428403] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
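(Aside: the nvmftestinit / nvmf_tcp_init sequence traced above condenses to the following minimal bash sketch. This is a reconstruction, not the verbatim common.sh source; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addresses are the values this particular run discovered.)

ip netns add cvl_0_0_ns_spdk                       # private namespace for the target port
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target-side NIC into it
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP listener port
ping -c 1 10.0.0.2                                 # initiator -> target reachability check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator reachability check
modprobe nvme-tcp                                  # host-side transport module for later connects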
00:07:59.877 [2024-12-11 09:46:09.428453] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:00.136 [2024-12-11 09:46:09.514051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:00.136 [2024-12-11 09:46:09.554511] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:00.136 [2024-12-11 09:46:09.554551] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:00.136 [2024-12-11 09:46:09.554558] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:00.136 [2024-12-11 09:46:09.554564] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:00.136 [2024-12-11 09:46:09.554569] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:00.136 [2024-12-11 09:46:09.556113] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:00.136 [2024-12-11 09:46:09.556239] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:08:00.136 [2024-12-11 09:46:09.556356] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.136 [2024-12-11 09:46:09.556356] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:08:00.136 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:00.136 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:00.136 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:00.136 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:00.136 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:00.136 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:00.136 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:00.136 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.136 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:00.136 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.136 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:00.136 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.136 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:00.137 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.137 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:00.137 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.137 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:08:00.137 [2024-12-11 09:46:09.700204] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:00.137 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.137 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:00.137 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.137 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:00.397 Malloc0 00:08:00.397 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.397 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:00.397 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.397 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:00.397 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.397 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:00.397 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.397 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:00.397 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.397 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:00.397 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.397 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:00.397 [2024-12-11 09:46:09.755427] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:00.397 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.397 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=4126822 00:08:00.397 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:00.397 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:00.397 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=4126824 00:08:00.397 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:00.397 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:00.397 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:00.397 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:00.397 { 00:08:00.397 "params": { 
00:08:00.397 "name": "Nvme$subsystem", 00:08:00.397 "trtype": "$TEST_TRANSPORT", 00:08:00.397 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:00.397 "adrfam": "ipv4", 00:08:00.397 "trsvcid": "$NVMF_PORT", 00:08:00.397 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:00.397 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:00.397 "hdgst": ${hdgst:-false}, 00:08:00.397 "ddgst": ${ddgst:-false} 00:08:00.397 }, 00:08:00.397 "method": "bdev_nvme_attach_controller" 00:08:00.397 } 00:08:00.397 EOF 00:08:00.397 )") 00:08:00.397 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:00.397 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:00.397 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=4126826 00:08:00.397 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:00.397 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:00.397 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:00.397 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:00.397 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:00.397 { 00:08:00.397 "params": { 00:08:00.397 "name": "Nvme$subsystem", 00:08:00.397 "trtype": "$TEST_TRANSPORT", 00:08:00.397 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:00.397 "adrfam": "ipv4", 00:08:00.397 "trsvcid": "$NVMF_PORT", 00:08:00.397 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:00.397 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:00.397 "hdgst": ${hdgst:-false}, 00:08:00.397 "ddgst": ${ddgst:-false} 00:08:00.397 }, 00:08:00.397 "method": "bdev_nvme_attach_controller" 00:08:00.397 } 00:08:00.397 EOF 00:08:00.397 )") 00:08:00.397 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=4126829 00:08:00.397 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:00.397 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:00.397 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:00.397 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:00.397 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:00.397 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:00.397 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:00.397 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:00.397 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:00.397 { 00:08:00.397 "params": { 
00:08:00.397 "name": "Nvme$subsystem", 00:08:00.397 "trtype": "$TEST_TRANSPORT", 00:08:00.397 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:00.397 "adrfam": "ipv4", 00:08:00.397 "trsvcid": "$NVMF_PORT", 00:08:00.397 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:00.397 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:00.397 "hdgst": ${hdgst:-false}, 00:08:00.397 "ddgst": ${ddgst:-false} 00:08:00.397 }, 00:08:00.397 "method": "bdev_nvme_attach_controller" 00:08:00.397 } 00:08:00.397 EOF 00:08:00.397 )") 00:08:00.397 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:00.397 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:00.397 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:00.397 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:00.397 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:00.397 { 00:08:00.397 "params": { 00:08:00.397 "name": "Nvme$subsystem", 00:08:00.397 "trtype": "$TEST_TRANSPORT", 00:08:00.397 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:00.397 "adrfam": "ipv4", 00:08:00.397 "trsvcid": "$NVMF_PORT", 00:08:00.397 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:00.397 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:00.397 "hdgst": ${hdgst:-false}, 00:08:00.397 "ddgst": ${ddgst:-false} 00:08:00.397 }, 00:08:00.397 "method": "bdev_nvme_attach_controller" 00:08:00.397 } 00:08:00.397 EOF 00:08:00.397 )") 00:08:00.397 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:00.397 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 4126822 00:08:00.397 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:00.397 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:00.397 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:00.397 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:00.397 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:00.397 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:00.397 "params": { 00:08:00.397 "name": "Nvme1", 00:08:00.397 "trtype": "tcp", 00:08:00.397 "traddr": "10.0.0.2", 00:08:00.397 "adrfam": "ipv4", 00:08:00.397 "trsvcid": "4420", 00:08:00.397 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:00.397 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:00.397 "hdgst": false, 00:08:00.397 "ddgst": false 00:08:00.397 }, 00:08:00.397 "method": "bdev_nvme_attach_controller" 00:08:00.397 }' 00:08:00.397 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:00.397 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:00.397 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:00.397 "params": { 00:08:00.397 "name": "Nvme1", 00:08:00.397 "trtype": "tcp", 00:08:00.397 "traddr": "10.0.0.2", 00:08:00.397 "adrfam": "ipv4", 00:08:00.397 "trsvcid": "4420", 00:08:00.397 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:00.397 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:00.397 "hdgst": false, 00:08:00.397 "ddgst": false 00:08:00.397 }, 00:08:00.398 "method": "bdev_nvme_attach_controller" 00:08:00.398 }' 00:08:00.398 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:00.398 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:00.398 "params": { 00:08:00.398 "name": "Nvme1", 00:08:00.398 "trtype": "tcp", 00:08:00.398 "traddr": "10.0.0.2", 00:08:00.398 "adrfam": "ipv4", 00:08:00.398 "trsvcid": "4420", 00:08:00.398 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:00.398 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:00.398 "hdgst": false, 00:08:00.398 "ddgst": false 00:08:00.398 }, 00:08:00.398 "method": "bdev_nvme_attach_controller" 00:08:00.398 }' 00:08:00.398 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:00.398 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:00.398 "params": { 00:08:00.398 "name": "Nvme1", 00:08:00.398 "trtype": "tcp", 00:08:00.398 "traddr": "10.0.0.2", 00:08:00.398 "adrfam": "ipv4", 00:08:00.398 "trsvcid": "4420", 00:08:00.398 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:00.398 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:00.398 "hdgst": false, 00:08:00.398 "ddgst": false 00:08:00.398 }, 00:08:00.398 "method": "bdev_nvme_attach_controller" 00:08:00.398 }' 00:08:00.398 [2024-12-11 09:46:09.808829] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:08:00.398 [2024-12-11 09:46:09.808831] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:08:00.398 [2024-12-11 09:46:09.808881] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:00.398 [2024-12-11 09:46:09.808881] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:00.398 [2024-12-11 09:46:09.808929] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:08:00.398 [2024-12-11 09:46:09.808964] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:00.398 [2024-12-11 09:46:09.812132] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization...
00:08:00.398 [2024-12-11 09:46:09.812177] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:00.657 [2024-12-11 09:46:10.010451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.657 [2024-12-11 09:46:10.062567] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:08:00.657 [2024-12-11 09:46:10.102767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.657 [2024-12-11 09:46:10.147405] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:08:00.657 [2024-12-11 09:46:10.201735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.915 [2024-12-11 09:46:10.251046] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:08:00.915 [2024-12-11 09:46:10.259984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.915 [2024-12-11 09:46:10.301707] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:08:00.915 Running I/O for 1 seconds... 00:08:00.915 Running I/O for 1 seconds... 00:08:00.915 Running I/O for 1 seconds... 00:08:00.915 Running I/O for 1 seconds... 00:08:01.847 10429.00 IOPS, 40.74 MiB/s 00:08:01.847 Latency(us) 00:08:01.847 [2024-12-11T08:46:11.422Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:01.847 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:01.847 Nvme1n1 : 1.01 10485.24 40.96 0.00 0.00 12151.07 6303.94 17226.61 00:08:01.847 [2024-12-11T08:46:11.422Z] =================================================================================================================== 00:08:01.847 [2024-12-11T08:46:11.422Z] Total : 10485.24 40.96 0.00 0.00 12151.07 6303.94 17226.61 00:08:02.106 243544.00 IOPS, 951.34 MiB/s 00:08:02.106 Latency(us) 00:08:02.106 [2024-12-11T08:46:11.681Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:02.106 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:02.106 Nvme1n1 : 1.00 243181.39 949.93 0.00 0.00 524.26 220.40 1661.81 00:08:02.106 [2024-12-11T08:46:11.681Z] =================================================================================================================== 00:08:02.106 [2024-12-11T08:46:11.681Z] Total : 243181.39 949.93 0.00 0.00 524.26 220.40 1661.81 00:08:02.106 13218.00 IOPS, 51.63 MiB/s [2024-12-11T08:46:11.681Z] 9154.00 IOPS, 35.76 MiB/s 00:08:02.106 Latency(us) 00:08:02.106 [2024-12-11T08:46:11.681Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:02.106 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:02.106 Nvme1n1 : 1.01 13289.97 51.91 0.00 0.00 9605.86 3666.90 22719.15 00:08:02.106 [2024-12-11T08:46:11.681Z] =================================================================================================================== 00:08:02.106 [2024-12-11T08:46:11.681Z] Total : 13289.97 51.91 0.00 0.00 9605.86 3666.90 22719.15 00:08:02.106 00:08:02.106 Latency(us) 00:08:02.106 [2024-12-11T08:46:11.681Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:02.106 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:02.106 Nvme1n1 : 1.01 9217.56 36.01 0.00 0.00 13835.91 6116.69 24591.60 00:08:02.106 [2024-12-11T08:46:11.681Z] 
=================================================================================================================== 00:08:02.106 [2024-12-11T08:46:11.681Z] Total : 9217.56 36.01 0.00 0.00 13835.91 6116.69 24591.60 00:08:02.106 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 4126824 00:08:02.106 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 4126826 00:08:02.106 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 4126829 00:08:02.106 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:02.106 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.106 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:02.106 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.106 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:02.106 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:02.106 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:02.106 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:02.106 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:02.106 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:02.106 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:02.106 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:02.106 rmmod nvme_tcp 00:08:02.106 rmmod nvme_fabrics 00:08:02.106 rmmod nvme_keyring 00:08:02.106 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:02.366 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:02.366 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:02.366 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 4126644 ']' 00:08:02.366 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 4126644 00:08:02.366 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 4126644 ']' 00:08:02.366 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 4126644 00:08:02.366 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:02.366 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:02.366 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4126644 00:08:02.366 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:02.366 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:02.366 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 4126644' 00:08:02.366 killing process with pid 4126644 00:08:02.366 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 4126644 00:08:02.366 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 4126644 00:08:02.366 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:02.366 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:02.366 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:02.366 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:02.366 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:02.366 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:02.366 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:02.366 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:02.366 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:02.366 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:02.366 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:02.366 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:04.903 09:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:04.903 00:08:04.903 real 0m11.616s 00:08:04.903 user 0m16.385s 00:08:04.903 sys 0m6.835s 00:08:04.903 09:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:04.903 09:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:04.903 ************************************ 00:08:04.903 END TEST nvmf_bdev_io_wait 00:08:04.903 ************************************ 00:08:04.903 09:46:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:04.903 09:46:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:04.903 09:46:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:04.903 ************************************ 00:08:04.903 START TEST nvmf_queue_depth 00:08:04.903 ************************************ 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:04.903 * Looking for test storage... 
00:08:04.903 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:04.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.903 --rc genhtml_branch_coverage=1 00:08:04.903 --rc genhtml_function_coverage=1 00:08:04.903 --rc genhtml_legend=1 00:08:04.903 --rc geninfo_all_blocks=1 00:08:04.903 --rc geninfo_unexecuted_blocks=1 00:08:04.903 00:08:04.903 ' 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:04.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.903 --rc genhtml_branch_coverage=1 00:08:04.903 --rc genhtml_function_coverage=1 00:08:04.903 --rc genhtml_legend=1 00:08:04.903 --rc geninfo_all_blocks=1 00:08:04.903 --rc geninfo_unexecuted_blocks=1 00:08:04.903 00:08:04.903 ' 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:04.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.903 --rc genhtml_branch_coverage=1 00:08:04.903 --rc genhtml_function_coverage=1 00:08:04.903 --rc genhtml_legend=1 00:08:04.903 --rc geninfo_all_blocks=1 00:08:04.903 --rc geninfo_unexecuted_blocks=1 00:08:04.903 00:08:04.903 ' 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:04.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.903 --rc genhtml_branch_coverage=1 00:08:04.903 --rc genhtml_function_coverage=1 00:08:04.903 --rc genhtml_legend=1 00:08:04.903 --rc geninfo_all_blocks=1 00:08:04.903 --rc geninfo_unexecuted_blocks=1 00:08:04.903 00:08:04.903 ' 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:04.903 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.904 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.904 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.904 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:04.904 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.904 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:04.904 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:04.904 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:04.904 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:04.904 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:04.904 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:04.904 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:04.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:04.904 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:04.904 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:04.904 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:04.904 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:04.904 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:08:04.904 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:04.904 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:04.904 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:04.904 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:04.904 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:04.904 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:04.904 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:04.904 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:04.904 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:04.904 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:04.904 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:04.904 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:04.904 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:04.904 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:11.477 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:11.477 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:11.477 Found net devices under 0000:af:00.0: cvl_0_0 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:11.477 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:11.478 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:11.478 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:11.478 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:11.478 Found net devices under 0000:af:00.1: cvl_0_1 00:08:11.478 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:11.478 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:11.478 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:08:11.478 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:11.478 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:11.478 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:11.478 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
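Editor's note on the trace above: nvmf/common.sh enumerates supported NICs through an associative array, pci_bus_cache, that maps each "vendor:device" pair to the PCI addresses that matched it; the per-family arrays (e810, x722, mlx) pick out the classes the test cares about, and each surviving address is resolved to a kernel netdev by globbing its sysfs net/ directory. A minimal sketch of the same pattern, rebuilding the cache directly from sysfs (an assumption for illustration; the real script populates the cache elsewhere in common.sh):

    declare -A pci_bus_cache
    for dev in /sys/bus/pci/devices/*; do
        # sysfs reports IDs as 0x-prefixed hex, matching the keys in the trace
        pci_bus_cache["$(<"$dev/vendor"):$(<"$dev/device")"]+="${dev##*/} "
    done

    intel=0x8086
    e810=(${pci_bus_cache["$intel:0x159b"]})           # E810 ports (ice driver)
    for pci in "${e810[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # netdev names under this function
        echo "Found net devices under $pci: ${pci_net_devs[*]##*/}"
    done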
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:11.478 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:11.478 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:11.478 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:11.478 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:11.478 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:11.478 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:11.478 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:11.478 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:11.478 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:11.478 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:11.478 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:11.478 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:11.478 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:11.478 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:11.478 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:11.478 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:11.478 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:11.478 09:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:11.478 09:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:11.478 09:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:11.478 09:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:11.478 09:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:11.478 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:11.478 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.338 ms 00:08:11.478 00:08:11.478 --- 10.0.0.2 ping statistics --- 00:08:11.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.478 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:08:11.478 09:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:11.478 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
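Collected in one place, the namespace plumbing traced above (common.sh's nvmf_tcp_init): the target port is moved into its own network namespace so initiator-to-target TCP traffic crosses the physical link instead of loopback, port 4420 is opened with a comment-tagged iptables rule, and both directions are ping-verified. Interface names and addresses are copied from the trace; run as root:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port leaves the host ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the host ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # the rule is tagged so teardown can strip it with a single grep (see iptr later)
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator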
00:08:11.478 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:08:11.478 00:08:11.478 --- 10.0.0.1 ping statistics --- 00:08:11.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.478 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:08:11.478 09:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:11.478 09:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:08:11.478 09:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:11.478 09:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:11.478 09:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:11.478 09:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:11.478 09:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:11.478 09:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:11.478 09:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:11.737 09:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:11.737 09:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:11.737 09:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:11.737 09:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:11.737 09:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=4131095 00:08:11.737 09:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:11.737 09:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 4131095 00:08:11.737 09:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 4131095 ']' 00:08:11.737 09:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.737 09:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:11.737 09:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.737 09:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:11.737 09:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:11.737 [2024-12-11 09:46:21.120859] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
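nvmfappstart above launches nvmf_tgt inside the target namespace and then blocks in waitforlisten until the RPC socket answers. The real helper in autotest_common.sh does more bookkeeping; a minimal sketch of the polling idea, assuming the standard SPDK tree layout and the default /var/tmp/spdk.sock socket:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do
        # rpc.py fails until the target is up and listening on /var/tmp/spdk.sock
        ./scripts/rpc.py rpc_get_methods &> /dev/null && break
        kill -0 "$nvmfpid" || break   # stop waiting if the target died during startup
        sleep 0.1
    done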
00:08:11.737 [2024-12-11 09:46:21.120903] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:11.737 [2024-12-11 09:46:21.205866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.737 [2024-12-11 09:46:21.243426] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:11.737 [2024-12-11 09:46:21.243461] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:11.737 [2024-12-11 09:46:21.243467] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:11.737 [2024-12-11 09:46:21.243473] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:11.737 [2024-12-11 09:46:21.243478] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:11.737 [2024-12-11 09:46:21.244004] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:12.672 09:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:12.672 09:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:12.672 09:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:12.672 09:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:12.672 09:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:12.672 09:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:12.672 09:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:12.672 09:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.672 09:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:12.672 [2024-12-11 09:46:21.992557] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:12.672 09:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.672 09:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:12.672 09:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.672 09:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:12.672 Malloc0 00:08:12.672 09:46:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.672 09:46:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:12.672 09:46:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.672 09:46:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:12.672 09:46:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.672 09:46:22 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:12.672 09:46:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.672 09:46:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:12.672 09:46:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.672 09:46:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:12.672 09:46:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.672 09:46:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:12.672 [2024-12-11 09:46:22.042848] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:12.672 09:46:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.672 09:46:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=4131217 00:08:12.672 09:46:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:12.672 09:46:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:12.672 09:46:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 4131217 /var/tmp/bdevperf.sock 00:08:12.672 09:46:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 4131217 ']' 00:08:12.672 09:46:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:12.672 09:46:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:12.672 09:46:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:12.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:12.672 09:46:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:12.673 09:46:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:12.673 [2024-12-11 09:46:22.094830] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
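Lines 23-35 of target/queue_depth.sh, traced piecemeal above and below, reduce to a short RPC conversation: configure the target over its socket, start bdevperf idle (-z) on its own socket, attach the remote namespace over the fabric, and trigger the run. Spelled out as direct commands, with paths and flags copied verbatim from the trace:

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192        # transport opts exactly as traced
    $rpc bdev_malloc_create 64 512 -b Malloc0           # 64 MiB bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests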
00:08:12.673 [2024-12-11 09:46:22.094879] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4131217 ] 00:08:12.673 [2024-12-11 09:46:22.176414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.673 [2024-12-11 09:46:22.216849] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.931 09:46:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:12.931 09:46:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:12.931 09:46:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:12.931 09:46:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.931 09:46:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:12.931 NVMe0n1 00:08:12.931 09:46:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.931 09:46:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:13.190 Running I/O for 10 seconds... 00:08:15.061 12288.00 IOPS, 48.00 MiB/s [2024-12-11T08:46:26.018Z] 12291.50 IOPS, 48.01 MiB/s [2024-12-11T08:46:26.953Z] 12366.33 IOPS, 48.31 MiB/s [2024-12-11T08:46:27.941Z] 12507.50 IOPS, 48.86 MiB/s [2024-12-11T08:46:28.971Z] 12485.40 IOPS, 48.77 MiB/s [2024-12-11T08:46:29.907Z] 12542.00 IOPS, 48.99 MiB/s [2024-12-11T08:46:30.841Z] 12566.14 IOPS, 49.09 MiB/s [2024-12-11T08:46:31.777Z] 12539.88 IOPS, 48.98 MiB/s [2024-12-11T08:46:32.712Z] 12594.33 IOPS, 49.20 MiB/s [2024-12-11T08:46:32.712Z] 12583.70 IOPS, 49.16 MiB/s 00:08:23.137 Latency(us) 00:08:23.137 [2024-12-11T08:46:32.712Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:23.137 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:23.137 Verification LBA range: start 0x0 length 0x4000 00:08:23.137 NVMe0n1 : 10.06 12613.43 49.27 0.00 0.00 80936.86 18974.23 54426.09 00:08:23.137 [2024-12-11T08:46:32.713Z] =================================================================================================================== 00:08:23.138 [2024-12-11T08:46:32.713Z] Total : 12613.43 49.27 0.00 0.00 80936.86 18974.23 54426.09 00:08:23.138 { 00:08:23.138 "results": [ 00:08:23.138 { 00:08:23.138 "job": "NVMe0n1", 00:08:23.138 "core_mask": "0x1", 00:08:23.138 "workload": "verify", 00:08:23.138 "status": "finished", 00:08:23.138 "verify_range": { 00:08:23.138 "start": 0, 00:08:23.138 "length": 16384 00:08:23.138 }, 00:08:23.138 "queue_depth": 1024, 00:08:23.138 "io_size": 4096, 00:08:23.138 "runtime": 10.057612, 00:08:23.138 "iops": 12613.431498451124, 00:08:23.138 "mibps": 49.2712167908247, 00:08:23.138 "io_failed": 0, 00:08:23.138 "io_timeout": 0, 00:08:23.138 "avg_latency_us": 80936.85729561526, 00:08:23.138 "min_latency_us": 18974.23238095238, 00:08:23.138 "max_latency_us": 54426.08761904762 00:08:23.138 } 00:08:23.138 ], 00:08:23.138 "core_count": 1 00:08:23.138 } 00:08:23.138 09:46:32 
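A quick consistency check on the result block above: at a 4 KiB I/O size the reported IOPS and MiB/s columns must agree, and the average latency at queue depth 1024 should satisfy Little's law. Both hold:

    # 12613.43 IOPS x 4096 B = 51,664,609 B/s; divide by 1024^2 for MiB/s
    echo 'scale=2; 12613.43 * 4096 / 1048576' | bc       # -> 49.27, matching "mibps"
    # IOPS x avg latency (s) = mean in-flight I/Os; -q 1024 was kept nearly full
    echo 'scale=0; 12613.43 * 80936.86 / 1000000' | bc   # -> 1020, vs. the configured 1024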
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 4131217 00:08:23.138 09:46:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 4131217 ']' 00:08:23.138 09:46:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 4131217 00:08:23.138 09:46:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:23.138 09:46:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:23.138 09:46:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4131217 00:08:23.396 09:46:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:23.396 09:46:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:23.396 09:46:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4131217' 00:08:23.396 killing process with pid 4131217 00:08:23.397 09:46:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 4131217 00:08:23.397 Received shutdown signal, test time was about 10.000000 seconds 00:08:23.397 00:08:23.397 Latency(us) 00:08:23.397 [2024-12-11T08:46:32.972Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:23.397 [2024-12-11T08:46:32.972Z] =================================================================================================================== 00:08:23.397 [2024-12-11T08:46:32.972Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:23.397 09:46:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 4131217 00:08:23.397 09:46:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:23.397 09:46:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:23.397 09:46:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:23.397 09:46:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:23.397 09:46:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:23.397 09:46:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:23.397 09:46:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:23.397 09:46:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:23.397 rmmod nvme_tcp 00:08:23.397 rmmod nvme_fabrics 00:08:23.397 rmmod nvme_keyring 00:08:23.656 09:46:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:23.656 09:46:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:23.656 09:46:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:23.656 09:46:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 4131095 ']' 00:08:23.656 09:46:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 4131095 00:08:23.656 09:46:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 4131095 ']' 00:08:23.656 09:46:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
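killprocess above refuses to signal blindly: it re-reads the PID's command name and confirms it is still an SPDK reactor before killing, so a recycled PID can never take down an unrelated process. A condensed sketch of that guard (the real helper in autotest_common.sh also special-cases processes launched under sudo, omitted here):

    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1
        process_name=$(ps --no-headers -o comm= "$pid") || return 0   # already gone
        [[ $process_name == reactor_* ]] || return 1                  # PID was reused
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }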
common/autotest_common.sh@958 -- # kill -0 4131095 00:08:23.656 09:46:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:23.656 09:46:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:23.656 09:46:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4131095 00:08:23.656 09:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:23.656 09:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:23.656 09:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4131095' 00:08:23.656 killing process with pid 4131095 00:08:23.656 09:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 4131095 00:08:23.656 09:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 4131095 00:08:23.656 09:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:23.656 09:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:23.656 09:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:23.656 09:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:23.656 09:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:23.656 09:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:23.656 09:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:23.656 09:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:23.656 09:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:23.656 09:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:23.656 09:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:23.656 09:46:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:26.191 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:26.191 00:08:26.191 real 0m21.251s 00:08:26.191 user 0m24.201s 00:08:26.191 sys 0m6.665s 00:08:26.191 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.191 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:26.191 ************************************ 00:08:26.191 END TEST nvmf_queue_depth 00:08:26.191 ************************************ 00:08:26.191 09:46:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:26.191 09:46:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:26.191 09:46:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.191 09:46:35 nvmf_tcp.nvmf_target_core -- 
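nvmftestfini's network teardown, traced above: every firewall rule the tests installed carries the SPDK_NVMF comment tag, so iptr strips them all in one save/filter/restore pass before the namespace and addresses are removed. Condensed (treating the namespace delete as what _remove_spdk_ns amounts to here, which is an assumption):

    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the tagged rules
    ip netns delete cvl_0_0_ns_spdk 2> /dev/null           # _remove_spdk_ns, in effect
    ip -4 addr flush cvl_0_1                               # clear the initiator address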
common/autotest_common.sh@10 -- # set +x 00:08:26.191 ************************************ 00:08:26.191 START TEST nvmf_target_multipath 00:08:26.191 ************************************ 00:08:26.191 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:26.191 * Looking for test storage... 00:08:26.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:26.191 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:26.191 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:08:26.191 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:26.191 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:26.191 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:26.191 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:26.191 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:26.191 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:26.191 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:26.191 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:26.191 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:26.191 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:26.191 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:26.191 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:26.191 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:26.191 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:26.191 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:26.191 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:26.191 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:26.191 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:26.191 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:26.191 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:26.191 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:26.191 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:26.191 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:26.191 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:26.191 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:26.191 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:26.191 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:26.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.192 --rc genhtml_branch_coverage=1 00:08:26.192 --rc genhtml_function_coverage=1 00:08:26.192 --rc genhtml_legend=1 00:08:26.192 --rc geninfo_all_blocks=1 00:08:26.192 --rc geninfo_unexecuted_blocks=1 00:08:26.192 00:08:26.192 ' 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:26.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.192 --rc genhtml_branch_coverage=1 00:08:26.192 --rc genhtml_function_coverage=1 00:08:26.192 --rc genhtml_legend=1 00:08:26.192 --rc geninfo_all_blocks=1 00:08:26.192 --rc geninfo_unexecuted_blocks=1 00:08:26.192 00:08:26.192 ' 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:26.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.192 --rc genhtml_branch_coverage=1 00:08:26.192 --rc genhtml_function_coverage=1 00:08:26.192 --rc genhtml_legend=1 00:08:26.192 --rc geninfo_all_blocks=1 00:08:26.192 --rc geninfo_unexecuted_blocks=1 00:08:26.192 00:08:26.192 ' 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:26.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.192 --rc genhtml_branch_coverage=1 00:08:26.192 --rc genhtml_function_coverage=1 00:08:26.192 --rc genhtml_legend=1 00:08:26.192 --rc geninfo_all_blocks=1 00:08:26.192 --rc geninfo_unexecuted_blocks=1 00:08:26.192 00:08:26.192 ' 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
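The scripts/common.sh trace above is "lt 1.15 2" deciding which lcov option set to use: both version strings are split on ., - and :, then compared numerically field by field, with missing fields treated as zero. The same logic, condensed from cmp_versions to just the "<" case:

    lt() {
        local IFS=.-: v=0 ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        while (( v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}) )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ++v ))
        done
        return 1                    # equal versions are not "less than"
    }
    lt 1.15 2 && echo 'lcov is older than 2.x'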
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
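Among the common.sh settings traced above, the host identity is generated rather than hard-coded: nvme gen-hostnqn emits a UUID-based NQN, and the host ID reuses the UUID portion. A sketch; the exact derivation of NVME_HOSTID inside common.sh is an assumption, though the resulting values match the trace:

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}    # keep only the UUID part (assumed derivation)
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")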
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:26.192 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:26.192 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:32.762 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:32.762 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:32.762 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:32.762 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:32.762 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:32.762 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:32.762 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:32.762 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
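The "line 33: [: : integer expression expected" message above is non-fatal, but it is a real bug pattern: an unset flag reaches a numeric test as the empty string. A hedged sketch of the failing shape and the usual fix; the flag and option names are placeholders, not the actual variables on line 33:

    # failing shape: [ '' -eq 1 ] when the flag was never exported
    if [ "${SPDK_TEST_SOMETHING:-0}" -eq 1 ]; then   # default to 0 instead
        NVMF_APP+=(--hypothetical-option)            # placeholder consumer of the flag
    fi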
net_devs=() 00:08:32.762 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:32.762 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:32.762 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:32.762 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:32.762 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:32.762 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:32.762 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:32.762 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:32.762 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:32.762 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:32.762 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:32.762 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:32.762 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:32.762 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:32.762 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:32.762 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:32.762 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:32.762 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:32.762 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:32.762 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:32.763 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:32.763 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:32.763 Found net devices under 0000:af:00.0: cvl_0_0 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:32.763 09:46:42 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:32.763 Found net devices under 0000:af:00.1: cvl_0_1 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:32.763 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:33.022 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:33.022 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:33.022 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:33.022 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:33.022 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.326 ms 00:08:33.022 00:08:33.022 --- 10.0.0.2 ping statistics --- 00:08:33.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.022 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:08:33.022 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:33.022 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:33.022 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:08:33.022 00:08:33.022 --- 10.0.0.1 ping statistics --- 00:08:33.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.022 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:08:33.022 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:33.022 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:08:33.022 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:33.022 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:33.022 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:33.022 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:33.022 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:33.022 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:33.022 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:33.022 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:33.022 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:33.022 only one NIC for nvmf test 00:08:33.022 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:33.022 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:33.022 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:33.022 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:33.022 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
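The "'[' -z ']'" at multipath.sh line 45 above is the two-port guard evaluating an empty expansion; NVMF_SECOND_TARGET_IP was set empty during init, so the test prints its notice and exits cleanly rather than failing. In effect (the variable tested at line 45 is an assumption based on the trace):

    if [ -z "$NVMF_SECOND_TARGET_IP" ]; then
        echo 'only one NIC for nvmf test'
        nvmftestfini
        exit 0
    fi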
00:08:33.022 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:33.022 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:33.022 rmmod nvme_tcp 00:08:33.022 rmmod nvme_fabrics 00:08:33.022 rmmod nvme_keyring 00:08:33.022 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:33.022 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:33.022 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:33.022 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:33.022 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:33.022 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:33.022 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:33.022 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:33.022 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:33.022 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:33.022 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:33.022 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:33.022 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:33.022 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.022 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:33.022 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:35.559 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:35.559 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:35.559 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:35.559 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:35.559 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:35.559 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:35.559 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:35.559 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:35.559 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:35.559 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:35.559 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:35.559 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:08:35.559 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:35.559 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:35.559 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:35.559 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:35.559 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:35.559 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:35.559 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:35.559 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:35.559 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:35.559 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:35.559 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.559 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:35.559 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:35.559 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:35.560 00:08:35.560 real 0m9.236s 00:08:35.560 user 0m2.023s 00:08:35.560 sys 0m5.217s 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:35.560 ************************************ 00:08:35.560 END TEST nvmf_target_multipath 00:08:35.560 ************************************ 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:35.560 ************************************ 00:08:35.560 START TEST nvmf_zcopy 00:08:35.560 ************************************ 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:35.560 * Looking for test storage... 
00:08:35.560 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:35.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.560 --rc genhtml_branch_coverage=1 00:08:35.560 --rc genhtml_function_coverage=1 00:08:35.560 --rc genhtml_legend=1 00:08:35.560 --rc geninfo_all_blocks=1 00:08:35.560 --rc geninfo_unexecuted_blocks=1 00:08:35.560 00:08:35.560 ' 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:35.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.560 --rc genhtml_branch_coverage=1 00:08:35.560 --rc genhtml_function_coverage=1 00:08:35.560 --rc genhtml_legend=1 00:08:35.560 --rc geninfo_all_blocks=1 00:08:35.560 --rc geninfo_unexecuted_blocks=1 00:08:35.560 00:08:35.560 ' 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:35.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.560 --rc genhtml_branch_coverage=1 00:08:35.560 --rc genhtml_function_coverage=1 00:08:35.560 --rc genhtml_legend=1 00:08:35.560 --rc geninfo_all_blocks=1 00:08:35.560 --rc geninfo_unexecuted_blocks=1 00:08:35.560 00:08:35.560 ' 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:35.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.560 --rc genhtml_branch_coverage=1 00:08:35.560 --rc genhtml_function_coverage=1 00:08:35.560 --rc genhtml_legend=1 00:08:35.560 --rc geninfo_all_blocks=1 00:08:35.560 --rc geninfo_unexecuted_blocks=1 00:08:35.560 00:08:35.560 ' 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.560 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.561 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.561 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:35.561 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.561 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:35.561 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:35.561 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:35.561 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:35.561 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:35.561 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:35.561 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:35.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:35.561 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:35.561 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:35.561 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:35.561 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:35.561 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:35.561 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:08:35.561 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:35.561 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:35.561 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:35.561 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.561 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:35.561 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:35.561 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:35.561 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:35.561 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:35.561 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:42.131 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:42.131 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:42.131 Found net devices under 0000:af:00.0: cvl_0_0 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:42.131 Found net devices under 0000:af:00.1: cvl_0_1 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:42.131 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:42.131 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:08:42.131 00:08:42.131 --- 10.0.0.2 ping statistics --- 00:08:42.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.131 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:08:42.131 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:42.131 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:42.131 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:08:42.131 00:08:42.131 --- 10.0.0.1 ping statistics --- 00:08:42.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.131 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:08:42.132 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:42.132 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:08:42.132 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:42.132 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:42.132 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:42.132 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:42.132 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:42.132 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:42.132 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:42.132 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:42.132 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:42.132 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:42.132 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:42.132 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=4140960 00:08:42.132 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:42.132 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 4140960 00:08:42.132 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 4140960 ']' 00:08:42.132 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.132 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:42.132 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.132 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:42.132 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:42.390 [2024-12-11 09:46:51.747053] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
00:08:42.390 [2024-12-11 09:46:51.747102] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:42.390 [2024-12-11 09:46:51.829073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.390 [2024-12-11 09:46:51.867923] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:42.390 [2024-12-11 09:46:51.867957] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:42.391 [2024-12-11 09:46:51.867964] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:42.391 [2024-12-11 09:46:51.867970] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:42.391 [2024-12-11 09:46:51.867975] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:42.391 [2024-12-11 09:46:51.868530] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:42.391 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:42.391 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:08:42.391 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:42.391 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:42.391 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:42.650 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:42.650 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:42.650 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:42.650 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.650 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:42.650 [2024-12-11 09:46:52.003730] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:42.650 09:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.650 09:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:42.650 09:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.650 09:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:42.650 09:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.650 09:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:42.650 09:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.650 09:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:42.650 [2024-12-11 09:46:52.023910] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:08:42.650 09:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.650 09:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:42.650 09:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.650 09:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:42.650 09:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.650 09:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:42.650 09:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.650 09:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:42.650 malloc0 00:08:42.650 09:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.650 09:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:42.650 09:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.650 09:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:42.650 09:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.650 09:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:42.650 09:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:42.650 09:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:42.650 09:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:42.650 09:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:42.650 09:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:42.650 { 00:08:42.650 "params": { 00:08:42.650 "name": "Nvme$subsystem", 00:08:42.650 "trtype": "$TEST_TRANSPORT", 00:08:42.650 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:42.650 "adrfam": "ipv4", 00:08:42.650 "trsvcid": "$NVMF_PORT", 00:08:42.650 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:42.650 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:42.650 "hdgst": ${hdgst:-false}, 00:08:42.650 "ddgst": ${ddgst:-false} 00:08:42.650 }, 00:08:42.650 "method": "bdev_nvme_attach_controller" 00:08:42.650 } 00:08:42.650 EOF 00:08:42.650 )") 00:08:42.650 09:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:42.650 09:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:08:42.650 09:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:42.650 09:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:42.650 "params": { 00:08:42.650 "name": "Nvme1", 00:08:42.650 "trtype": "tcp", 00:08:42.650 "traddr": "10.0.0.2", 00:08:42.650 "adrfam": "ipv4", 00:08:42.650 "trsvcid": "4420", 00:08:42.650 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:42.650 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:42.650 "hdgst": false, 00:08:42.650 "ddgst": false 00:08:42.650 }, 00:08:42.650 "method": "bdev_nvme_attach_controller" 00:08:42.650 }' 00:08:42.650 [2024-12-11 09:46:52.108810] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:08:42.650 [2024-12-11 09:46:52.108850] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4140981 ] 00:08:42.650 [2024-12-11 09:46:52.184922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.909 [2024-12-11 09:46:52.224529] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.909 Running I/O for 10 seconds... 00:08:45.220 8692.00 IOPS, 67.91 MiB/s [2024-12-11T08:46:55.730Z] 8773.00 IOPS, 68.54 MiB/s [2024-12-11T08:46:56.664Z] 8783.33 IOPS, 68.62 MiB/s [2024-12-11T08:46:57.600Z] 8787.25 IOPS, 68.65 MiB/s [2024-12-11T08:46:58.534Z] 8801.80 IOPS, 68.76 MiB/s [2024-12-11T08:46:59.910Z] 8812.17 IOPS, 68.85 MiB/s [2024-12-11T08:47:00.844Z] 8810.00 IOPS, 68.83 MiB/s [2024-12-11T08:47:01.780Z] 8785.75 IOPS, 68.64 MiB/s [2024-12-11T08:47:02.716Z] 8787.11 IOPS, 68.65 MiB/s [2024-12-11T08:47:02.716Z] 8788.40 IOPS, 68.66 MiB/s 00:08:53.141 Latency(us) 00:08:53.141 [2024-12-11T08:47:02.716Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:53.141 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:53.141 Verification LBA range: start 0x0 length 0x1000 00:08:53.141 Nvme1n1 : 10.01 8788.36 68.66 0.00 0.00 14523.55 1825.65 23468.13 00:08:53.141 [2024-12-11T08:47:02.716Z] =================================================================================================================== 00:08:53.141 [2024-12-11T08:47:02.716Z] Total : 8788.36 68.66 0.00 0.00 14523.55 1825.65 23468.13 00:08:53.141 09:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=4142796 00:08:53.141 09:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:53.141 09:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:53.141 09:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:53.141 09:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:53.141 09:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:53.141 09:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:53.141 09:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:53.141 09:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:53.141 { 00:08:53.141 "params": { 00:08:53.141 "name": 
"Nvme$subsystem", 00:08:53.141 "trtype": "$TEST_TRANSPORT", 00:08:53.141 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:53.141 "adrfam": "ipv4", 00:08:53.141 "trsvcid": "$NVMF_PORT", 00:08:53.141 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:53.141 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:53.141 "hdgst": ${hdgst:-false}, 00:08:53.141 "ddgst": ${ddgst:-false} 00:08:53.141 }, 00:08:53.141 "method": "bdev_nvme_attach_controller" 00:08:53.141 } 00:08:53.141 EOF 00:08:53.141 )") 00:08:53.141 09:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:53.141 [2024-12-11 09:47:02.661750] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.141 [2024-12-11 09:47:02.661782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.141 09:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:08:53.141 09:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:53.141 09:47:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:53.141 "params": { 00:08:53.141 "name": "Nvme1", 00:08:53.141 "trtype": "tcp", 00:08:53.141 "traddr": "10.0.0.2", 00:08:53.141 "adrfam": "ipv4", 00:08:53.141 "trsvcid": "4420", 00:08:53.141 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:53.141 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:53.141 "hdgst": false, 00:08:53.141 "ddgst": false 00:08:53.141 }, 00:08:53.141 "method": "bdev_nvme_attach_controller" 00:08:53.141 }' 00:08:53.141 [2024-12-11 09:47:02.673738] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.141 [2024-12-11 09:47:02.673750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.141 [2024-12-11 09:47:02.685766] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.141 [2024-12-11 09:47:02.685776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.141 [2024-12-11 09:47:02.697796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.141 [2024-12-11 09:47:02.697805] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.141 [2024-12-11 09:47:02.698007] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
00:08:53.141 [2024-12-11 09:47:02.698046] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4142796 ] 00:08:53.141 [2024-12-11 09:47:02.709831] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.141 [2024-12-11 09:47:02.709842] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.400 [2024-12-11 09:47:02.721862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.400 [2024-12-11 09:47:02.721871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.400 [2024-12-11 09:47:02.733894] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.400 [2024-12-11 09:47:02.733904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.400 [2024-12-11 09:47:02.745926] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.400 [2024-12-11 09:47:02.745934] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.400 [2024-12-11 09:47:02.757958] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.400 [2024-12-11 09:47:02.757968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.400 [2024-12-11 09:47:02.769991] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.400 [2024-12-11 09:47:02.770001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.400 [2024-12-11 09:47:02.774728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.400 [2024-12-11 09:47:02.782038] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.400 [2024-12-11 09:47:02.782055] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.400 [2024-12-11 09:47:02.794054] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.400 [2024-12-11 09:47:02.794065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.400 [2024-12-11 09:47:02.806084] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.400 [2024-12-11 09:47:02.806093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.400 [2024-12-11 09:47:02.814928] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.400 [2024-12-11 09:47:02.818116] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.400 [2024-12-11 09:47:02.818127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.400 [2024-12-11 09:47:02.830162] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.400 [2024-12-11 09:47:02.830181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.400 [2024-12-11 09:47:02.842188] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.400 [2024-12-11 09:47:02.842205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.400 [2024-12-11 09:47:02.854222] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use
00:08:53.400 [2024-12-11 09:47:02.854236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:53.400 [2024-12-11 09:47:02.866249] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:53.400 [2024-12-11 09:47:02.866261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[the same two-entry pair — subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, followed by nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace — repeats at roughly 12-14 ms intervals from 09:47:02.878 through 09:47:06.992; only the distinct entries in that window are shown below]
00:08:53.660 Running I/O for 5 seconds...
00:08:54.696 17109.00 IOPS, 133.66 MiB/s [2024-12-11T08:47:04.271Z]
00:08:55.731 17172.00 IOPS, 134.16 MiB/s [2024-12-11T08:47:05.306Z]
00:08:56.767 17175.67 IOPS, 134.18 MiB/s [2024-12-11T08:47:06.342Z]
00:08:57.544 [2024-12-11 09:47:06.992771] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:57.544 [2024-12-11 09:47:06.992789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:57.544 [2024-12-11 09:47:07.006514]
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.544 [2024-12-11 09:47:07.006532] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.544 [2024-12-11 09:47:07.019747] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.544 [2024-12-11 09:47:07.019765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.544 [2024-12-11 09:47:07.033191] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.544 [2024-12-11 09:47:07.033214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.544 [2024-12-11 09:47:07.046516] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.544 [2024-12-11 09:47:07.046534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.545 [2024-12-11 09:47:07.060391] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.545 [2024-12-11 09:47:07.060409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.545 [2024-12-11 09:47:07.074536] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.545 [2024-12-11 09:47:07.074554] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.545 [2024-12-11 09:47:07.088132] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.545 [2024-12-11 09:47:07.088150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.545 [2024-12-11 09:47:07.101794] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.545 [2024-12-11 09:47:07.101812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.545 [2024-12-11 09:47:07.115902] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.545 [2024-12-11 09:47:07.115920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.803 [2024-12-11 09:47:07.129856] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.803 [2024-12-11 09:47:07.129874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.803 [2024-12-11 09:47:07.143514] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.803 [2024-12-11 09:47:07.143532] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.803 17192.50 IOPS, 134.32 MiB/s [2024-12-11T08:47:07.378Z] [2024-12-11 09:47:07.157071] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.803 [2024-12-11 09:47:07.157089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.803 [2024-12-11 09:47:07.170753] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.803 [2024-12-11 09:47:07.170772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.803 [2024-12-11 09:47:07.184344] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.803 [2024-12-11 09:47:07.184362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.803 [2024-12-11 09:47:07.197736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:57.803 [2024-12-11 09:47:07.197754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.803 [2024-12-11 09:47:07.211471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.803 [2024-12-11 09:47:07.211488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.803 [2024-12-11 09:47:07.225378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.803 [2024-12-11 09:47:07.225396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.803 [2024-12-11 09:47:07.239164] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.803 [2024-12-11 09:47:07.239182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.803 [2024-12-11 09:47:07.252991] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.803 [2024-12-11 09:47:07.253009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.803 [2024-12-11 09:47:07.266808] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.803 [2024-12-11 09:47:07.266825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.803 [2024-12-11 09:47:07.280353] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.803 [2024-12-11 09:47:07.280381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.803 [2024-12-11 09:47:07.293763] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.803 [2024-12-11 09:47:07.293782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.803 [2024-12-11 09:47:07.307432] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.803 [2024-12-11 09:47:07.307450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.803 [2024-12-11 09:47:07.321005] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.803 [2024-12-11 09:47:07.321023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.803 [2024-12-11 09:47:07.334817] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.803 [2024-12-11 09:47:07.334835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.803 [2024-12-11 09:47:07.348633] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.803 [2024-12-11 09:47:07.348651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.803 [2024-12-11 09:47:07.362260] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.803 [2024-12-11 09:47:07.362278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.803 [2024-12-11 09:47:07.375808] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.803 [2024-12-11 09:47:07.375826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.061 [2024-12-11 09:47:07.389676] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.061 [2024-12-11 09:47:07.389694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.061 [2024-12-11 09:47:07.403740] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.061 [2024-12-11 09:47:07.403759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.061 [2024-12-11 09:47:07.417669] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.061 [2024-12-11 09:47:07.417687] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.061 [2024-12-11 09:47:07.431123] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.061 [2024-12-11 09:47:07.431142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.061 [2024-12-11 09:47:07.444758] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.061 [2024-12-11 09:47:07.444777] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.061 [2024-12-11 09:47:07.458672] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.061 [2024-12-11 09:47:07.458691] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.061 [2024-12-11 09:47:07.472827] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.061 [2024-12-11 09:47:07.472845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.061 [2024-12-11 09:47:07.486504] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.061 [2024-12-11 09:47:07.486521] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.061 [2024-12-11 09:47:07.500793] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.061 [2024-12-11 09:47:07.500811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.061 [2024-12-11 09:47:07.512433] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.061 [2024-12-11 09:47:07.512451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.061 [2024-12-11 09:47:07.527194] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.061 [2024-12-11 09:47:07.527212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.061 [2024-12-11 09:47:07.540690] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.061 [2024-12-11 09:47:07.540708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.061 [2024-12-11 09:47:07.554445] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.061 [2024-12-11 09:47:07.554462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.061 [2024-12-11 09:47:07.568366] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.061 [2024-12-11 09:47:07.568384] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.061 [2024-12-11 09:47:07.582037] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.061 [2024-12-11 09:47:07.582055] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.061 [2024-12-11 09:47:07.596409] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.061 [2024-12-11 09:47:07.596428] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.061 [2024-12-11 09:47:07.609791] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.061 [2024-12-11 09:47:07.609810] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.061 [2024-12-11 09:47:07.623352] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.061 [2024-12-11 09:47:07.623369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.320 [2024-12-11 09:47:07.637097] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.320 [2024-12-11 09:47:07.637116] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.320 [2024-12-11 09:47:07.650871] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.320 [2024-12-11 09:47:07.650888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.320 [2024-12-11 09:47:07.664205] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.320 [2024-12-11 09:47:07.664230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.320 [2024-12-11 09:47:07.677610] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.320 [2024-12-11 09:47:07.677629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.320 [2024-12-11 09:47:07.691562] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.320 [2024-12-11 09:47:07.691581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.320 [2024-12-11 09:47:07.705238] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.320 [2024-12-11 09:47:07.705257] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.320 [2024-12-11 09:47:07.718741] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.320 [2024-12-11 09:47:07.718759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.320 [2024-12-11 09:47:07.732285] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.320 [2024-12-11 09:47:07.732303] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.320 [2024-12-11 09:47:07.745776] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.320 [2024-12-11 09:47:07.745794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.320 [2024-12-11 09:47:07.758949] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.320 [2024-12-11 09:47:07.758967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.321 [2024-12-11 09:47:07.772821] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.321 [2024-12-11 09:47:07.772839] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.321 [2024-12-11 09:47:07.787200] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.321 [2024-12-11 09:47:07.787225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.321 [2024-12-11 09:47:07.801044] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.321 [2024-12-11 09:47:07.801062] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.321 [2024-12-11 09:47:07.814828] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.321 [2024-12-11 09:47:07.814848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.321 [2024-12-11 09:47:07.828619] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.321 [2024-12-11 09:47:07.828638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.321 [2024-12-11 09:47:07.842212] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.321 [2024-12-11 09:47:07.842237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.321 [2024-12-11 09:47:07.855953] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.321 [2024-12-11 09:47:07.855972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.321 [2024-12-11 09:47:07.869895] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.321 [2024-12-11 09:47:07.869913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.321 [2024-12-11 09:47:07.883653] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.321 [2024-12-11 09:47:07.883671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.579 [2024-12-11 09:47:07.897362] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.579 [2024-12-11 09:47:07.897381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.579 [2024-12-11 09:47:07.910917] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.579 [2024-12-11 09:47:07.910935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.579 [2024-12-11 09:47:07.925098] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.579 [2024-12-11 09:47:07.925116] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.579 [2024-12-11 09:47:07.938246] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.579 [2024-12-11 09:47:07.938265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.579 [2024-12-11 09:47:07.951655] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.579 [2024-12-11 09:47:07.951674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.579 [2024-12-11 09:47:07.965038] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.579 [2024-12-11 09:47:07.965056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.579 [2024-12-11 09:47:07.978902] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.579 [2024-12-11 09:47:07.978920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.579 [2024-12-11 09:47:07.992443] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.579 [2024-12-11 09:47:07.992463] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.580 [2024-12-11 09:47:08.005886] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.580 [2024-12-11 09:47:08.005904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.580 [2024-12-11 09:47:08.019633] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.580 [2024-12-11 09:47:08.019652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.580 [2024-12-11 09:47:08.033373] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.580 [2024-12-11 09:47:08.033392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.580 [2024-12-11 09:47:08.047182] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.580 [2024-12-11 09:47:08.047200] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.580 [2024-12-11 09:47:08.060336] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.580 [2024-12-11 09:47:08.060364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.580 [2024-12-11 09:47:08.074193] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.580 [2024-12-11 09:47:08.074212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.580 [2024-12-11 09:47:08.087980] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.580 [2024-12-11 09:47:08.087998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.580 [2024-12-11 09:47:08.101658] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.580 [2024-12-11 09:47:08.101676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.580 [2024-12-11 09:47:08.115626] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.580 [2024-12-11 09:47:08.115645] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.580 [2024-12-11 09:47:08.129485] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.580 [2024-12-11 09:47:08.129505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.580 [2024-12-11 09:47:08.143184] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.580 [2024-12-11 09:47:08.143203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.838 17191.60 IOPS, 134.31 MiB/s [2024-12-11T08:47:08.413Z] [2024-12-11 09:47:08.156350] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.838 [2024-12-11 09:47:08.156369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.838 00:08:58.838 Latency(us) 00:08:58.838 [2024-12-11T08:47:08.413Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:58.838 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:08:58.838 Nvme1n1 : 5.01 17194.94 134.34 0.00 0.00 7436.91 3308.01 14979.66 00:08:58.838 [2024-12-11T08:47:08.413Z] =================================================================================================================== 00:08:58.838 
[2024-12-11T08:47:08.413Z] Total : 17194.94 134.34 0.00 0.00 7436.91 3308.01 14979.66 00:08:58.838 [2024-12-11 09:47:08.165340] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.838 [2024-12-11 09:47:08.165357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.838 [2024-12-11 09:47:08.177366] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.838 [2024-12-11 09:47:08.177380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.838 [2024-12-11 09:47:08.189415] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.838 [2024-12-11 09:47:08.189435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.838 [2024-12-11 09:47:08.201434] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.838 [2024-12-11 09:47:08.201449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.838 [2024-12-11 09:47:08.213466] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.838 [2024-12-11 09:47:08.213480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.838 [2024-12-11 09:47:08.225493] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.838 [2024-12-11 09:47:08.225508] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.838 [2024-12-11 09:47:08.237526] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.838 [2024-12-11 09:47:08.237540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.838 [2024-12-11 09:47:08.249556] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.838 [2024-12-11 09:47:08.249570] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.838 [2024-12-11 09:47:08.261592] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.838 [2024-12-11 09:47:08.261615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.838 [2024-12-11 09:47:08.273618] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.838 [2024-12-11 09:47:08.273628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.838 [2024-12-11 09:47:08.285653] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.838 [2024-12-11 09:47:08.285664] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.838 [2024-12-11 09:47:08.297683] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.838 [2024-12-11 09:47:08.297694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.838 [2024-12-11 09:47:08.309713] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.838 [2024-12-11 09:47:08.309722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (4142796) - No such process 00:08:58.838 09:47:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 4142796 00:08:58.839 09:47:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:58.839 09:47:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.839 09:47:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:58.839 09:47:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.839 09:47:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:58.839 09:47:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.839 09:47:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:58.839 delay0 00:08:58.839 09:47:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.839 09:47:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:58.839 09:47:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.839 09:47:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:58.839 09:47:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.839 09:47:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:08:59.097 [2024-12-11 09:47:08.465361] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:05.668 Initializing NVMe Controllers 00:09:05.668 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:05.668 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:05.668 Initialization complete. Launching workers. 
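For reference, the zcopy sequence traced here (target/zcopy.sh@52-56) amounts to three RPCs plus one example binary. A minimal equivalent sketch, assuming a target already serving nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420 with a bdev named malloc0, and that rpc_cmd is a thin wrapper over scripts/rpc.py:

  # Free NSID 1, then re-add it backed by a delay bdev that adds ~1 s
  # (1,000,000 us) to average and p99 read/write latency.
  rpc=scripts/rpc.py
  $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  $rpc bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # Queue I/O against the now-slow namespace for 5 s at QD 64, 50% read/write,
  # and abort it in flight.
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'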
00:09:05.668 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 3850 00:09:05.668 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 4134, failed to submit 36 00:09:05.668 success 3937, unsuccessful 197, failed 0 00:09:05.668 09:47:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:05.668 09:47:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:05.668 09:47:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:05.668 09:47:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:05.668 09:47:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:05.668 09:47:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:05.668 09:47:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:05.669 09:47:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:05.669 rmmod nvme_tcp 00:09:05.669 rmmod nvme_fabrics 00:09:05.669 rmmod nvme_keyring 00:09:05.669 09:47:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:05.669 09:47:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:05.669 09:47:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:05.669 09:47:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 4140960 ']' 00:09:05.669 09:47:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 4140960 00:09:05.669 09:47:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 4140960 ']' 00:09:05.669 09:47:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 4140960 00:09:05.669 09:47:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:09:05.669 09:47:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:05.669 09:47:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4140960 00:09:05.669 09:47:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:05.669 09:47:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:05.669 09:47:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4140960' 00:09:05.669 killing process with pid 4140960 00:09:05.669 09:47:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 4140960 00:09:05.669 09:47:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 4140960 00:09:05.928 09:47:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:05.928 09:47:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:05.928 09:47:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:05.928 09:47:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:05.928 09:47:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:05.928 09:47:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:05.928 09:47:15 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:05.928 09:47:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:05.928 09:47:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:05.928 09:47:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:05.928 09:47:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:05.928 09:47:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:07.832 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:07.832 00:09:07.832 real 0m32.709s 00:09:07.832 user 0m42.914s 00:09:07.832 sys 0m11.945s 00:09:07.832 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:07.832 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:07.832 ************************************ 00:09:07.832 END TEST nvmf_zcopy 00:09:07.832 ************************************ 00:09:08.092 09:47:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:08.092 09:47:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:08.092 09:47:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:08.092 09:47:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:08.092 ************************************ 00:09:08.092 START TEST nvmf_nmic 00:09:08.092 ************************************ 00:09:08.092 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:08.092 * Looking for test storage... 
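The abort statistics above are internally consistent: 320 completed plus 3850 failed is 4170 I/Os, matching the 4134 abort commands submitted plus the 36 that failed to submit; of the submitted aborts, 3937 succeeded and 197 did not. The nvmftestfini teardown above reduces to a handful of commands; a minimal sketch, assuming remove_spdk_ns simply deletes the cvl_0_0_ns_spdk namespace (its body is not expanded in this trace):

  # Unload the kernel NVMe-oF initiator modules (rmmod nvme_tcp/nvme_fabrics/nvme_keyring above).
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  # killprocess: signal the nvmf_tgt pid recorded at startup (4140960 above) and reap it.
  kill "$pid" && wait "$pid"
  # iptr: restore the iptables ruleset minus every rule tagged with the SPDK_NVMF comment.
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  # Assumed body of remove_spdk_ns: drop the target namespace, then release the
  # initiator-side address (the flush is logged verbatim above).
  ip netns delete cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_1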
00:09:08.092 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:08.092 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:08.092 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:09:08.092 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:08.092 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:08.092 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:08.092 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:08.092 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:08.092 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:08.092 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:08.092 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:08.092 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:08.092 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:08.092 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:08.092 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:08.092 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:08.092 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:08.092 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:08.092 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:08.092 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:08.092 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:08.092 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:08.092 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:08.092 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:08.092 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:08.092 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:08.092 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:08.092 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:08.093 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:08.093 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:08.093 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:08.093 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:08.093 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:08.093 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:08.093 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:08.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.093 --rc genhtml_branch_coverage=1 00:09:08.093 --rc genhtml_function_coverage=1 00:09:08.093 --rc genhtml_legend=1 00:09:08.093 --rc geninfo_all_blocks=1 00:09:08.093 --rc geninfo_unexecuted_blocks=1 00:09:08.093 00:09:08.093 ' 00:09:08.093 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:08.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.093 --rc genhtml_branch_coverage=1 00:09:08.093 --rc genhtml_function_coverage=1 00:09:08.093 --rc genhtml_legend=1 00:09:08.093 --rc geninfo_all_blocks=1 00:09:08.093 --rc geninfo_unexecuted_blocks=1 00:09:08.093 00:09:08.093 ' 00:09:08.093 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:08.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.093 --rc genhtml_branch_coverage=1 00:09:08.093 --rc genhtml_function_coverage=1 00:09:08.093 --rc genhtml_legend=1 00:09:08.093 --rc geninfo_all_blocks=1 00:09:08.093 --rc geninfo_unexecuted_blocks=1 00:09:08.093 00:09:08.093 ' 00:09:08.093 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:08.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.093 --rc genhtml_branch_coverage=1 00:09:08.093 --rc genhtml_function_coverage=1 00:09:08.093 --rc genhtml_legend=1 00:09:08.093 --rc geninfo_all_blocks=1 00:09:08.093 --rc geninfo_unexecuted_blocks=1 00:09:08.093 00:09:08.093 ' 00:09:08.093 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:08.093 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:08.093 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
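The lt 1.15 2 check traced above is scripts/common.sh's component-wise version comparison: both strings are split on '.', '-' and ':' and compared numerically, left to right, so lcov 1.15 sorts below 2 and the lcov 1.x branch/function coverage flags get enabled. A simplified sketch of that logic (numeric components only; the real helper also validates each field with a decimal regex):

  lt() {
      local -a ver1 ver2
      local v
      IFS='.-:' read -ra ver1 <<< "$1"
      IFS='.-:' read -ra ver2 <<< "$2"
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
          ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1   # left side newer: not less-than
          ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0   # left side older: less-than
      done
      return 1   # equal: not less-than
  }
  # Same shape as the trace: extract the last field of `lcov --version`, compare against 2.
  lt "$(lcov --version | awk '{print $NF}')" 2 && echo 'enable lcov 1.x coverage flags'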
00:09:08.093 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:08.093 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:08.093 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:08.093 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:08.093 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:08.093 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:08.093 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:08.093 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:08.093 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:08.093 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:09:08.093 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:09:08.093 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:08.093 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:08.093 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:08.093 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:08.093 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:08.093 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:08.093 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:08.093 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:08.093 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:08.093 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.093 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.093 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.093 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:08.093 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.093 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:08.093 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:08.093 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:08.093 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:08.093 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:08.093 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:08.093 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:08.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:08.093 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:08.093 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:08.093 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:08.352 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:08.352 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:08.352 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:08.352 
09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:08.352 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:08.352 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:08.352 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:08.352 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:08.352 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.352 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:08.352 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.352 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:08.352 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:08.352 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:08.352 09:47:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:14.921 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:14.921 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:14.921 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:14.921 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:14.921 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:14.921 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:14.921 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:14.921 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:14.921 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:14.921 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:14.921 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:14.921 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:14.921 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:14.921 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:14.921 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:14.921 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:14.921 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:14.921 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:14.921 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:14.921 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:14.921 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:14.922 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:14.922 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:14.922 09:47:24 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:14.922 Found net devices under 0000:af:00.0: cvl_0_0 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:14.922 Found net devices under 0000:af:00.1: cvl_0_1 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:14.922 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:14.922 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.376 ms 00:09:14.922 00:09:14.922 --- 10.0.0.2 ping statistics --- 00:09:14.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.922 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:14.922 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:14.922 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:09:14.922 00:09:14.922 --- 10.0.0.1 ping statistics --- 00:09:14.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.922 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=4148631 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 4148631 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 4148631 ']' 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:14.922 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:14.922 [2024-12-11 09:47:24.471493] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
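The trace just above is nvmf/common.sh's nvmf_tcp_init assembling the two-port test network before the target app comes up: the target-side e810 port is moved into a private network namespace, each side gets a /24 address, an iptables ACCEPT opens TCP port 4420, and a ping in each direction acts as a smoke test. A minimal standalone sketch of that sequence, using the cvl_0_0/cvl_0_1 names from this run (the rationale in the first comment is an editorial assumption, not stated in the log):

    # target side gets its own namespace, presumably so both ends of the
    # TCP connection can share one host without the kernel short-circuiting
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # open the NVMe/TCP port; the SPDK_NVMF comment tags the rule so
    # teardown can strip it with iptables-save | grep -v | iptables-restore
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    ping -c 1 10.0.0.2                                  # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator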
00:09:14.922 [2024-12-11 09:47:24.471543] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:15.181 [2024-12-11 09:47:24.557453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:15.182 [2024-12-11 09:47:24.599919] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:15.182 [2024-12-11 09:47:24.599955] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:15.182 [2024-12-11 09:47:24.599962] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:15.182 [2024-12-11 09:47:24.599968] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:15.182 [2024-12-11 09:47:24.599973] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:15.182 [2024-12-11 09:47:24.601577] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:15.182 [2024-12-11 09:47:24.601683] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:09:15.182 [2024-12-11 09:47:24.601788] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.182 [2024-12-11 09:47:24.601789] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:09:15.182 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:15.182 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:09:15.182 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:15.182 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:15.182 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:15.182 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:15.182 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:15.182 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.182 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:15.182 [2024-12-11 09:47:24.739579] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:15.182 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.182 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:15.182 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.182 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:15.441 Malloc0 00:09:15.441 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.441 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:15.441 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.441 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:09:15.441 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.441 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:15.441 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.441 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:15.441 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.441 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:15.441 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.441 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:15.441 [2024-12-11 09:47:24.811033] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:15.441 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.441 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:15.441 test case1: single bdev can't be used in multiple subsystems 00:09:15.441 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:15.441 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.441 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:15.441 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.441 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:15.441 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.441 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:15.441 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.441 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:15.441 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:15.441 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.441 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:15.441 [2024-12-11 09:47:24.838954] bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:15.441 [2024-12-11 09:47:24.838974] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:15.441 [2024-12-11 09:47:24.838981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.441 request: 00:09:15.441 { 00:09:15.441 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:15.441 "namespace": { 00:09:15.441 "bdev_name": "Malloc0", 00:09:15.441 "no_auto_visible": false, 
00:09:15.441 "hide_metadata": false 00:09:15.441 }, 00:09:15.441 "method": "nvmf_subsystem_add_ns", 00:09:15.441 "req_id": 1 00:09:15.441 } 00:09:15.441 Got JSON-RPC error response 00:09:15.441 response: 00:09:15.441 { 00:09:15.441 "code": -32602, 00:09:15.441 "message": "Invalid parameters" 00:09:15.441 } 00:09:15.441 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:15.441 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:15.441 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:15.441 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:15.441 Adding namespace failed - expected result. 00:09:15.441 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:15.441 test case2: host connect to nvmf target in multiple paths 00:09:15.441 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:15.441 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.441 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:15.441 [2024-12-11 09:47:24.851093] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:15.441 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.441 09:47:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:16.593 09:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:17.968 09:47:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:17.968 09:47:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:09:17.968 09:47:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:17.968 09:47:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:17.968 09:47:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:09:19.882 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:19.882 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:19.882 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:19.882 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:19.882 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:19.882 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:09:19.882 09:47:29 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:19.882 [global] 00:09:19.882 thread=1 00:09:19.882 invalidate=1 00:09:19.882 rw=write 00:09:19.882 time_based=1 00:09:19.882 runtime=1 00:09:19.882 ioengine=libaio 00:09:19.882 direct=1 00:09:19.882 bs=4096 00:09:19.882 iodepth=1 00:09:19.882 norandommap=0 00:09:19.882 numjobs=1 00:09:19.882 00:09:19.882 verify_dump=1 00:09:19.882 verify_backlog=512 00:09:19.882 verify_state_save=0 00:09:19.882 do_verify=1 00:09:19.882 verify=crc32c-intel 00:09:19.882 [job0] 00:09:19.882 filename=/dev/nvme0n1 00:09:19.882 Could not set queue depth (nvme0n1) 00:09:20.139 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:20.139 fio-3.35 00:09:20.139 Starting 1 thread 00:09:21.510 00:09:21.510 job0: (groupid=0, jobs=1): err= 0: pid=4149701: Wed Dec 11 09:47:30 2024 00:09:21.510 read: IOPS=24, BW=98.3KiB/s (101kB/s)(100KiB/1017msec) 00:09:21.510 slat (nsec): min=8738, max=27715, avg=21457.32, stdev=4798.25 00:09:21.510 clat (usec): min=361, max=41114, avg=37677.59, stdev=11224.08 00:09:21.510 lat (usec): min=386, max=41135, avg=37699.05, stdev=11222.66 00:09:21.510 clat percentiles (usec): 00:09:21.510 | 1.00th=[ 363], 5.00th=[ 412], 10.00th=[40633], 20.00th=[40633], 00:09:21.510 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:21.510 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:21.510 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:21.510 | 99.99th=[41157] 00:09:21.510 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:09:21.510 slat (nsec): min=8874, max=45226, avg=10443.30, stdev=1943.43 00:09:21.510 clat (usec): min=114, max=363, avg=131.38, stdev=17.31 00:09:21.510 lat (usec): min=124, max=408, avg=141.82, stdev=18.43 00:09:21.510 clat percentiles (usec): 00:09:21.510 | 1.00th=[ 119], 5.00th=[ 121], 10.00th=[ 122], 20.00th=[ 123], 00:09:21.510 | 30.00th=[ 125], 40.00th=[ 126], 50.00th=[ 127], 60.00th=[ 128], 00:09:21.510 | 70.00th=[ 130], 80.00th=[ 135], 90.00th=[ 155], 95.00th=[ 163], 00:09:21.510 | 99.00th=[ 176], 99.50th=[ 217], 99.90th=[ 363], 99.95th=[ 363], 00:09:21.510 | 99.99th=[ 363] 00:09:21.510 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:21.510 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:21.510 lat (usec) : 250=95.16%, 500=0.56% 00:09:21.510 lat (msec) : 50=4.28% 00:09:21.511 cpu : usr=0.20%, sys=0.59%, ctx=537, majf=0, minf=1 00:09:21.511 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:21.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.511 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.511 issued rwts: total=25,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:21.511 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:21.511 00:09:21.511 Run status group 0 (all jobs): 00:09:21.511 READ: bw=98.3KiB/s (101kB/s), 98.3KiB/s-98.3KiB/s (101kB/s-101kB/s), io=100KiB (102kB), run=1017-1017msec 00:09:21.511 WRITE: bw=2014KiB/s (2062kB/s), 2014KiB/s-2014KiB/s (2062kB/s-2062kB/s), io=2048KiB (2097kB), run=1017-1017msec 00:09:21.511 00:09:21.511 Disk stats (read/write): 00:09:21.511 nvme0n1: ios=72/512, merge=0/0, ticks=840/66, in_queue=906, util=91.38% 00:09:21.511 09:47:30 
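The fio-wrapper job whose output appears above is a one-second, 4 KiB, queue-depth-1 sequential write with CRC32C verification against the freshly connected namespace. Roughly the same job as a direct fio command line, assuming /dev/nvme0n1 still points at the SPDK-backed namespace:

    fio --name=job0 --filename=/dev/nvme0n1 \
        --ioengine=libaio --direct=1 --thread=1 --invalidate=1 \
        --rw=write --bs=4096 --iodepth=1 --numjobs=1 \
        --time_based=1 --runtime=1 --norandommap=0 \
        --verify=crc32c-intel --do_verify=1 --verify_dump=1 \
        --verify_backlog=512 --verify_state_save=0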
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:21.511 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:21.511 09:47:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:21.511 09:47:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:09:21.511 09:47:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:21.511 09:47:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:21.511 09:47:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:21.511 09:47:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:21.511 09:47:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:09:21.511 09:47:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:21.511 09:47:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:21.511 09:47:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:21.511 09:47:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:21.511 09:47:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:21.511 09:47:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:21.511 09:47:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:21.511 09:47:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:21.511 rmmod nvme_tcp 00:09:21.511 rmmod nvme_fabrics 00:09:21.511 rmmod nvme_keyring 00:09:21.511 09:47:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:21.511 09:47:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:21.511 09:47:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:21.511 09:47:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 4148631 ']' 00:09:21.511 09:47:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 4148631 00:09:21.511 09:47:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 4148631 ']' 00:09:21.511 09:47:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 4148631 00:09:21.511 09:47:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:09:21.511 09:47:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:21.511 09:47:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4148631 00:09:21.511 09:47:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:21.511 09:47:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:21.511 09:47:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4148631' 00:09:21.511 killing process with pid 4148631 00:09:21.511 09:47:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 4148631 00:09:21.511 09:47:31 
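Teardown, which finishes just below with the wait, the iptables restore, and the namespace removal, mirrors the setup. In outline, with this run's pid and interface names (the last step is inferred from _remove_spdk_ns, whose exact command is not shown verbatim):

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1          # drops both paths
    modprobe -v -r nvme-tcp                                # rmmods nvme_tcp/nvme_fabrics/nvme_keyring
    kill 4148631 && wait 4148631                           # nvmf_tgt pid from nvmfpid above
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip only the tagged SPDK rules
    ip -4 addr flush cvl_0_1
    # _remove_spdk_ns presumably also deletes the cvl_0_0_ns_spdk namespace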
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 4148631 00:09:21.770 09:47:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:21.770 09:47:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:21.770 09:47:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:21.770 09:47:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:21.770 09:47:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:21.770 09:47:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:21.770 09:47:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:21.770 09:47:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:21.770 09:47:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:21.770 09:47:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:21.770 09:47:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:21.770 09:47:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.304 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:24.304 00:09:24.304 real 0m15.824s 00:09:24.304 user 0m34.106s 00:09:24.304 sys 0m5.818s 00:09:24.304 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:24.304 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:24.304 ************************************ 00:09:24.304 END TEST nvmf_nmic 00:09:24.304 ************************************ 00:09:24.304 09:47:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:24.304 09:47:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:24.304 09:47:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:24.304 09:47:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:24.304 ************************************ 00:09:24.304 START TEST nvmf_fio_target 00:09:24.304 ************************************ 00:09:24.304 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:24.304 * Looking for test storage... 
00:09:24.304 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:24.304 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:24.304 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:09:24.304 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:24.304 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:24.304 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:24.304 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:24.304 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:24.304 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:24.304 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:24.304 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:24.304 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:24.304 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:24.304 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:24.304 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:24.304 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:24.304 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:24.304 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:24.304 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:24.304 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:24.304 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:24.304 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:24.304 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:24.304 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:24.304 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:24.304 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:24.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.305 --rc genhtml_branch_coverage=1 00:09:24.305 --rc genhtml_function_coverage=1 00:09:24.305 --rc genhtml_legend=1 00:09:24.305 --rc geninfo_all_blocks=1 00:09:24.305 --rc geninfo_unexecuted_blocks=1 00:09:24.305 00:09:24.305 ' 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:24.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.305 --rc genhtml_branch_coverage=1 00:09:24.305 --rc genhtml_function_coverage=1 00:09:24.305 --rc genhtml_legend=1 00:09:24.305 --rc geninfo_all_blocks=1 00:09:24.305 --rc geninfo_unexecuted_blocks=1 00:09:24.305 00:09:24.305 ' 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:24.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.305 --rc genhtml_branch_coverage=1 00:09:24.305 --rc genhtml_function_coverage=1 00:09:24.305 --rc genhtml_legend=1 00:09:24.305 --rc geninfo_all_blocks=1 00:09:24.305 --rc geninfo_unexecuted_blocks=1 00:09:24.305 00:09:24.305 ' 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:24.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.305 --rc genhtml_branch_coverage=1 00:09:24.305 --rc genhtml_function_coverage=1 00:09:24.305 --rc genhtml_legend=1 00:09:24.305 --rc geninfo_all_blocks=1 00:09:24.305 --rc geninfo_unexecuted_blocks=1 00:09:24.305 00:09:24.305 ' 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:24.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:24.305 09:47:33 
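The lt/cmp_versions trace a few entries up is scripts/common.sh gating on the installed lcov version: each version string is split on '.', '-' and ':' and compared component-wise, with missing components treated as zero. A condensed reconstruction of that logic (a sketch, not the exact upstream source):

    cmp_versions() {
        local -a ver1 ver2
        local v op=$2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        # walk the longer of the two component lists
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            if ((${ver1[v]:-0} > ${ver2[v]:-0})); then [[ $op == '>' ]]; return; fi
            if ((${ver1[v]:-0} < ${ver2[v]:-0})); then [[ $op == '<' ]]; return; fi
        done
        [[ $op == '=' ]]    # all components equal
    }
    lt() { cmp_versions "$1" '<' "$2"; }    # e.g. lt 1.15 2 -> true, as traced above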
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:24.305 09:47:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:30.877 09:47:40 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:30.877 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:30.877 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:30.877 09:47:40 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:30.877 Found net devices under 0000:af:00.0: cvl_0_0 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:30.877 Found net devices under 0000:af:00.1: cvl_0_1 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:30.877 09:47:40 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:30.877 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:30.878 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:30.878 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:30.878 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:30.878 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:30.878 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:30.878 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:30.878 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:30.878 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:30.878 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:30.878 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:30.878 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:30.878 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.371 ms 00:09:30.878 00:09:30.878 --- 10.0.0.2 ping statistics --- 00:09:30.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.878 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:09:30.878 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:30.878 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:30.878 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:09:30.878 00:09:30.878 --- 10.0.0.1 ping statistics --- 00:09:30.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.878 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:09:30.878 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:30.878 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:09:30.878 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:30.878 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:30.878 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:30.878 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:30.878 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:30.878 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:30.878 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:30.878 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:30.878 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:30.878 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:30.878 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:30.878 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=4153936 00:09:30.878 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 4153936 00:09:30.878 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:30.878 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 4153936 ']' 00:09:30.878 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.878 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:30.878 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.878 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:30.878 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:30.878 [2024-12-11 09:47:40.385647] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
00:09:30.878 [2024-12-11 09:47:40.385689] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:31.136 [2024-12-11 09:47:40.468726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:31.136 [2024-12-11 09:47:40.511202] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:31.136 [2024-12-11 09:47:40.511243] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:31.136 [2024-12-11 09:47:40.511251] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:31.136 [2024-12-11 09:47:40.511257] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:31.136 [2024-12-11 09:47:40.511262] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:31.136 [2024-12-11 09:47:40.512664] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.136 [2024-12-11 09:47:40.512772] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:09:31.136 [2024-12-11 09:47:40.512858] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.136 [2024-12-11 09:47:40.512858] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:09:31.701 09:47:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:31.701 09:47:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:09:31.701 09:47:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:31.701 09:47:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:31.701 09:47:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:31.701 09:47:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:31.701 09:47:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:31.958 [2024-12-11 09:47:41.436690] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:31.958 09:47:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:32.215 09:47:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:32.215 09:47:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:32.473 09:47:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:32.473 09:47:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:32.731 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:32.731 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:32.731 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:32.731 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:32.988 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:33.246 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:33.246 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:33.504 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:33.504 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:33.761 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:33.761 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:34.018 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:34.018 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:34.018 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:34.276 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:34.276 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:34.533 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:34.790 [2024-12-11 09:47:44.110669] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:34.790 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:34.790 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:35.047 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:36.417 09:47:45 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:36.417 09:47:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:36.417 09:47:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:36.417 09:47:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:36.417 09:47:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:36.417 09:47:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:38.314 09:47:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:38.314 09:47:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:38.314 09:47:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:38.314 09:47:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:38.314 09:47:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:38.314 09:47:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:38.314 09:47:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:38.314 [global] 00:09:38.314 thread=1 00:09:38.314 invalidate=1 00:09:38.314 rw=write 00:09:38.314 time_based=1 00:09:38.314 runtime=1 00:09:38.314 ioengine=libaio 00:09:38.314 direct=1 00:09:38.314 bs=4096 00:09:38.314 iodepth=1 00:09:38.314 norandommap=0 00:09:38.314 numjobs=1 00:09:38.314 00:09:38.314 verify_dump=1 00:09:38.314 verify_backlog=512 00:09:38.314 verify_state_save=0 00:09:38.314 do_verify=1 00:09:38.314 verify=crc32c-intel 00:09:38.314 [job0] 00:09:38.314 filename=/dev/nvme0n1 00:09:38.314 [job1] 00:09:38.314 filename=/dev/nvme0n2 00:09:38.314 [job2] 00:09:38.314 filename=/dev/nvme0n3 00:09:38.314 [job3] 00:09:38.314 filename=/dev/nvme0n4 00:09:38.314 Could not set queue depth (nvme0n1) 00:09:38.314 Could not set queue depth (nvme0n2) 00:09:38.314 Could not set queue depth (nvme0n3) 00:09:38.314 Could not set queue depth (nvme0n4) 00:09:38.571 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:38.571 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:38.571 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:38.571 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:38.571 fio-3.35 00:09:38.571 Starting 4 threads 00:09:39.941 00:09:39.941 job0: (groupid=0, jobs=1): err= 0: pid=4155283: Wed Dec 11 09:47:49 2024 00:09:39.941 read: IOPS=22, BW=88.7KiB/s (90.8kB/s)(92.0KiB/1037msec) 00:09:39.941 slat (nsec): min=9954, max=23459, avg=22212.43, stdev=2681.26 00:09:39.941 clat (usec): min=40658, max=42086, avg=41264.99, stdev=492.64 00:09:39.941 lat (usec): min=40668, max=42109, avg=41287.20, stdev=493.34 00:09:39.941 clat percentiles (usec): 00:09:39.941 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 
20.00th=[41157], 00:09:39.941 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:39.941 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:39.941 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:39.941 | 99.99th=[42206] 00:09:39.941 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:09:39.941 slat (nsec): min=9496, max=63594, avg=11113.91, stdev=3545.18 00:09:39.941 clat (usec): min=123, max=269, avg=156.46, stdev=14.50 00:09:39.941 lat (usec): min=135, max=315, avg=167.57, stdev=16.07 00:09:39.941 clat percentiles (usec): 00:09:39.941 | 1.00th=[ 133], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 145], 00:09:39.941 | 30.00th=[ 149], 40.00th=[ 151], 50.00th=[ 155], 60.00th=[ 159], 00:09:39.941 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 176], 95.00th=[ 182], 00:09:39.941 | 99.00th=[ 194], 99.50th=[ 202], 99.90th=[ 269], 99.95th=[ 269], 00:09:39.941 | 99.99th=[ 269] 00:09:39.941 bw ( KiB/s): min= 4096, max= 4096, per=16.33%, avg=4096.00, stdev= 0.00, samples=1 00:09:39.941 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:39.941 lat (usec) : 250=95.51%, 500=0.19% 00:09:39.941 lat (msec) : 50=4.30% 00:09:39.941 cpu : usr=0.19%, sys=0.58%, ctx=536, majf=0, minf=1 00:09:39.941 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:39.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.941 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.941 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:39.941 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:39.941 job1: (groupid=0, jobs=1): err= 0: pid=4155287: Wed Dec 11 09:47:49 2024 00:09:39.941 read: IOPS=2126, BW=8507KiB/s (8712kB/s)(8516KiB/1001msec) 00:09:39.941 slat (nsec): min=7172, max=40773, avg=8174.29, stdev=1335.87 00:09:39.941 clat (usec): min=166, max=491, avg=229.16, stdev=29.13 00:09:39.941 lat (usec): min=173, max=499, avg=237.33, stdev=29.14 00:09:39.941 clat percentiles (usec): 00:09:39.941 | 1.00th=[ 180], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 202], 00:09:39.941 | 30.00th=[ 210], 40.00th=[ 221], 50.00th=[ 233], 60.00th=[ 239], 00:09:39.941 | 70.00th=[ 245], 80.00th=[ 251], 90.00th=[ 260], 95.00th=[ 265], 00:09:39.941 | 99.00th=[ 314], 99.50th=[ 359], 99.90th=[ 482], 99.95th=[ 486], 00:09:39.941 | 99.99th=[ 490] 00:09:39.941 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:39.941 slat (usec): min=10, max=37714, avg=27.22, stdev=745.16 00:09:39.941 clat (usec): min=108, max=328, avg=160.49, stdev=29.41 00:09:39.941 lat (usec): min=122, max=37967, avg=187.71, stdev=747.56 00:09:39.941 clat percentiles (usec): 00:09:39.941 | 1.00th=[ 125], 5.00th=[ 131], 10.00th=[ 137], 20.00th=[ 141], 00:09:39.941 | 30.00th=[ 145], 40.00th=[ 149], 50.00th=[ 153], 60.00th=[ 159], 00:09:39.941 | 70.00th=[ 165], 80.00th=[ 172], 90.00th=[ 190], 95.00th=[ 243], 00:09:39.941 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 289], 99.95th=[ 297], 00:09:39.941 | 99.99th=[ 330] 00:09:39.941 bw ( KiB/s): min= 9960, max= 9960, per=39.71%, avg=9960.00, stdev= 0.00, samples=1 00:09:39.941 iops : min= 2490, max= 2490, avg=2490.00, stdev= 0.00, samples=1 00:09:39.941 lat (usec) : 250=88.08%, 500=11.92% 00:09:39.941 cpu : usr=4.20%, sys=7.40%, ctx=4692, majf=0, minf=1 00:09:39.941 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:39.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.941 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.941 issued rwts: total=2129,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:39.941 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:39.941 job2: (groupid=0, jobs=1): err= 0: pid=4155305: Wed Dec 11 09:47:49 2024 00:09:39.941 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:09:39.941 slat (nsec): min=2261, max=23568, avg=4051.17, stdev=2605.28 00:09:39.941 clat (usec): min=170, max=284, avg=212.79, stdev=16.12 00:09:39.941 lat (usec): min=173, max=288, avg=216.85, stdev=17.39 00:09:39.941 clat percentiles (usec): 00:09:39.941 | 1.00th=[ 182], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 200], 00:09:39.941 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 217], 00:09:39.941 | 70.00th=[ 221], 80.00th=[ 227], 90.00th=[ 233], 95.00th=[ 241], 00:09:39.941 | 99.00th=[ 258], 99.50th=[ 265], 99.90th=[ 281], 99.95th=[ 281], 00:09:39.941 | 99.99th=[ 285] 00:09:39.941 write: IOPS=2915, BW=11.4MiB/s (11.9MB/s)(11.4MiB/1001msec); 0 zone resets 00:09:39.941 slat (usec): min=3, max=677, avg= 6.47, stdev=13.26 00:09:39.941 clat (usec): min=104, max=1241, avg=143.44, stdev=36.14 00:09:39.941 lat (usec): min=108, max=1245, avg=149.91, stdev=40.19 00:09:39.941 clat percentiles (usec): 00:09:39.941 | 1.00th=[ 114], 5.00th=[ 118], 10.00th=[ 122], 20.00th=[ 127], 00:09:39.941 | 30.00th=[ 131], 40.00th=[ 135], 50.00th=[ 139], 60.00th=[ 145], 00:09:39.941 | 70.00th=[ 151], 80.00th=[ 157], 90.00th=[ 169], 95.00th=[ 178], 00:09:39.941 | 99.00th=[ 194], 99.50th=[ 212], 99.90th=[ 865], 99.95th=[ 971], 00:09:39.941 | 99.99th=[ 1237] 00:09:39.941 bw ( KiB/s): min=12288, max=12288, per=49.00%, avg=12288.00, stdev= 0.00, samples=1 00:09:39.941 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:39.941 lat (usec) : 250=98.69%, 500=1.22%, 750=0.04%, 1000=0.04% 00:09:39.941 lat (msec) : 2=0.02% 00:09:39.941 cpu : usr=2.30%, sys=4.20%, ctx=5481, majf=0, minf=1 00:09:39.941 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:39.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.941 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.941 issued rwts: total=2560,2918,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:39.941 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:39.941 job3: (groupid=0, jobs=1): err= 0: pid=4155319: Wed Dec 11 09:47:49 2024 00:09:39.941 read: IOPS=20, BW=83.9KiB/s (85.9kB/s)(84.0KiB/1001msec) 00:09:39.941 slat (nsec): min=9932, max=25267, avg=22864.81, stdev=3040.79 00:09:39.941 clat (usec): min=40893, max=41273, avg=40982.60, stdev=76.44 00:09:39.941 lat (usec): min=40917, max=41283, avg=41005.47, stdev=73.95 00:09:39.941 clat percentiles (usec): 00:09:39.941 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:09:39.941 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:39.941 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:39.941 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:39.941 | 99.99th=[41157] 00:09:39.941 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:09:39.941 slat (usec): min=10, max=37987, avg=86.55, stdev=1678.27 00:09:39.941 clat (usec): min=129, max=302, avg=182.17, stdev=36.23 00:09:39.941 lat (usec): min=141, max=38290, avg=268.72, stdev=1684.01 00:09:39.941 clat percentiles 
(usec): 00:09:39.941 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 153], 00:09:39.941 | 30.00th=[ 157], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 174], 00:09:39.941 | 70.00th=[ 186], 80.00th=[ 237], 90.00th=[ 241], 95.00th=[ 243], 00:09:39.941 | 99.00th=[ 249], 99.50th=[ 262], 99.90th=[ 302], 99.95th=[ 302], 00:09:39.941 | 99.99th=[ 302] 00:09:39.941 bw ( KiB/s): min= 4096, max= 4096, per=16.33%, avg=4096.00, stdev= 0.00, samples=1 00:09:39.941 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:39.941 lat (usec) : 250=95.31%, 500=0.75% 00:09:39.941 lat (msec) : 50=3.94% 00:09:39.941 cpu : usr=0.60%, sys=0.70%, ctx=536, majf=0, minf=1 00:09:39.941 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:39.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.941 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.941 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:39.941 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:39.941 00:09:39.941 Run status group 0 (all jobs): 00:09:39.941 READ: bw=17.8MiB/s (18.7MB/s), 83.9KiB/s-9.99MiB/s (85.9kB/s-10.5MB/s), io=18.5MiB (19.4MB), run=1001-1037msec 00:09:39.941 WRITE: bw=24.5MiB/s (25.7MB/s), 1975KiB/s-11.4MiB/s (2022kB/s-11.9MB/s), io=25.4MiB (26.6MB), run=1001-1037msec 00:09:39.941 00:09:39.941 Disk stats (read/write): 00:09:39.941 nvme0n1: ios=72/512, merge=0/0, ticks=925/75, in_queue=1000, util=83.72% 00:09:39.941 nvme0n2: ios=1648/2048, merge=0/0, ticks=1256/306, in_queue=1562, util=90.79% 00:09:39.941 nvme0n3: ios=2110/2319, merge=0/0, ticks=578/318, in_queue=896, util=96.12% 00:09:39.941 nvme0n4: ios=36/512, merge=0/0, ticks=1485/89, in_queue=1574, util=99.77% 00:09:39.941 09:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:39.941 [global] 00:09:39.941 thread=1 00:09:39.941 invalidate=1 00:09:39.941 rw=randwrite 00:09:39.941 time_based=1 00:09:39.941 runtime=1 00:09:39.941 ioengine=libaio 00:09:39.941 direct=1 00:09:39.941 bs=4096 00:09:39.941 iodepth=1 00:09:39.941 norandommap=0 00:09:39.941 numjobs=1 00:09:39.941 00:09:39.941 verify_dump=1 00:09:39.941 verify_backlog=512 00:09:39.941 verify_state_save=0 00:09:39.941 do_verify=1 00:09:39.941 verify=crc32c-intel 00:09:39.941 [job0] 00:09:39.941 filename=/dev/nvme0n1 00:09:39.941 [job1] 00:09:39.941 filename=/dev/nvme0n2 00:09:39.941 [job2] 00:09:39.941 filename=/dev/nvme0n3 00:09:39.941 [job3] 00:09:39.941 filename=/dev/nvme0n4 00:09:40.198 Could not set queue depth (nvme0n1) 00:09:40.198 Could not set queue depth (nvme0n2) 00:09:40.198 Could not set queue depth (nvme0n3) 00:09:40.198 Could not set queue depth (nvme0n4) 00:09:40.454 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:40.454 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:40.454 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:40.454 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:40.454 fio-3.35 00:09:40.454 Starting 4 threads 00:09:41.822 00:09:41.822 job0: (groupid=0, jobs=1): err= 0: pid=4155773: Wed Dec 11 09:47:50 2024 00:09:41.822 read: IOPS=2235, BW=8943KiB/s 
(9158kB/s)(8952KiB/1001msec) 00:09:41.822 slat (nsec): min=7336, max=42263, avg=8485.84, stdev=1653.78 00:09:41.822 clat (usec): min=180, max=577, avg=234.75, stdev=40.67 00:09:41.822 lat (usec): min=188, max=588, avg=243.24, stdev=40.90 00:09:41.822 clat percentiles (usec): 00:09:41.822 | 1.00th=[ 190], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 210], 00:09:41.822 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 227], 60.00th=[ 233], 00:09:41.822 | 70.00th=[ 239], 80.00th=[ 249], 90.00th=[ 277], 95.00th=[ 293], 00:09:41.822 | 99.00th=[ 457], 99.50th=[ 478], 99.90th=[ 537], 99.95th=[ 562], 00:09:41.822 | 99.99th=[ 578] 00:09:41.822 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:41.822 slat (nsec): min=5073, max=44269, avg=10745.05, stdev=2799.10 00:09:41.822 clat (usec): min=120, max=318, avg=161.37, stdev=16.98 00:09:41.822 lat (usec): min=125, max=331, avg=172.11, stdev=18.10 00:09:41.822 clat percentiles (usec): 00:09:41.822 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 149], 00:09:41.822 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 163], 00:09:41.822 | 70.00th=[ 167], 80.00th=[ 174], 90.00th=[ 182], 95.00th=[ 190], 00:09:41.822 | 99.00th=[ 215], 99.50th=[ 239], 99.90th=[ 310], 99.95th=[ 314], 00:09:41.822 | 99.99th=[ 318] 00:09:41.822 bw ( KiB/s): min=11104, max=11104, per=30.99%, avg=11104.00, stdev= 0.00, samples=1 00:09:41.822 iops : min= 2776, max= 2776, avg=2776.00, stdev= 0.00, samples=1 00:09:41.822 lat (usec) : 250=91.20%, 500=8.71%, 750=0.08% 00:09:41.822 cpu : usr=3.90%, sys=7.30%, ctx=4800, majf=0, minf=1 00:09:41.822 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:41.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.822 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.822 issued rwts: total=2238,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:41.822 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:41.822 job1: (groupid=0, jobs=1): err= 0: pid=4155781: Wed Dec 11 09:47:50 2024 00:09:41.823 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:09:41.823 slat (nsec): min=3166, max=27678, avg=8473.94, stdev=1305.08 00:09:41.823 clat (usec): min=184, max=590, avg=287.57, stdev=48.90 00:09:41.823 lat (usec): min=192, max=602, avg=296.04, stdev=49.09 00:09:41.823 clat percentiles (usec): 00:09:41.823 | 1.00th=[ 196], 5.00th=[ 210], 10.00th=[ 225], 20.00th=[ 258], 00:09:41.823 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 289], 00:09:41.823 | 70.00th=[ 297], 80.00th=[ 314], 90.00th=[ 359], 95.00th=[ 375], 00:09:41.823 | 99.00th=[ 433], 99.50th=[ 465], 99.90th=[ 529], 99.95th=[ 586], 00:09:41.823 | 99.99th=[ 594] 00:09:41.823 write: IOPS=2072, BW=8292KiB/s (8491kB/s)(8300KiB/1001msec); 0 zone resets 00:09:41.823 slat (nsec): min=3851, max=45862, avg=11761.28, stdev=2193.41 00:09:41.823 clat (usec): min=120, max=398, avg=171.39, stdev=27.78 00:09:41.823 lat (usec): min=125, max=430, avg=183.15, stdev=28.14 00:09:41.823 clat percentiles (usec): 00:09:41.823 | 1.00th=[ 133], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 151], 00:09:41.823 | 30.00th=[ 155], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 172], 00:09:41.823 | 70.00th=[ 180], 80.00th=[ 188], 90.00th=[ 204], 95.00th=[ 237], 00:09:41.823 | 99.00th=[ 262], 99.50th=[ 273], 99.90th=[ 318], 99.95th=[ 326], 00:09:41.823 | 99.99th=[ 400] 00:09:41.823 bw ( KiB/s): min= 8440, max= 8440, per=23.56%, avg=8440.00, stdev= 0.00, samples=1 00:09:41.823 iops : 
min= 2110, max= 2110, avg=2110.00, stdev= 0.00, samples=1 00:09:41.823 lat (usec) : 250=57.05%, 500=42.81%, 750=0.15% 00:09:41.823 cpu : usr=3.40%, sys=6.70%, ctx=4124, majf=0, minf=1 00:09:41.823 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:41.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.823 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.823 issued rwts: total=2048,2075,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:41.823 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:41.823 job2: (groupid=0, jobs=1): err= 0: pid=4155799: Wed Dec 11 09:47:50 2024 00:09:41.823 read: IOPS=2089, BW=8360KiB/s (8560kB/s)(8368KiB/1001msec) 00:09:41.823 slat (nsec): min=7482, max=28051, avg=9013.16, stdev=1322.53 00:09:41.823 clat (usec): min=187, max=752, avg=234.04, stdev=26.84 00:09:41.823 lat (usec): min=195, max=760, avg=243.05, stdev=26.83 00:09:41.823 clat percentiles (usec): 00:09:41.823 | 1.00th=[ 200], 5.00th=[ 208], 10.00th=[ 215], 20.00th=[ 221], 00:09:41.823 | 30.00th=[ 225], 40.00th=[ 229], 50.00th=[ 233], 60.00th=[ 237], 00:09:41.823 | 70.00th=[ 241], 80.00th=[ 247], 90.00th=[ 253], 95.00th=[ 260], 00:09:41.823 | 99.00th=[ 273], 99.50th=[ 293], 99.90th=[ 627], 99.95th=[ 668], 00:09:41.823 | 99.99th=[ 750] 00:09:41.823 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:41.823 slat (nsec): min=9401, max=38897, avg=12203.57, stdev=1861.85 00:09:41.823 clat (usec): min=121, max=4177, avg=174.04, stdev=122.52 00:09:41.823 lat (usec): min=133, max=4188, avg=186.25, stdev=122.67 00:09:41.823 clat percentiles (usec): 00:09:41.823 | 1.00th=[ 135], 5.00th=[ 143], 10.00th=[ 145], 20.00th=[ 151], 00:09:41.823 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 167], 00:09:41.823 | 70.00th=[ 174], 80.00th=[ 184], 90.00th=[ 204], 95.00th=[ 229], 00:09:41.823 | 99.00th=[ 269], 99.50th=[ 306], 99.90th=[ 2737], 99.95th=[ 3884], 00:09:41.823 | 99.99th=[ 4178] 00:09:41.823 bw ( KiB/s): min= 9632, max= 9632, per=26.88%, avg=9632.00, stdev= 0.00, samples=1 00:09:41.823 iops : min= 2408, max= 2408, avg=2408.00, stdev= 0.00, samples=1 00:09:41.823 lat (usec) : 250=92.33%, 500=7.48%, 750=0.11%, 1000=0.02% 00:09:41.823 lat (msec) : 4=0.04%, 10=0.02% 00:09:41.823 cpu : usr=2.40%, sys=5.70%, ctx=4653, majf=0, minf=1 00:09:41.823 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:41.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.823 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.823 issued rwts: total=2092,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:41.823 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:41.823 job3: (groupid=0, jobs=1): err= 0: pid=4155806: Wed Dec 11 09:47:50 2024 00:09:41.823 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:09:41.823 slat (nsec): min=7754, max=43374, avg=9150.72, stdev=1716.53 00:09:41.823 clat (usec): min=221, max=41031, avg=392.86, stdev=1794.65 00:09:41.823 lat (usec): min=230, max=41040, avg=402.01, stdev=1794.71 00:09:41.823 clat percentiles (usec): 00:09:41.823 | 1.00th=[ 237], 5.00th=[ 251], 10.00th=[ 260], 20.00th=[ 269], 00:09:41.823 | 30.00th=[ 277], 40.00th=[ 285], 50.00th=[ 293], 60.00th=[ 302], 00:09:41.823 | 70.00th=[ 318], 80.00th=[ 343], 90.00th=[ 375], 95.00th=[ 469], 00:09:41.823 | 99.00th=[ 545], 99.50th=[ 586], 99.90th=[40633], 99.95th=[41157], 00:09:41.823 | 99.99th=[41157] 
00:09:41.823 write: IOPS=1769, BW=7077KiB/s (7247kB/s)(7084KiB/1001msec); 0 zone resets 00:09:41.823 slat (nsec): min=10740, max=92192, avg=13001.24, stdev=2850.18 00:09:41.823 clat (usec): min=128, max=412, avg=197.18, stdev=37.53 00:09:41.823 lat (usec): min=140, max=426, avg=210.18, stdev=37.66 00:09:41.823 clat percentiles (usec): 00:09:41.823 | 1.00th=[ 147], 5.00th=[ 155], 10.00th=[ 161], 20.00th=[ 167], 00:09:41.823 | 30.00th=[ 174], 40.00th=[ 180], 50.00th=[ 186], 60.00th=[ 196], 00:09:41.823 | 70.00th=[ 208], 80.00th=[ 229], 90.00th=[ 253], 95.00th=[ 273], 00:09:41.823 | 99.00th=[ 302], 99.50th=[ 314], 99.90th=[ 408], 99.95th=[ 412], 00:09:41.823 | 99.99th=[ 412] 00:09:41.823 bw ( KiB/s): min= 8192, max= 8192, per=22.86%, avg=8192.00, stdev= 0.00, samples=1 00:09:41.823 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:41.823 lat (usec) : 250=49.86%, 500=48.71%, 750=1.27% 00:09:41.823 lat (msec) : 2=0.03%, 10=0.03%, 50=0.09% 00:09:41.823 cpu : usr=3.20%, sys=5.40%, ctx=3308, majf=0, minf=1 00:09:41.823 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:41.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.823 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.823 issued rwts: total=1536,1771,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:41.823 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:41.823 00:09:41.823 Run status group 0 (all jobs): 00:09:41.823 READ: bw=30.9MiB/s (32.4MB/s), 6138KiB/s-8943KiB/s (6285kB/s-9158kB/s), io=30.9MiB (32.4MB), run=1001-1001msec 00:09:41.823 WRITE: bw=35.0MiB/s (36.7MB/s), 7077KiB/s-9.99MiB/s (7247kB/s-10.5MB/s), io=35.0MiB (36.7MB), run=1001-1001msec 00:09:41.823 00:09:41.823 Disk stats (read/write): 00:09:41.823 nvme0n1: ios=2025/2048, merge=0/0, ticks=730/308, in_queue=1038, util=96.69% 00:09:41.823 nvme0n2: ios=1585/2048, merge=0/0, ticks=600/322, in_queue=922, util=97.97% 00:09:41.823 nvme0n3: ios=1870/2048, merge=0/0, ticks=724/361, in_queue=1085, util=98.64% 00:09:41.823 nvme0n4: ios=1198/1536, merge=0/0, ticks=1435/288, in_queue=1723, util=97.36% 00:09:41.823 09:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:41.823 [global] 00:09:41.823 thread=1 00:09:41.823 invalidate=1 00:09:41.823 rw=write 00:09:41.823 time_based=1 00:09:41.823 runtime=1 00:09:41.823 ioengine=libaio 00:09:41.823 direct=1 00:09:41.823 bs=4096 00:09:41.823 iodepth=128 00:09:41.823 norandommap=0 00:09:41.823 numjobs=1 00:09:41.823 00:09:41.823 verify_dump=1 00:09:41.823 verify_backlog=512 00:09:41.823 verify_state_save=0 00:09:41.823 do_verify=1 00:09:41.823 verify=crc32c-intel 00:09:41.823 [job0] 00:09:41.823 filename=/dev/nvme0n1 00:09:41.823 [job1] 00:09:41.823 filename=/dev/nvme0n2 00:09:41.823 [job2] 00:09:41.823 filename=/dev/nvme0n3 00:09:41.823 [job3] 00:09:41.823 filename=/dev/nvme0n4 00:09:41.823 Could not set queue depth (nvme0n1) 00:09:41.823 Could not set queue depth (nvme0n2) 00:09:41.823 Could not set queue depth (nvme0n3) 00:09:41.823 Could not set queue depth (nvme0n4) 00:09:41.823 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:41.823 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:41.823 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:09:41.823 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:41.823 fio-3.35 00:09:41.823 Starting 4 threads 00:09:43.192 00:09:43.192 job0: (groupid=0, jobs=1): err= 0: pid=4156228: Wed Dec 11 09:47:52 2024 00:09:43.192 read: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec) 00:09:43.192 slat (nsec): min=1106, max=15198k, avg=105811.11, stdev=737284.61 00:09:43.192 clat (usec): min=4978, max=42005, avg=13150.16, stdev=5844.70 00:09:43.192 lat (usec): min=4982, max=42014, avg=13255.97, stdev=5919.59 00:09:43.192 clat percentiles (usec): 00:09:43.192 | 1.00th=[ 6325], 5.00th=[ 7177], 10.00th=[ 7504], 20.00th=[ 9241], 00:09:43.193 | 30.00th=[ 9896], 40.00th=[10814], 50.00th=[11994], 60.00th=[12518], 00:09:43.193 | 70.00th=[13435], 80.00th=[14746], 90.00th=[24511], 95.00th=[25560], 00:09:43.193 | 99.00th=[34341], 99.50th=[36439], 99.90th=[39584], 99.95th=[42206], 00:09:43.193 | 99.99th=[42206] 00:09:43.193 write: IOPS=4229, BW=16.5MiB/s (17.3MB/s)(16.6MiB/1006msec); 0 zone resets 00:09:43.193 slat (nsec): min=1794, max=15045k, avg=122622.56, stdev=651918.54 00:09:43.193 clat (usec): min=1012, max=63168, avg=17316.42, stdev=13231.94 00:09:43.193 lat (usec): min=1018, max=63178, avg=17439.04, stdev=13307.54 00:09:43.193 clat percentiles (usec): 00:09:43.193 | 1.00th=[ 3228], 5.00th=[ 4621], 10.00th=[ 5604], 20.00th=[ 7832], 00:09:43.193 | 30.00th=[ 9372], 40.00th=[10159], 50.00th=[11863], 60.00th=[12387], 00:09:43.193 | 70.00th=[20841], 80.00th=[28967], 90.00th=[38011], 95.00th=[46400], 00:09:43.193 | 99.00th=[58459], 99.50th=[58983], 99.90th=[61604], 99.95th=[62129], 00:09:43.193 | 99.99th=[63177] 00:09:43.193 bw ( KiB/s): min=12544, max=20480, per=22.93%, avg=16512.00, stdev=5611.60, samples=2 00:09:43.193 iops : min= 3136, max= 5120, avg=4128.00, stdev=1402.90, samples=2 00:09:43.193 lat (msec) : 2=0.14%, 4=1.69%, 10=32.64%, 20=42.50%, 50=21.47% 00:09:43.193 lat (msec) : 100=1.56% 00:09:43.193 cpu : usr=3.18%, sys=4.38%, ctx=427, majf=0, minf=2 00:09:43.193 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:43.193 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.193 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:43.193 issued rwts: total=4096,4255,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:43.193 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:43.193 job1: (groupid=0, jobs=1): err= 0: pid=4156238: Wed Dec 11 09:47:52 2024 00:09:43.193 read: IOPS=5079, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1008msec) 00:09:43.193 slat (nsec): min=1382, max=13154k, avg=82951.91, stdev=594163.81 00:09:43.193 clat (usec): min=2953, max=47637, avg=11287.94, stdev=3938.31 00:09:43.193 lat (usec): min=2962, max=47639, avg=11370.90, stdev=3962.12 00:09:43.193 clat percentiles (usec): 00:09:43.193 | 1.00th=[ 4293], 5.00th=[ 7242], 10.00th=[ 8455], 20.00th=[ 8848], 00:09:43.193 | 30.00th=[ 9372], 40.00th=[ 9503], 50.00th=[ 9896], 60.00th=[10421], 00:09:43.193 | 70.00th=[12387], 80.00th=[13566], 90.00th=[16188], 95.00th=[18744], 00:09:43.193 | 99.00th=[23725], 99.50th=[28705], 99.90th=[38536], 99.95th=[43254], 00:09:43.193 | 99.99th=[47449] 00:09:43.193 write: IOPS=5244, BW=20.5MiB/s (21.5MB/s)(20.6MiB/1008msec); 0 zone resets 00:09:43.193 slat (usec): min=2, max=41162, avg=88.10, stdev=799.55 00:09:43.193 clat (usec): min=809, max=85885, avg=11869.18, stdev=11454.22 00:09:43.193 lat (usec): min=1175, max=85898, avg=11957.28, 
stdev=11540.57 00:09:43.193 clat percentiles (usec): 00:09:43.193 | 1.00th=[ 2573], 5.00th=[ 4228], 10.00th=[ 6325], 20.00th=[ 7439], 00:09:43.193 | 30.00th=[ 8094], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[ 9765], 00:09:43.193 | 70.00th=[ 9896], 80.00th=[10159], 90.00th=[16909], 95.00th=[28967], 00:09:43.193 | 99.00th=[79168], 99.50th=[80217], 99.90th=[85459], 99.95th=[85459], 00:09:43.193 | 99.99th=[85459] 00:09:43.193 bw ( KiB/s): min=16384, max=24888, per=28.65%, avg=20636.00, stdev=6013.24, samples=2 00:09:43.193 iops : min= 4096, max= 6222, avg=5159.00, stdev=1503.31, samples=2 00:09:43.193 lat (usec) : 1000=0.01% 00:09:43.193 lat (msec) : 2=0.27%, 4=2.24%, 10=61.26%, 20=30.56%, 50=4.29% 00:09:43.193 lat (msec) : 100=1.37% 00:09:43.193 cpu : usr=4.07%, sys=6.26%, ctx=682, majf=0, minf=1 00:09:43.193 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:43.193 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.193 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:43.193 issued rwts: total=5120,5286,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:43.193 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:43.193 job2: (groupid=0, jobs=1): err= 0: pid=4156239: Wed Dec 11 09:47:52 2024 00:09:43.193 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:09:43.193 slat (nsec): min=1139, max=12257k, avg=83054.12, stdev=532683.31 00:09:43.193 clat (usec): min=2543, max=29083, avg=11541.22, stdev=3554.66 00:09:43.193 lat (usec): min=2550, max=29084, avg=11624.28, stdev=3571.97 00:09:43.193 clat percentiles (usec): 00:09:43.193 | 1.00th=[ 2606], 5.00th=[ 6587], 10.00th=[ 8455], 20.00th=[10028], 00:09:43.193 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11469], 60.00th=[11731], 00:09:43.193 | 70.00th=[11994], 80.00th=[12518], 90.00th=[14222], 95.00th=[18220], 00:09:43.193 | 99.00th=[25297], 99.50th=[28967], 99.90th=[28967], 99.95th=[28967], 00:09:43.193 | 99.99th=[28967] 00:09:43.193 write: IOPS=5519, BW=21.6MiB/s (22.6MB/s)(21.6MiB/1003msec); 0 zone resets 00:09:43.193 slat (usec): min=2, max=7932, avg=89.79, stdev=474.67 00:09:43.193 clat (usec): min=355, max=73031, avg=12247.95, stdev=8339.64 00:09:43.193 lat (usec): min=503, max=73045, avg=12337.74, stdev=8382.92 00:09:43.193 clat percentiles (usec): 00:09:43.193 | 1.00th=[ 2278], 5.00th=[ 4752], 10.00th=[ 7308], 20.00th=[ 9634], 00:09:43.193 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11338], 60.00th=[11600], 00:09:43.193 | 70.00th=[11731], 80.00th=[12125], 90.00th=[15401], 95.00th=[16909], 00:09:43.193 | 99.00th=[62653], 99.50th=[70779], 99.90th=[72877], 99.95th=[72877], 00:09:43.193 | 99.99th=[72877] 00:09:43.193 bw ( KiB/s): min=20544, max=22720, per=30.04%, avg=21632.00, stdev=1538.66, samples=2 00:09:43.193 iops : min= 5136, max= 5680, avg=5408.00, stdev=384.67, samples=2 00:09:43.193 lat (usec) : 500=0.05% 00:09:43.193 lat (msec) : 2=0.40%, 4=3.37%, 10=17.26%, 20=75.45%, 50=2.58% 00:09:43.193 lat (msec) : 100=0.89% 00:09:43.193 cpu : usr=3.69%, sys=4.59%, ctx=568, majf=0, minf=1 00:09:43.193 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:43.193 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.193 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:43.193 issued rwts: total=5120,5536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:43.193 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:43.193 job3: (groupid=0, jobs=1): err= 0: 
pid=4156240: Wed Dec 11 09:47:52 2024 00:09:43.193 read: IOPS=2794, BW=10.9MiB/s (11.4MB/s)(11.0MiB/1004msec) 00:09:43.193 slat (nsec): min=1457, max=9657.4k, avg=122122.58, stdev=757095.51 00:09:43.193 clat (usec): min=2117, max=35711, avg=15023.29, stdev=3564.52 00:09:43.193 lat (usec): min=8313, max=41812, avg=15145.41, stdev=3636.65 00:09:43.193 clat percentiles (usec): 00:09:43.193 | 1.00th=[ 8717], 5.00th=[10290], 10.00th=[11469], 20.00th=[12518], 00:09:43.193 | 30.00th=[13960], 40.00th=[14222], 50.00th=[14746], 60.00th=[15139], 00:09:43.193 | 70.00th=[15533], 80.00th=[16319], 90.00th=[18482], 95.00th=[21365], 00:09:43.193 | 99.00th=[30278], 99.50th=[33424], 99.90th=[35914], 99.95th=[35914], 00:09:43.193 | 99.99th=[35914] 00:09:43.193 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:09:43.193 slat (usec): min=2, max=40800, avg=205.68, stdev=1151.52 00:09:43.193 clat (usec): min=7119, max=59746, avg=25128.76, stdev=14165.40 00:09:43.193 lat (usec): min=7129, max=72053, avg=25334.44, stdev=14290.95 00:09:43.193 clat percentiles (usec): 00:09:43.193 | 1.00th=[10028], 5.00th=[10421], 10.00th=[11207], 20.00th=[13435], 00:09:43.193 | 30.00th=[14877], 40.00th=[15533], 50.00th=[19530], 60.00th=[21103], 00:09:43.193 | 70.00th=[33424], 80.00th=[42206], 90.00th=[47973], 95.00th=[50594], 00:09:43.193 | 99.00th=[56886], 99.50th=[56886], 99.90th=[59507], 99.95th=[59507], 00:09:43.193 | 99.99th=[59507] 00:09:43.193 bw ( KiB/s): min=11656, max=12920, per=17.06%, avg=12288.00, stdev=893.78, samples=2 00:09:43.193 iops : min= 2914, max= 3230, avg=3072.00, stdev=223.45, samples=2 00:09:43.193 lat (msec) : 4=0.02%, 10=2.36%, 20=70.11%, 50=24.06%, 100=3.45% 00:09:43.193 cpu : usr=2.59%, sys=5.28%, ctx=287, majf=0, minf=1 00:09:43.193 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:09:43.193 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.193 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:43.193 issued rwts: total=2806,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:43.193 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:43.193 00:09:43.193 Run status group 0 (all jobs): 00:09:43.193 READ: bw=66.4MiB/s (69.7MB/s), 10.9MiB/s-19.9MiB/s (11.4MB/s-20.9MB/s), io=67.0MiB (70.2MB), run=1003-1008msec 00:09:43.193 WRITE: bw=70.3MiB/s (73.7MB/s), 12.0MiB/s-21.6MiB/s (12.5MB/s-22.6MB/s), io=70.9MiB (74.3MB), run=1003-1008msec 00:09:43.193 00:09:43.193 Disk stats (read/write): 00:09:43.193 nvme0n1: ios=3634/3711, merge=0/0, ticks=25938/32556, in_queue=58494, util=85.97% 00:09:43.193 nvme0n2: ios=4148/4119, merge=0/0, ticks=43205/43382, in_queue=86587, util=94.39% 00:09:43.193 nvme0n3: ios=4375/4608, merge=0/0, ticks=31070/35105, in_queue=66175, util=97.80% 00:09:43.193 nvme0n4: ios=2382/2560, merge=0/0, ticks=18107/30512, in_queue=48619, util=100.00% 00:09:43.193 09:47:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:43.193 [global] 00:09:43.193 thread=1 00:09:43.193 invalidate=1 00:09:43.193 rw=randwrite 00:09:43.193 time_based=1 00:09:43.193 runtime=1 00:09:43.193 ioengine=libaio 00:09:43.193 direct=1 00:09:43.193 bs=4096 00:09:43.193 iodepth=128 00:09:43.193 norandommap=0 00:09:43.193 numjobs=1 00:09:43.193 00:09:43.193 verify_dump=1 00:09:43.193 verify_backlog=512 00:09:43.193 verify_state_save=0 00:09:43.193 do_verify=1 00:09:43.193 
verify=crc32c-intel 00:09:43.193 [job0] 00:09:43.193 filename=/dev/nvme0n1 00:09:43.193 [job1] 00:09:43.193 filename=/dev/nvme0n2 00:09:43.193 [job2] 00:09:43.193 filename=/dev/nvme0n3 00:09:43.193 [job3] 00:09:43.193 filename=/dev/nvme0n4 00:09:43.193 Could not set queue depth (nvme0n1) 00:09:43.193 Could not set queue depth (nvme0n2) 00:09:43.193 Could not set queue depth (nvme0n3) 00:09:43.193 Could not set queue depth (nvme0n4) 00:09:43.450 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:43.450 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:43.450 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:43.450 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:43.450 fio-3.35 00:09:43.450 Starting 4 threads 00:09:44.820 00:09:44.820 job0: (groupid=0, jobs=1): err= 0: pid=4156613: Wed Dec 11 09:47:54 2024 00:09:44.820 read: IOPS=4550, BW=17.8MiB/s (18.6MB/s)(17.9MiB/1007msec) 00:09:44.820 slat (nsec): min=1093, max=16474k, avg=109419.58, stdev=754733.37 00:09:44.820 clat (usec): min=1101, max=47472, avg=14202.04, stdev=6643.95 00:09:44.820 lat (usec): min=1109, max=47496, avg=14311.46, stdev=6704.05 00:09:44.820 clat percentiles (usec): 00:09:44.820 | 1.00th=[ 4228], 5.00th=[ 6915], 10.00th=[ 8160], 20.00th=[ 9372], 00:09:44.820 | 30.00th=[10552], 40.00th=[11207], 50.00th=[11600], 60.00th=[13566], 00:09:44.820 | 70.00th=[15795], 80.00th=[18744], 90.00th=[24511], 95.00th=[26870], 00:09:44.820 | 99.00th=[36439], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:44.820 | 99.99th=[47449] 00:09:44.820 write: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec); 0 zone resets 00:09:44.820 slat (nsec): min=1885, max=11132k, avg=97192.13, stdev=607672.67 00:09:44.820 clat (usec): min=936, max=41137, avg=13581.52, stdev=6568.86 00:09:44.820 lat (usec): min=1049, max=41144, avg=13678.71, stdev=6619.88 00:09:44.820 clat percentiles (usec): 00:09:44.820 | 1.00th=[ 3326], 5.00th=[ 5604], 10.00th=[ 7373], 20.00th=[ 8848], 00:09:44.820 | 30.00th=[10028], 40.00th=[11076], 50.00th=[11338], 60.00th=[12911], 00:09:44.820 | 70.00th=[15139], 80.00th=[17957], 90.00th=[24773], 95.00th=[26870], 00:09:44.820 | 99.00th=[34341], 99.50th=[34866], 99.90th=[36963], 99.95th=[36963], 00:09:44.820 | 99.99th=[41157] 00:09:44.820 bw ( KiB/s): min=16384, max=20480, per=26.29%, avg=18432.00, stdev=2896.31, samples=2 00:09:44.820 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:09:44.820 lat (usec) : 1000=0.01% 00:09:44.820 lat (msec) : 2=0.09%, 4=0.88%, 10=26.53%, 20=57.32%, 50=15.17% 00:09:44.820 cpu : usr=2.98%, sys=4.77%, ctx=406, majf=0, minf=1 00:09:44.820 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:44.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.820 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:44.820 issued rwts: total=4582,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.820 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:44.820 job1: (groupid=0, jobs=1): err= 0: pid=4156615: Wed Dec 11 09:47:54 2024 00:09:44.820 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:09:44.820 slat (nsec): min=1305, max=15252k, avg=111499.99, stdev=685916.57 00:09:44.820 clat (usec): min=5608, max=39486, avg=13305.80, 
stdev=6660.68 00:09:44.820 lat (usec): min=5612, max=39514, avg=13417.30, stdev=6710.11 00:09:44.820 clat percentiles (usec): 00:09:44.820 | 1.00th=[ 6325], 5.00th=[ 7504], 10.00th=[ 8160], 20.00th=[ 8848], 00:09:44.820 | 30.00th=[ 9634], 40.00th=[10290], 50.00th=[11076], 60.00th=[11600], 00:09:44.820 | 70.00th=[13042], 80.00th=[15401], 90.00th=[23987], 95.00th=[31589], 00:09:44.820 | 99.00th=[36963], 99.50th=[36963], 99.90th=[36963], 99.95th=[36963], 00:09:44.820 | 99.99th=[39584] 00:09:44.820 write: IOPS=4399, BW=17.2MiB/s (18.0MB/s)(17.2MiB/1002msec); 0 zone resets 00:09:44.820 slat (nsec): min=1977, max=20032k, avg=117582.88, stdev=763296.09 00:09:44.820 clat (usec): min=1493, max=50231, avg=16156.63, stdev=8908.99 00:09:44.820 lat (usec): min=1497, max=50249, avg=16274.21, stdev=8972.14 00:09:44.820 clat percentiles (usec): 00:09:44.820 | 1.00th=[ 4178], 5.00th=[ 6783], 10.00th=[ 7898], 20.00th=[ 8291], 00:09:44.820 | 30.00th=[ 9896], 40.00th=[10945], 50.00th=[11994], 60.00th=[15664], 00:09:44.820 | 70.00th=[19530], 80.00th=[24511], 90.00th=[29230], 95.00th=[34341], 00:09:44.820 | 99.00th=[41681], 99.50th=[43779], 99.90th=[45351], 99.95th=[45351], 00:09:44.820 | 99.99th=[50070] 00:09:44.820 bw ( KiB/s): min=15336, max=15336, per=21.87%, avg=15336.00, stdev= 0.00, samples=1 00:09:44.820 iops : min= 3834, max= 3834, avg=3834.00, stdev= 0.00, samples=1 00:09:44.820 lat (msec) : 2=0.22%, 10=32.47%, 20=45.77%, 50=21.53%, 100=0.01% 00:09:44.820 cpu : usr=3.50%, sys=4.60%, ctx=503, majf=0, minf=1 00:09:44.820 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:44.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.820 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:44.820 issued rwts: total=4096,4408,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.820 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:44.820 job2: (groupid=0, jobs=1): err= 0: pid=4156619: Wed Dec 11 09:47:54 2024 00:09:44.820 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.1MiB/1011msec) 00:09:44.820 slat (nsec): min=1532, max=13852k, avg=120795.24, stdev=851874.12 00:09:44.820 clat (usec): min=4830, max=41259, avg=14066.99, stdev=4924.19 00:09:44.820 lat (usec): min=4843, max=41261, avg=14187.78, stdev=4983.04 00:09:44.820 clat percentiles (usec): 00:09:44.820 | 1.00th=[ 5473], 5.00th=[ 8586], 10.00th=[ 9896], 20.00th=[10814], 00:09:44.820 | 30.00th=[12125], 40.00th=[12911], 50.00th=[13173], 60.00th=[13698], 00:09:44.820 | 70.00th=[14222], 80.00th=[16057], 90.00th=[19006], 95.00th=[23200], 00:09:44.820 | 99.00th=[38011], 99.50th=[39584], 99.90th=[41157], 99.95th=[41157], 00:09:44.820 | 99.99th=[41157] 00:09:44.820 write: IOPS=4051, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1011msec); 0 zone resets 00:09:44.820 slat (usec): min=2, max=10094, avg=132.50, stdev=591.09 00:09:44.820 clat (usec): min=3568, max=53759, avg=18914.65, stdev=10176.90 00:09:44.820 lat (usec): min=3578, max=53771, avg=19047.15, stdev=10240.40 00:09:44.820 clat percentiles (usec): 00:09:44.820 | 1.00th=[ 4015], 5.00th=[ 6915], 10.00th=[ 8979], 20.00th=[11076], 00:09:44.820 | 30.00th=[11600], 40.00th=[12780], 50.00th=[13304], 60.00th=[19792], 00:09:44.820 | 70.00th=[24511], 80.00th=[29230], 90.00th=[33162], 95.00th=[36439], 00:09:44.820 | 99.00th=[49021], 99.50th=[52691], 99.90th=[53740], 99.95th=[53740], 00:09:44.820 | 99.99th=[53740] 00:09:44.820 bw ( KiB/s): min=15040, max=16880, per=22.76%, avg=15960.00, stdev=1301.08, samples=2 00:09:44.820 iops : min= 3760, max= 
4220, avg=3990.00, stdev=325.27, samples=2 00:09:44.820 lat (msec) : 4=0.47%, 10=12.43%, 20=62.72%, 50=23.90%, 100=0.48% 00:09:44.820 cpu : usr=2.48%, sys=5.84%, ctx=496, majf=0, minf=1 00:09:44.820 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:44.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.820 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:44.820 issued rwts: total=3606,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.820 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:44.821 job3: (groupid=0, jobs=1): err= 0: pid=4156621: Wed Dec 11 09:47:54 2024 00:09:44.821 read: IOPS=4361, BW=17.0MiB/s (17.9MB/s)(17.1MiB/1005msec) 00:09:44.821 slat (nsec): min=1038, max=17202k, avg=102003.96, stdev=662957.88 00:09:44.821 clat (usec): min=833, max=28760, avg=13302.46, stdev=3748.10 00:09:44.821 lat (usec): min=3054, max=28766, avg=13404.47, stdev=3752.21 00:09:44.821 clat percentiles (usec): 00:09:44.821 | 1.00th=[ 5800], 5.00th=[ 6915], 10.00th=[ 9765], 20.00th=[11207], 00:09:44.821 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12911], 60.00th=[13435], 00:09:44.821 | 70.00th=[13960], 80.00th=[14615], 90.00th=[17433], 95.00th=[20317], 00:09:44.821 | 99.00th=[26870], 99.50th=[28705], 99.90th=[28705], 99.95th=[28705], 00:09:44.821 | 99.99th=[28705] 00:09:44.821 write: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets 00:09:44.821 slat (nsec): min=1759, max=21338k, avg=114859.67, stdev=745002.49 00:09:44.821 clat (usec): min=1032, max=49986, avg=14939.51, stdev=7879.76 00:09:44.821 lat (usec): min=1054, max=50014, avg=15054.37, stdev=7933.51 00:09:44.821 clat percentiles (usec): 00:09:44.821 | 1.00th=[ 4228], 5.00th=[ 7504], 10.00th=[ 9503], 20.00th=[10683], 00:09:44.821 | 30.00th=[11076], 40.00th=[11731], 50.00th=[12387], 60.00th=[13173], 00:09:44.821 | 70.00th=[14091], 80.00th=[17957], 90.00th=[26608], 95.00th=[35390], 00:09:44.821 | 99.00th=[48497], 99.50th=[48497], 99.90th=[49021], 99.95th=[49021], 00:09:44.821 | 99.99th=[50070] 00:09:44.821 bw ( KiB/s): min=16384, max=20480, per=26.29%, avg=18432.00, stdev=2896.31, samples=2 00:09:44.821 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:09:44.821 lat (usec) : 1000=0.01% 00:09:44.821 lat (msec) : 2=0.02%, 4=0.67%, 10=11.19%, 20=77.37%, 50=10.74% 00:09:44.821 cpu : usr=1.79%, sys=4.68%, ctx=416, majf=0, minf=2 00:09:44.821 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:44.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.821 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:44.821 issued rwts: total=4383,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.821 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:44.821 00:09:44.821 Run status group 0 (all jobs): 00:09:44.821 READ: bw=64.4MiB/s (67.5MB/s), 13.9MiB/s-17.8MiB/s (14.6MB/s-18.6MB/s), io=65.1MiB (68.3MB), run=1002-1011msec 00:09:44.821 WRITE: bw=68.5MiB/s (71.8MB/s), 15.8MiB/s-17.9MiB/s (16.6MB/s-18.8MB/s), io=69.2MiB (72.6MB), run=1002-1011msec 00:09:44.821 00:09:44.821 Disk stats (read/write): 00:09:44.821 nvme0n1: ios=4116/4253, merge=0/0, ticks=37713/41380, in_queue=79093, util=84.87% 00:09:44.821 nvme0n2: ios=3195/3584, merge=0/0, ticks=21609/30153, in_queue=51762, util=90.03% 00:09:44.821 nvme0n3: ios=3255/3584, merge=0/0, ticks=44818/60007, in_queue=104825, util=93.19% 00:09:44.821 nvme0n4: ios=3641/3688, merge=0/0, 
ticks=18141/22238, in_queue=40379, util=95.35% 00:09:44.821 09:47:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:44.821 09:47:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=4156844 00:09:44.821 09:47:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:44.821 09:47:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:44.821 [global] 00:09:44.821 thread=1 00:09:44.821 invalidate=1 00:09:44.821 rw=read 00:09:44.821 time_based=1 00:09:44.821 runtime=10 00:09:44.821 ioengine=libaio 00:09:44.821 direct=1 00:09:44.821 bs=4096 00:09:44.821 iodepth=1 00:09:44.821 norandommap=1 00:09:44.821 numjobs=1 00:09:44.821 00:09:44.821 [job0] 00:09:44.821 filename=/dev/nvme0n1 00:09:44.821 [job1] 00:09:44.821 filename=/dev/nvme0n2 00:09:44.821 [job2] 00:09:44.821 filename=/dev/nvme0n3 00:09:44.821 [job3] 00:09:44.821 filename=/dev/nvme0n4 00:09:44.821 Could not set queue depth (nvme0n1) 00:09:44.821 Could not set queue depth (nvme0n2) 00:09:44.821 Could not set queue depth (nvme0n3) 00:09:44.821 Could not set queue depth (nvme0n4) 00:09:45.078 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:45.078 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:45.078 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:45.078 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:45.078 fio-3.35 00:09:45.078 Starting 4 threads 00:09:48.350 09:47:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:48.350 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=12083200, buflen=4096 00:09:48.350 fio: pid=4156987, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:48.350 09:47:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:48.350 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=692224, buflen=4096 00:09:48.350 fio: pid=4156986, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:48.350 09:47:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:48.350 09:47:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:48.350 09:47:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:48.350 09:47:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:48.350 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=315392, buflen=4096 00:09:48.350 fio: pid=4156984, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:48.607 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=29810688, 
buflen=4096 00:09:48.607 fio: pid=4156985, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:48.607 09:47:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:48.607 09:47:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:48.607 00:09:48.607 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4156984: Wed Dec 11 09:47:58 2024 00:09:48.607 read: IOPS=24, BW=97.4KiB/s (99.7kB/s)(308KiB/3163msec) 00:09:48.607 slat (usec): min=7, max=18797, avg=260.48, stdev=2126.14 00:09:48.607 clat (usec): min=382, max=42925, avg=40533.95, stdev=4645.91 00:09:48.607 lat (usec): min=414, max=60036, avg=40797.58, stdev=5147.68 00:09:48.607 clat percentiles (usec): 00:09:48.607 | 1.00th=[ 383], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:48.607 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:48.607 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:09:48.607 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:09:48.607 | 99.99th=[42730] 00:09:48.607 bw ( KiB/s): min= 94, max= 104, per=0.78%, avg=97.00, stdev= 3.52, samples=6 00:09:48.607 iops : min= 23, max= 26, avg=24.17, stdev= 0.98, samples=6 00:09:48.607 lat (usec) : 500=1.28% 00:09:48.607 lat (msec) : 50=97.44% 00:09:48.607 cpu : usr=0.09%, sys=0.00%, ctx=80, majf=0, minf=1 00:09:48.607 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:48.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.607 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.607 issued rwts: total=78,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:48.607 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:48.607 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4156985: Wed Dec 11 09:47:58 2024 00:09:48.607 read: IOPS=2172, BW=8690KiB/s (8899kB/s)(28.4MiB/3350msec) 00:09:48.607 slat (usec): min=5, max=11139, avg=14.00, stdev=223.38 00:09:48.607 clat (usec): min=154, max=42451, avg=441.55, stdev=3116.46 00:09:48.607 lat (usec): min=160, max=42460, avg=454.37, stdev=3123.46 00:09:48.607 clat percentiles (usec): 00:09:48.607 | 1.00th=[ 163], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 184], 00:09:48.607 | 30.00th=[ 190], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 206], 00:09:48.607 | 70.00th=[ 210], 80.00th=[ 217], 90.00th=[ 223], 95.00th=[ 231], 00:09:48.607 | 99.00th=[ 281], 99.50th=[40633], 99.90th=[41681], 99.95th=[42206], 00:09:48.607 | 99.99th=[42206] 00:09:48.607 bw ( KiB/s): min= 168, max=18057, per=62.70%, avg=7842.83, stdev=8599.16, samples=6 00:09:48.607 iops : min= 42, max= 4514, avg=1960.67, stdev=2149.73, samples=6 00:09:48.607 lat (usec) : 250=98.30%, 500=1.06%, 750=0.01% 00:09:48.607 lat (msec) : 10=0.01%, 20=0.03%, 50=0.58% 00:09:48.607 cpu : usr=0.87%, sys=2.03%, ctx=7284, majf=0, minf=2 00:09:48.607 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:48.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.607 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.607 issued rwts: total=7279,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:48.607 latency : target=0, 
window=0, percentile=100.00%, depth=1 00:09:48.607 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4156986: Wed Dec 11 09:47:58 2024 00:09:48.607 read: IOPS=58, BW=231KiB/s (236kB/s)(676KiB/2930msec) 00:09:48.607 slat (nsec): min=7073, max=30549, avg=14250.25, stdev=7116.54 00:09:48.607 clat (usec): min=218, max=44996, avg=17193.79, stdev=20147.10 00:09:48.607 lat (usec): min=228, max=45009, avg=17207.98, stdev=20153.42 00:09:48.607 clat percentiles (usec): 00:09:48.607 | 1.00th=[ 229], 5.00th=[ 239], 10.00th=[ 247], 20.00th=[ 269], 00:09:48.607 | 30.00th=[ 281], 40.00th=[ 314], 50.00th=[ 383], 60.00th=[40633], 00:09:48.607 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:48.607 | 99.00th=[42206], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:09:48.607 | 99.99th=[44827] 00:09:48.607 bw ( KiB/s): min= 96, max= 880, per=2.03%, avg=254.40, stdev=349.74, samples=5 00:09:48.607 iops : min= 24, max= 220, avg=63.60, stdev=87.43, samples=5 00:09:48.607 lat (usec) : 250=11.18%, 500=47.06% 00:09:48.607 lat (msec) : 50=41.18% 00:09:48.607 cpu : usr=0.14%, sys=0.03%, ctx=170, majf=0, minf=2 00:09:48.607 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:48.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.608 complete : 0=0.6%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.608 issued rwts: total=170,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:48.608 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:48.608 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4156987: Wed Dec 11 09:47:58 2024 00:09:48.608 read: IOPS=1083, BW=4332KiB/s (4436kB/s)(11.5MiB/2724msec) 00:09:48.608 slat (nsec): min=7245, max=47260, avg=8964.81, stdev=2666.15 00:09:48.608 clat (usec): min=194, max=41978, avg=905.15, stdev=5195.75 00:09:48.608 lat (usec): min=203, max=42000, avg=914.12, stdev=5197.46 00:09:48.608 clat percentiles (usec): 00:09:48.608 | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 217], 00:09:48.608 | 30.00th=[ 221], 40.00th=[ 225], 50.00th=[ 227], 60.00th=[ 231], 00:09:48.608 | 70.00th=[ 237], 80.00th=[ 241], 90.00th=[ 249], 95.00th=[ 265], 00:09:48.608 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:09:48.608 | 99.99th=[42206] 00:09:48.608 bw ( KiB/s): min= 96, max=16320, per=27.02%, avg=3379.20, stdev=7234.30, samples=5 00:09:48.608 iops : min= 24, max= 4080, avg=844.80, stdev=1808.57, samples=5 00:09:48.608 lat (usec) : 250=90.34%, 500=7.86%, 750=0.03% 00:09:48.608 lat (msec) : 4=0.07%, 50=1.66% 00:09:48.608 cpu : usr=0.40%, sys=1.98%, ctx=2954, majf=0, minf=2 00:09:48.608 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:48.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.608 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.608 issued rwts: total=2951,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:48.608 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:48.608 00:09:48.608 Run status group 0 (all jobs): 00:09:48.608 READ: bw=12.2MiB/s (12.8MB/s), 97.4KiB/s-8690KiB/s (99.7kB/s-8899kB/s), io=40.9MiB (42.9MB), run=2724-3350msec 00:09:48.608 00:09:48.608 Disk stats (read/write): 00:09:48.608 nvme0n1: ios=75/0, merge=0/0, ticks=3041/0, in_queue=3041, util=94.98% 00:09:48.608 nvme0n2: ios=7279/0, merge=0/0, ticks=3169/0, in_queue=3169, 
util=94.80% 00:09:48.608 nvme0n3: ios=167/0, merge=0/0, ticks=2824/0, in_queue=2824, util=96.48% 00:09:48.608 nvme0n4: ios=2560/0, merge=0/0, ticks=3551/0, in_queue=3551, util=99.07% 00:09:48.865 09:47:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:48.865 09:47:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:49.121 09:47:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:49.121 09:47:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:49.121 09:47:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:49.121 09:47:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:49.378 09:47:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:49.378 09:47:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:49.635 09:47:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:49.635 09:47:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 4156844 00:09:49.635 09:47:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:49.635 09:47:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:49.635 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:49.635 09:47:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:49.635 09:47:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:09:49.635 09:47:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:49.635 09:47:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:49.635 09:47:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:49.635 09:47:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:49.635 09:47:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:09:49.635 09:47:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:49.635 09:47:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:49.635 nvmf hotplug test: fio failed as expected 00:09:49.636 09:47:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:49.893 09:47:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f 
./local-job0-0-verify.state 00:09:49.893 09:47:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:49.893 09:47:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:49.893 09:47:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:49.893 09:47:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:49.893 09:47:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:49.893 09:47:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:49.893 09:47:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:49.893 09:47:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:49.893 09:47:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:49.893 09:47:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:49.893 rmmod nvme_tcp 00:09:49.893 rmmod nvme_fabrics 00:09:49.893 rmmod nvme_keyring 00:09:49.893 09:47:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:49.893 09:47:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:49.893 09:47:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:49.893 09:47:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 4153936 ']' 00:09:49.893 09:47:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 4153936 00:09:49.893 09:47:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 4153936 ']' 00:09:49.893 09:47:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 4153936 00:09:49.893 09:47:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:09:49.893 09:47:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:49.893 09:47:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4153936 00:09:50.152 09:47:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:50.152 09:47:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:50.152 09:47:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4153936' 00:09:50.152 killing process with pid 4153936 00:09:50.152 09:47:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 4153936 00:09:50.152 09:47:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 4153936 00:09:50.152 09:47:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:50.152 09:47:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:50.152 09:47:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:50.152 09:47:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:50.152 09:47:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 
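
The io_u 'Operation not supported' errors earlier in this stage are the expected outcome, not a regression: the fio-wrapper reads run against /dev/nvme0n1..n4 while the backing raid/concat/malloc bdevs are deleted out from under them over RPC, so fio exits nonzero (fio_status=4) and the script prints 'nvmf hotplug test: fio failed as expected'. The disconnect check that follows is keyed on the namespace serial. A minimal sketch of that wait, assuming a bounded retry loop like waitforserial_disconnect's (the bound itself is not visible in this trace):

    # Tear down the initiator connection, then poll lsblk until no block
    # device advertises the subsystem serial anymore.
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    for i in $(seq 1 100); do               # retry bound is an assumption
        lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME || break
        sleep 1
    done
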
00:09:50.152 09:47:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:50.152 09:47:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:50.152 09:47:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:50.152 09:47:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:50.152 09:47:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.152 09:47:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:50.152 09:47:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.689 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:52.689 00:09:52.689 real 0m28.385s 00:09:52.689 user 1m50.857s 00:09:52.689 sys 0m9.197s 00:09:52.689 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:52.689 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:52.689 ************************************ 00:09:52.689 END TEST nvmf_fio_target 00:09:52.689 ************************************ 00:09:52.689 09:48:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:52.689 09:48:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:52.689 09:48:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:52.689 09:48:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:52.689 ************************************ 00:09:52.689 START TEST nvmf_bdevio 00:09:52.689 ************************************ 00:09:52.689 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:52.689 * Looking for test storage... 
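
The nvmftestfini teardown traced above unloads the initiator-side kernel modules, kills the target reactor, and then removes only the state this test created, so the nvmf_bdevio run starting here inherits a clean host. Condensed, and assuming _remove_spdk_ns amounts to deleting the cvl_0_0_ns_spdk namespace (its body is redirected away in this trace):

    # Strip only the firewall rules SPDK tagged with an SPDK_NVMF comment;
    # every other rule survives the save/filter/restore round trip.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # Drop the target-side namespace and flush the leftover initiator address.
    ip netns delete cvl_0_0_ns_spdk         # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1
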
00:09:52.689 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:52.689 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:52.689 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:09:52.689 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:52.689 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:52.689 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:52.689 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:52.689 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:52.689 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:52.689 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:52.689 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:52.689 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:52.689 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:52.689 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:52.689 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:52.689 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:52.689 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:52.689 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:52.689 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:52.689 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:52.689 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:52.689 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:52.689 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:52.689 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:52.689 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:52.689 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:52.689 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:52.689 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:52.689 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:52.689 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:52.690 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:52.690 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:52.690 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:52.690 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:52.690 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:52.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.690 --rc genhtml_branch_coverage=1 00:09:52.690 --rc genhtml_function_coverage=1 00:09:52.690 --rc genhtml_legend=1 00:09:52.690 --rc geninfo_all_blocks=1 00:09:52.690 --rc geninfo_unexecuted_blocks=1 00:09:52.690 00:09:52.690 ' 00:09:52.690 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:52.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.690 --rc genhtml_branch_coverage=1 00:09:52.690 --rc genhtml_function_coverage=1 00:09:52.690 --rc genhtml_legend=1 00:09:52.690 --rc geninfo_all_blocks=1 00:09:52.690 --rc geninfo_unexecuted_blocks=1 00:09:52.690 00:09:52.690 ' 00:09:52.690 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:52.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.690 --rc genhtml_branch_coverage=1 00:09:52.690 --rc genhtml_function_coverage=1 00:09:52.690 --rc genhtml_legend=1 00:09:52.690 --rc geninfo_all_blocks=1 00:09:52.690 --rc geninfo_unexecuted_blocks=1 00:09:52.690 00:09:52.690 ' 00:09:52.690 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:52.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.690 --rc genhtml_branch_coverage=1 00:09:52.690 --rc genhtml_function_coverage=1 00:09:52.690 --rc genhtml_legend=1 00:09:52.690 --rc geninfo_all_blocks=1 00:09:52.690 --rc geninfo_unexecuted_blocks=1 00:09:52.690 00:09:52.690 ' 00:09:52.690 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:52.690 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:52.690 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:52.690 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:52.690 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:52.690 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:52.690 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:52.690 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:52.690 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:52.690 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:52.690 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:52.690 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:52.690 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:09:52.690 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:09:52.690 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:52.690 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:52.690 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:52.690 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:52.690 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:52.690 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:52.690 09:48:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:52.690 09:48:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:52.690 09:48:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:52.690 09:48:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.690 09:48:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.690 09:48:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.690 09:48:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:52.690 09:48:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.690 09:48:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:52.690 09:48:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:52.690 09:48:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:52.690 09:48:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:52.690 09:48:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:52.690 09:48:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:52.690 09:48:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:52.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:52.690 09:48:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:52.690 09:48:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:52.690 09:48:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:52.690 09:48:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:52.690 09:48:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:52.690 09:48:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:09:52.690 09:48:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:52.690 09:48:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:52.690 09:48:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:52.690 09:48:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:52.690 09:48:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:52.690 09:48:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.690 09:48:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:52.690 09:48:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.690 09:48:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:52.690 09:48:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:52.690 09:48:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:52.690 09:48:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:59.261 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:59.261 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:09:59.261 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:59.261 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:59.261 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:59.261 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:59.261 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:59.261 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:09:59.261 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:59.261 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:09:59.261 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:09:59.261 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:09:59.261 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:09:59.261 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:09:59.261 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:09:59.261 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:59.261 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:59.262 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:59.262 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:59.262 09:48:08 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:59.262 Found net devices under 0000:af:00.0: cvl_0_0 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:59.262 Found net devices under 0000:af:00.1: cvl_0_1 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:59.262 
09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:59.262 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:59.262 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.403 ms 00:09:59.262 00:09:59.262 --- 10.0.0.2 ping statistics --- 00:09:59.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.262 rtt min/avg/max/mdev = 0.403/0.403/0.403/0.000 ms 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:59.262 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:59.262 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:09:59.262 00:09:59.262 --- 10.0.0.1 ping statistics --- 00:09:59.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.262 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:59.262 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:59.521 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:59.521 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:59.521 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:59.521 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:59.521 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=4162022 00:09:59.521 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 4162022 00:09:59.521 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:59.521 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 4162022 ']' 00:09:59.521 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.521 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:59.521 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.521 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:59.521 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:59.521 [2024-12-11 09:48:08.901131] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
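
The network plumbing traced just before this target launch is the heart of nvmftestinit on a phy rig: the two e810 ports discovered under 0000:af:00.0/1 surface as cvl_0_0 and cvl_0_1, and the target port is moved into a private network namespace so initiator and target run separate TCP/IP stacks on one host. Collected from the trace above:

    # Target interface lives in its own namespace; the initiator stays in
    # the root namespace. Addressing: 10.0.0.1 initiator, 10.0.0.2 target.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open TCP/4420 on the initiator side; the comment tag is what lets
    # nvmftestfini find and strip exactly this rule later.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # Sanity pings in both directions before any NVMe/TCP traffic.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

nvmf_tgt itself then runs inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x78, as traced just above; core mask 0x78 is why the reactors come up on cores 3-6). RPCs keep working from the root namespace because /var/tmp/spdk.sock is a path-bound Unix socket, shared through the filesystem rather than the namespaced network stack.
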
00:09:59.521 [2024-12-11 09:48:08.901179] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:59.521 [2024-12-11 09:48:08.985680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:59.521 [2024-12-11 09:48:09.026548] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:59.521 [2024-12-11 09:48:09.026591] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:59.521 [2024-12-11 09:48:09.026598] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:59.521 [2024-12-11 09:48:09.026604] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:59.521 [2024-12-11 09:48:09.026609] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:59.521 [2024-12-11 09:48:09.028262] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:09:59.521 [2024-12-11 09:48:09.028371] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:09:59.521 [2024-12-11 09:48:09.028477] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:09:59.521 [2024-12-11 09:48:09.028478] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:10:00.455 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:00.455 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:10:00.455 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:00.455 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:00.455 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:00.455 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:00.455 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:00.455 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.455 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:00.455 [2024-12-11 09:48:09.802574] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:00.455 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.455 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:00.455 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.455 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:00.455 Malloc0 00:10:00.455 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.455 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:00.455 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.455 09:48:09 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:00.455 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.455 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:00.455 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.455 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:00.455 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.455 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:00.455 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.455 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:00.455 [2024-12-11 09:48:09.860928] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:00.455 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.455 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:00.455 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:00.455 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:00.455 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:00.455 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:00.455 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:00.455 { 00:10:00.455 "params": { 00:10:00.455 "name": "Nvme$subsystem", 00:10:00.455 "trtype": "$TEST_TRANSPORT", 00:10:00.455 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:00.455 "adrfam": "ipv4", 00:10:00.455 "trsvcid": "$NVMF_PORT", 00:10:00.455 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:00.455 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:00.455 "hdgst": ${hdgst:-false}, 00:10:00.455 "ddgst": ${ddgst:-false} 00:10:00.455 }, 00:10:00.455 "method": "bdev_nvme_attach_controller" 00:10:00.455 } 00:10:00.455 EOF 00:10:00.455 )") 00:10:00.455 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:00.455 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:10:00.455 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:00.455 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:00.455 "params": { 00:10:00.455 "name": "Nvme1", 00:10:00.455 "trtype": "tcp", 00:10:00.455 "traddr": "10.0.0.2", 00:10:00.455 "adrfam": "ipv4", 00:10:00.455 "trsvcid": "4420", 00:10:00.455 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:00.455 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:00.455 "hdgst": false, 00:10:00.455 "ddgst": false 00:10:00.455 }, 00:10:00.455 "method": "bdev_nvme_attach_controller" 00:10:00.455 }' 00:10:00.455 [2024-12-11 09:48:09.911446] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
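
The bdevio target is assembled with the same RPC recipe used throughout these tests: create the TCP transport, back a namespace with a malloc bdev, wrap it in a subsystem, and add a listener. Equivalent direct invocation, with the rpc.py path shortened to a variable:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192      # -u 8192: IO unit size
    $RPC bdev_malloc_create 64 512 -b Malloc0         # 64 MiB bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

On the initiator side, bdevio bypasses the kernel driver entirely: gen_nvmf_target_json feeds it the bdev_nvme_attach_controller parameters shown above as JSON on /dev/fd/62. A running target could be attached the same way over RPC; a sketch, using flag spellings rpc.py accepts:

    $RPC bdev_nvme_attach_controller -b Nvme1 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
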
00:10:00.455 [2024-12-11 09:48:09.911492] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4162439 ] 00:10:00.455 [2024-12-11 09:48:09.991310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:00.713 [2024-12-11 09:48:10.041257] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:00.713 [2024-12-11 09:48:10.041364] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.713 [2024-12-11 09:48:10.041365] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:00.713 I/O targets: 00:10:00.713 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:00.713 00:10:00.713 00:10:00.713 CUnit - A unit testing framework for C - Version 2.1-3 00:10:00.713 http://cunit.sourceforge.net/ 00:10:00.713 00:10:00.713 00:10:00.713 Suite: bdevio tests on: Nvme1n1 00:10:00.713 Test: blockdev write read block ...passed 00:10:00.971 Test: blockdev write zeroes read block ...passed 00:10:00.971 Test: blockdev write zeroes read no split ...passed 00:10:00.971 Test: blockdev write zeroes read split ...passed 00:10:00.971 Test: blockdev write zeroes read split partial ...passed 00:10:00.971 Test: blockdev reset ...[2024-12-11 09:48:10.315548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:00.971 [2024-12-11 09:48:10.315613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa469d0 (9): Bad file descriptor 00:10:00.971 [2024-12-11 09:48:10.418801] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:10:00.971 passed 00:10:00.971 Test: blockdev write read 8 blocks ...passed 00:10:00.971 Test: blockdev write read size > 128k ...passed 00:10:00.971 Test: blockdev write read invalid size ...passed 00:10:00.971 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:00.971 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:00.971 Test: blockdev write read max offset ...passed 00:10:01.228 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:01.228 Test: blockdev writev readv 8 blocks ...passed 00:10:01.228 Test: blockdev writev readv 30 x 1block ...passed 00:10:01.228 Test: blockdev writev readv block ...passed 00:10:01.228 Test: blockdev writev readv size > 128k ...passed 00:10:01.228 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:01.228 Test: blockdev comparev and writev ...[2024-12-11 09:48:10.628015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:01.228 [2024-12-11 09:48:10.628042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:01.228 [2024-12-11 09:48:10.628056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:01.228 [2024-12-11 09:48:10.628063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:01.228 [2024-12-11 09:48:10.628308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:01.228 [2024-12-11 09:48:10.628319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:01.228 [2024-12-11 09:48:10.628330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:01.228 [2024-12-11 09:48:10.628338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:01.228 [2024-12-11 09:48:10.628554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:01.228 [2024-12-11 09:48:10.628564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:01.228 [2024-12-11 09:48:10.628576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:01.228 [2024-12-11 09:48:10.628584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:01.228 [2024-12-11 09:48:10.628824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:01.228 [2024-12-11 09:48:10.628835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:01.229 [2024-12-11 09:48:10.628845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:01.229 [2024-12-11 09:48:10.628853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:01.229 passed 00:10:01.229 Test: blockdev nvme passthru rw ...passed 00:10:01.229 Test: blockdev nvme passthru vendor specific ...[2024-12-11 09:48:10.710677] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:01.229 [2024-12-11 09:48:10.710693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:01.229 [2024-12-11 09:48:10.710799] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:01.229 [2024-12-11 09:48:10.710809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:01.229 [2024-12-11 09:48:10.710906] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:01.229 [2024-12-11 09:48:10.710916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:01.229 [2024-12-11 09:48:10.711018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:01.229 [2024-12-11 09:48:10.711032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:01.229 passed 00:10:01.229 Test: blockdev nvme admin passthru ...passed 00:10:01.229 Test: blockdev copy ...passed 00:10:01.229 00:10:01.229 Run Summary: Type Total Ran Passed Failed Inactive 00:10:01.229 suites 1 1 n/a 0 0 00:10:01.229 tests 23 23 23 0 0 00:10:01.229 asserts 152 152 152 0 n/a 00:10:01.229 00:10:01.229 Elapsed time = 1.145 seconds 00:10:01.487 09:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:01.487 09:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.487 09:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:01.487 09:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.487 09:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:01.487 09:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:01.487 09:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:01.487 09:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:01.487 09:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:01.487 09:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:01.487 09:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:01.487 09:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:01.487 rmmod nvme_tcp 00:10:01.487 rmmod nvme_fabrics 00:10:01.487 rmmod nvme_keyring 00:10:01.487 09:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:01.487 09:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:01.487 09:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
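
nvmfcleanup's module unload above is deliberately tolerant: under set +e it retries up to 20 times, because nvme-tcp can stay pinned briefly while just-torn-down queues drain. The loop body's exact retry/sleep structure is squelched in this trace, so the sketch below is an assumption built around the commands that do appear (sync, the {1..20} loop, and the modprobe -r calls whose rmmod output is logged):

    sync
    set +e
    for i in {1..20}; do
        # Unload succeeds once nothing pins the modules; the rmmod lines
        # (nvme_tcp, nvme_fabrics, nvme_keyring) in the log come from -v.
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1                              # back-off is an assumption
    done
    set -e
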
00:10:01.487 09:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 4162022 ']' 00:10:01.487 09:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 4162022 00:10:01.487 09:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 4162022 ']' 00:10:01.487 09:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 4162022 00:10:01.487 09:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:01.487 09:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:01.487 09:48:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4162022 00:10:01.487 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:01.487 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:01.487 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4162022' 00:10:01.487 killing process with pid 4162022 00:10:01.487 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 4162022 00:10:01.487 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 4162022 00:10:01.746 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:01.746 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:01.746 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:01.746 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:01.746 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:01.746 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:01.746 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:01.746 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:01.746 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:01.746 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:01.746 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:01.746 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.282 09:48:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:04.282 00:10:04.282 real 0m11.483s 00:10:04.282 user 0m12.913s 00:10:04.282 sys 0m5.596s 00:10:04.282 09:48:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:04.282 09:48:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:04.282 ************************************ 00:10:04.282 END TEST nvmf_bdevio 00:10:04.282 ************************************ 00:10:04.282 09:48:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:04.282 00:10:04.282 real 4m50.846s 00:10:04.282 user 10m35.503s 00:10:04.282 sys 1m45.734s 
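Each test script in this log is launched through the same run_test harness (visible just below as run_test nvmf_target_extra ... --transport=tcp): it prints the START TEST / END TEST banners and produces the real/user/sys timing summaries like the one above. An illustrative reconstruction of that pattern, not the actual autotest_common.sh source, which also performs the '[' 3 -le 1 ']' argument check and xtrace control seen in the trace:

    # sketch of the run_test wrapper pattern; names and banner width are illustrative
    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                  # yields the real/user/sys lines seen above
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }

    run_test nvmf_target_extra ./test/nvmf/nvmf_target_extra.sh --transport=tcp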
00:10:04.282 09:48:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:04.282 09:48:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:04.282 ************************************ 00:10:04.282 END TEST nvmf_target_core 00:10:04.282 ************************************ 00:10:04.282 09:48:13 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:04.282 09:48:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:04.282 09:48:13 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.282 09:48:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:04.282 ************************************ 00:10:04.282 START TEST nvmf_target_extra 00:10:04.282 ************************************ 00:10:04.282 09:48:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:04.282 * Looking for test storage... 00:10:04.282 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:04.282 09:48:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:04.282 09:48:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:10:04.282 09:48:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:04.282 09:48:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:04.282 09:48:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:04.282 09:48:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:04.282 09:48:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:04.282 09:48:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:04.282 09:48:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:04.282 09:48:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:04.282 09:48:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:04.282 09:48:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:04.282 09:48:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:04.282 09:48:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:04.282 09:48:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:04.282 09:48:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:04.282 09:48:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:04.282 09:48:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:04.282 09:48:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:04.282 09:48:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:04.282 09:48:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:04.282 09:48:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:04.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.283 --rc genhtml_branch_coverage=1 00:10:04.283 --rc genhtml_function_coverage=1 00:10:04.283 --rc genhtml_legend=1 00:10:04.283 --rc geninfo_all_blocks=1 00:10:04.283 --rc geninfo_unexecuted_blocks=1 00:10:04.283 00:10:04.283 ' 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:04.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.283 --rc genhtml_branch_coverage=1 00:10:04.283 --rc genhtml_function_coverage=1 00:10:04.283 --rc genhtml_legend=1 00:10:04.283 --rc geninfo_all_blocks=1 00:10:04.283 --rc geninfo_unexecuted_blocks=1 00:10:04.283 00:10:04.283 ' 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:04.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.283 --rc genhtml_branch_coverage=1 00:10:04.283 --rc genhtml_function_coverage=1 00:10:04.283 --rc genhtml_legend=1 00:10:04.283 --rc geninfo_all_blocks=1 00:10:04.283 --rc geninfo_unexecuted_blocks=1 00:10:04.283 00:10:04.283 ' 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:04.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.283 --rc genhtml_branch_coverage=1 00:10:04.283 --rc genhtml_function_coverage=1 00:10:04.283 --rc genhtml_legend=1 00:10:04.283 --rc geninfo_all_blocks=1 00:10:04.283 --rc geninfo_unexecuted_blocks=1 00:10:04.283 00:10:04.283 ' 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:04.283 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:04.283 ************************************ 00:10:04.283 START TEST nvmf_example 00:10:04.283 ************************************ 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:04.283 * Looking for test storage... 
00:10:04.283 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:04.283 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:04.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.284 --rc genhtml_branch_coverage=1 00:10:04.284 --rc genhtml_function_coverage=1 00:10:04.284 --rc genhtml_legend=1 00:10:04.284 --rc geninfo_all_blocks=1 00:10:04.284 --rc geninfo_unexecuted_blocks=1 00:10:04.284 00:10:04.284 ' 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:04.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.284 --rc genhtml_branch_coverage=1 00:10:04.284 --rc genhtml_function_coverage=1 00:10:04.284 --rc genhtml_legend=1 00:10:04.284 --rc geninfo_all_blocks=1 00:10:04.284 --rc geninfo_unexecuted_blocks=1 00:10:04.284 00:10:04.284 ' 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:04.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.284 --rc genhtml_branch_coverage=1 00:10:04.284 --rc genhtml_function_coverage=1 00:10:04.284 --rc genhtml_legend=1 00:10:04.284 --rc geninfo_all_blocks=1 00:10:04.284 --rc geninfo_unexecuted_blocks=1 00:10:04.284 00:10:04.284 ' 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:04.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.284 --rc genhtml_branch_coverage=1 00:10:04.284 --rc genhtml_function_coverage=1 00:10:04.284 --rc genhtml_legend=1 00:10:04.284 --rc geninfo_all_blocks=1 00:10:04.284 --rc geninfo_unexecuted_blocks=1 00:10:04.284 00:10:04.284 ' 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:04.284 09:48:13 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:04.284 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:04.284 09:48:13 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:04.284 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.543 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:04.543 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:04.543 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:04.543 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:11.116 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:11.116 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:11.116 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:11.116 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:11.116 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:11.116 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:11.117 09:48:20 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:11.117 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:11.117 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:11.117 Found net devices under 0000:af:00.0: cvl_0_0 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:11.117 Found net devices under 0000:af:00.1: cvl_0_1 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:11.117 09:48:20 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:11.117 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:11.117 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:10:11.117 00:10:11.117 --- 10.0.0.2 ping statistics --- 00:10:11.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.117 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:11.117 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:11.117 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:10:11.117 00:10:11.117 --- 10.0.0.1 ping statistics --- 00:10:11.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.117 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:11.117 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:11.118 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:11.118 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:11.118 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:11.118 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:11.118 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:11.118 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:11.118 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=4166662 00:10:11.118 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:11.118 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:11.118 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 4166662 00:10:11.118 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 4166662 ']' 00:10:11.118 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.118 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:11.118 09:48:20 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.118 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:11.118 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:12.048 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:12.048 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:10:12.048 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:12.048 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:12.048 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:12.048 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:12.048 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.048 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:12.306 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.306 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:12.306 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.306 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:12.306 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.306 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:12.306 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:12.306 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.306 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:12.306 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.306 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:12.306 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:12.306 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.306 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:12.306 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.306 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:12.306 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:12.306 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:12.306 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.306 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:12.306 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:24.600 Initializing NVMe Controllers 00:10:24.600 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:24.600 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:24.600 Initialization complete. Launching workers. 00:10:24.600 ======================================================== 00:10:24.600 Latency(us) 00:10:24.600 Device Information : IOPS MiB/s Average min max 00:10:24.600 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18307.48 71.51 3495.50 683.96 16049.50 00:10:24.600 ======================================================== 00:10:24.600 Total : 18307.48 71.51 3495.50 683.96 16049.50 00:10:24.600 00:10:24.600 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:24.600 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:24.600 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:24.600 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:24.600 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:24.600 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:24.600 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:24.600 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:24.600 rmmod nvme_tcp 00:10:24.600 rmmod nvme_fabrics 00:10:24.600 rmmod nvme_keyring 00:10:24.600 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:24.600 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:10:24.600 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:24.600 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 4166662 ']' 00:10:24.600 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 4166662 00:10:24.600 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 4166662 ']' 00:10:24.600 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 4166662 00:10:24.600 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:10:24.600 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:24.600 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4166662 00:10:24.600 09:48:32 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:10:24.600 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:10:24.600 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4166662' 00:10:24.600 killing process with pid 4166662 00:10:24.600 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 4166662 00:10:24.600 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 4166662 00:10:24.600 nvmf threads initialize successfully 00:10:24.600 bdev subsystem init successfully 00:10:24.600 created a nvmf target service 00:10:24.600 create targets's poll groups done 00:10:24.600 all subsystems of target started 00:10:24.600 nvmf target is running 00:10:24.600 all subsystems of target stopped 00:10:24.600 destroy targets's poll groups done 00:10:24.600 destroyed the nvmf target service 00:10:24.600 bdev subsystem finish successfully 00:10:24.600 nvmf threads destroy successfully 00:10:24.600 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:24.600 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:24.600 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:24.600 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:24.600 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:10:24.600 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:24.600 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:10:24.600 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:24.600 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:24.600 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:24.600 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:24.600 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:24.859 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:24.859 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:24.859 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:24.859 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:24.859 00:10:24.859 real 0m20.749s 00:10:24.859 user 0m46.574s 00:10:24.859 sys 0m6.717s 00:10:24.859 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:24.859 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:24.859 ************************************ 00:10:24.859 END TEST nvmf_example 00:10:24.859 ************************************ 00:10:24.859 09:48:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:24.859 09:48:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:24.859 09:48:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:24.859 09:48:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:25.120 ************************************ 00:10:25.120 START TEST nvmf_filesystem 00:10:25.120 ************************************ 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:25.120 * Looking for test storage... 00:10:25.120 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:25.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.120 --rc genhtml_branch_coverage=1 00:10:25.120 --rc genhtml_function_coverage=1 00:10:25.120 --rc genhtml_legend=1 00:10:25.120 --rc geninfo_all_blocks=1 00:10:25.120 --rc geninfo_unexecuted_blocks=1 00:10:25.120 00:10:25.120 ' 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:25.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.120 --rc genhtml_branch_coverage=1 00:10:25.120 --rc genhtml_function_coverage=1 00:10:25.120 --rc genhtml_legend=1 00:10:25.120 --rc geninfo_all_blocks=1 00:10:25.120 --rc geninfo_unexecuted_blocks=1 00:10:25.120 00:10:25.120 ' 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:25.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.120 --rc genhtml_branch_coverage=1 00:10:25.120 --rc genhtml_function_coverage=1 00:10:25.120 --rc genhtml_legend=1 00:10:25.120 --rc geninfo_all_blocks=1 00:10:25.120 --rc geninfo_unexecuted_blocks=1 00:10:25.120 00:10:25.120 ' 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:25.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.120 --rc genhtml_branch_coverage=1 00:10:25.120 --rc genhtml_function_coverage=1 00:10:25.120 --rc genhtml_legend=1 00:10:25.120 --rc geninfo_all_blocks=1 00:10:25.120 --rc geninfo_unexecuted_blocks=1 00:10:25.120 00:10:25.120 ' 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:25.120 09:48:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:25.120 
09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:25.120 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:25.121 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:25.121 #define SPDK_CONFIG_H 00:10:25.121 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:25.121 #define SPDK_CONFIG_APPS 1 00:10:25.121 #define SPDK_CONFIG_ARCH native 00:10:25.121 #undef SPDK_CONFIG_ASAN 00:10:25.121 #undef SPDK_CONFIG_AVAHI 00:10:25.121 #undef SPDK_CONFIG_CET 00:10:25.121 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:25.121 #define SPDK_CONFIG_COVERAGE 1 00:10:25.121 #define SPDK_CONFIG_CROSS_PREFIX 00:10:25.121 #undef SPDK_CONFIG_CRYPTO 00:10:25.121 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:25.121 #undef SPDK_CONFIG_CUSTOMOCF 00:10:25.121 #undef SPDK_CONFIG_DAOS 00:10:25.121 #define SPDK_CONFIG_DAOS_DIR 00:10:25.121 #define SPDK_CONFIG_DEBUG 1 00:10:25.121 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:25.121 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:25.121 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:25.121 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:25.121 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:25.121 #undef SPDK_CONFIG_DPDK_UADK 00:10:25.121 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:25.121 #define SPDK_CONFIG_EXAMPLES 1 00:10:25.121 #undef SPDK_CONFIG_FC 00:10:25.121 #define SPDK_CONFIG_FC_PATH 00:10:25.121 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:25.121 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:25.121 #define SPDK_CONFIG_FSDEV 1 00:10:25.121 #undef SPDK_CONFIG_FUSE 00:10:25.121 #undef SPDK_CONFIG_FUZZER 00:10:25.121 #define SPDK_CONFIG_FUZZER_LIB 00:10:25.121 #undef SPDK_CONFIG_GOLANG 00:10:25.121 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:25.121 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:25.121 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:25.121 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:25.121 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:25.121 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:25.121 #undef SPDK_CONFIG_HAVE_LZ4 00:10:25.121 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:25.121 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:25.121 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:25.121 #define SPDK_CONFIG_IDXD 1 00:10:25.121 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:25.121 #undef SPDK_CONFIG_IPSEC_MB 00:10:25.121 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:25.121 #define SPDK_CONFIG_ISAL 1 00:10:25.121 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:25.121 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:25.121 #define SPDK_CONFIG_LIBDIR 00:10:25.121 #undef SPDK_CONFIG_LTO 00:10:25.121 #define SPDK_CONFIG_MAX_LCORES 128 00:10:25.121 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:25.121 #define SPDK_CONFIG_NVME_CUSE 1 00:10:25.121 #undef SPDK_CONFIG_OCF 00:10:25.121 #define SPDK_CONFIG_OCF_PATH 00:10:25.121 #define SPDK_CONFIG_OPENSSL_PATH 00:10:25.121 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:25.121 #define SPDK_CONFIG_PGO_DIR 00:10:25.121 #undef SPDK_CONFIG_PGO_USE 00:10:25.121 #define SPDK_CONFIG_PREFIX /usr/local 00:10:25.121 #undef SPDK_CONFIG_RAID5F 00:10:25.121 #undef SPDK_CONFIG_RBD 00:10:25.121 #define SPDK_CONFIG_RDMA 1 00:10:25.121 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:25.121 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:25.122 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:25.122 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:25.122 #define SPDK_CONFIG_SHARED 1 00:10:25.122 #undef SPDK_CONFIG_SMA 00:10:25.122 #define SPDK_CONFIG_TESTS 1 00:10:25.122 #undef SPDK_CONFIG_TSAN 
00:10:25.122 #define SPDK_CONFIG_UBLK 1 00:10:25.122 #define SPDK_CONFIG_UBSAN 1 00:10:25.122 #undef SPDK_CONFIG_UNIT_TESTS 00:10:25.122 #undef SPDK_CONFIG_URING 00:10:25.122 #define SPDK_CONFIG_URING_PATH 00:10:25.122 #undef SPDK_CONFIG_URING_ZNS 00:10:25.122 #undef SPDK_CONFIG_USDT 00:10:25.122 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:25.122 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:25.122 #define SPDK_CONFIG_VFIO_USER 1 00:10:25.122 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:25.122 #define SPDK_CONFIG_VHOST 1 00:10:25.122 #define SPDK_CONFIG_VIRTIO 1 00:10:25.122 #undef SPDK_CONFIG_VTUNE 00:10:25.122 #define SPDK_CONFIG_VTUNE_DIR 00:10:25.122 #define SPDK_CONFIG_WERROR 1 00:10:25.122 #define SPDK_CONFIG_WPDK_DIR 00:10:25.122 #undef SPDK_CONFIG_XNVME 00:10:25.122 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:25.122 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:25.122 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:25.122 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:25.122 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:25.122 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:25.122 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:25.122 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.122 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.122 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.122 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:25.122 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.122 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:25.122 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:25.122 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:25.122 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:25.122 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:25.383 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:25.383 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:25.383 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:25.383 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:25.383 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:25.383 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:25.383 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:25.383 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:25.383 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:25.383 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:25.383 09:48:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:25.383 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:25.383 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:25.383 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:25.383 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:25.383 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:25.383 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:25.383 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:25.383 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:25.383 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:25.383 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:25.383 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:25.383 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:25.383 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:25.383 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:25.383 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:25.383 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:25.383 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:25.383 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:25.383 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:25.383 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:25.383 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:25.383 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:25.384 09:48:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:25.384 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 4169009 ]] 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 4169009 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 
00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.vFfuPX 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:10:25.385 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.vFfuPX/tests/target /tmp/spdk.vFfuPX 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=93012496384 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=100837199872 00:10:25.386 09:48:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=7824703488 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=50408566784 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=50418597888 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=20144234496 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=20167442432 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23207936 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=50418159616 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=50418601984 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=442368 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=10083704832 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=10083717120 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:10:25.386 * Looking for test 
storage... 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=93012496384 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=10039296000 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:25.386 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:25.386 09:48:34 
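Every "09:48:34 nvmf_tcp... -- file@line -- #" prefix in this log is produced by the PS4 string set at autotest_common.sh@1702 together with set -x; errtrace and extdebug (set at @1698-@1699) let the ERR trap fire inside functions so a backtrace can be printed on failure. The pieces, pulled together (print_backtrace is an SPDK helper defined elsewhere in autotest_common.sh):

    set -o errtrace        # ERR trap fires inside functions/subshells too
    shopt -s extdebug      # gives the trap access to the call stack
    trap 'trap - ERR; print_backtrace >&2' ERR
    # \t = timestamp; the rest shows test domain, source file and line:
    export PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
    set -x                 # from here on, every command echoes that prefix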
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:25.386 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:25.387 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:25.387 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:25.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.387 --rc genhtml_branch_coverage=1 00:10:25.387 --rc genhtml_function_coverage=1 00:10:25.387 --rc genhtml_legend=1 00:10:25.387 --rc geninfo_all_blocks=1 00:10:25.387 --rc geninfo_unexecuted_blocks=1 00:10:25.387 00:10:25.387 ' 00:10:25.387 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:25.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.387 --rc genhtml_branch_coverage=1 00:10:25.387 --rc genhtml_function_coverage=1 00:10:25.387 --rc genhtml_legend=1 00:10:25.387 --rc geninfo_all_blocks=1 00:10:25.387 --rc geninfo_unexecuted_blocks=1 00:10:25.387 00:10:25.387 ' 00:10:25.387 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:25.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.387 --rc genhtml_branch_coverage=1 00:10:25.387 --rc genhtml_function_coverage=1 00:10:25.387 --rc genhtml_legend=1 00:10:25.387 --rc geninfo_all_blocks=1 00:10:25.387 --rc geninfo_unexecuted_blocks=1 00:10:25.387 00:10:25.387 ' 00:10:25.387 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:25.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.387 --rc genhtml_branch_coverage=1 00:10:25.387 --rc genhtml_function_coverage=1 00:10:25.387 --rc genhtml_legend=1 00:10:25.387 --rc geninfo_all_blocks=1 00:10:25.387 --rc geninfo_unexecuted_blocks=1 00:10:25.387 00:10:25.387 ' 00:10:25.387 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:25.387 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
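The lcov probe above is a classic pure-bash version comparison: split both version strings on '.', '-' and ':' and compare numerically field by field ("lt 1.15 2" is true, so the pre-2.x lcov option set is chosen). A self-contained sketch of the same idea, for numeric components only (condensed from the cmp_versions trace above, not the verbatim scripts/common.sh code):

    version_lt() {   # usage: version_lt 1.15 2  ->  exit 0 (1.15 < 2)
        local -a v1 v2
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1         # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "old lcov option set"   # prints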
-- nvmf/common.sh@7 -- # uname -s 00:10:25.387 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:25.387 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:25.387 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:25.387 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:25.387 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:25.387 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:25.387 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:25.387 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:25.387 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:25.387 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:25.387 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:10:25.387 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:10:25.387 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:25.387 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:25.387 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:25.387 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:25.387 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:25.387 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:25.387 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:25.387 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:25.387 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:25.387 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.387 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.387 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.387 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:25.387 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.387 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:25.387 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:25.387 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:25.387 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:25.387 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:25.387 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:25.387 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:25.387 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:25.387 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:25.387 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:25.387 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:25.387 09:48:34 
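The "[: : integer expression expected" line above is not log noise; it is a real bash error captured from nvmf/common.sh line 33, where an unset variable reaches a numeric test as an empty string ('[' '' -eq 1 ']'). The usual hardening is to give the variable a numeric default before the test; the flag name below is illustrative, not the one used in common.sh:

    # Failing shape, as captured in the log:   [ '' -eq 1 ]
    # Hardened shape: default the possibly-unset flag to 0 first.
    if [ "${SOME_NUMERIC_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi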
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:25.387 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:25.387 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:25.387 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:25.387 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:25.387 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:25.387 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:25.387 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:25.387 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:25.387 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:25.387 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:25.387 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:25.387 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:25.387 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:25.387 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:31.955 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:31.955 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:31.955 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:31.955 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:31.955 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:31.955 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:31.955 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:31.955 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:31.955 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:31.955 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:31.955 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:31.955 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:31.955 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:31.955 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:31.955 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:31.955 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:31.955 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:31.955 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:31.955 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:31.955 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:31.955 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:31.955 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:31.955 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:31.955 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:31.955 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:31.955 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:31.955 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:31.955 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:31.955 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:31.955 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:31.955 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:31.955 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:31.955 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:31.955 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:31.955 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:31.955 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:31.955 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:31.955 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:31.955 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:31.955 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:31.955 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:31.955 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:31.955 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:31.955 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:31.955 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:31.955 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:31.955 09:48:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:31.955 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:31.955 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:31.955 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:31.956 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:31.956 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:31.956 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:31.956 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:31.956 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:31.956 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:31.956 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:31.956 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:31.956 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:31.956 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:31.956 Found net devices under 0000:af:00.0: cvl_0_0 00:10:31.956 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:31.956 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:31.956 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:31.956 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:31.956 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:31.956 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:31.956 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:31.956 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:31.956 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:31.956 Found net devices under 0000:af:00.1: cvl_0_1 00:10:31.956 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:31.956 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:31.956 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:10:31.956 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:31.956 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:31.956 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:31.956 09:48:41 
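The "Found net devices under 0000:af:00.x" lines come from a sysfs walk: for every whitelisted PCI address, the bound netdev (if any) shows up as a directory under that device's net/ subtree, so a glob plus a basename strip maps PCI IDs to interface names such as cvl_0_0. Condensed from the loop traced above; the operstate read stands in for its "[[ up == up ]]" checks:

    for pci in "${pci_devs[@]}"; do                  # e.g. 0000:af:00.0
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        [[ -e ${pci_net_devs[0]} ]] || continue      # no netdev bound to this port
        for net_dev in "${pci_net_devs[@]}"; do
            [[ $(< "$net_dev/operstate") == up ]] || continue
            net_devs+=("${net_dev##*/}")             # keep basename, e.g. cvl_0_0
        done
    done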
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:31.956 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:31.956 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:31.956 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:31.956 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:31.956 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:31.956 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:31.956 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:31.956 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:31.956 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:31.956 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:31.956 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:31.956 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:31.956 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:31.956 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:32.215 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:32.215 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:32.215 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:32.215 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:32.215 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:32.215 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:32.215 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:32.215 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:32.215 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:32.215 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.304 ms 00:10:32.215 00:10:32.215 --- 10.0.0.2 ping statistics --- 00:10:32.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.215 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:10:32.215 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:32.215 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:32.215 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms
00:10:32.215
00:10:32.215 --- 10.0.0.1 ping statistics ---
00:10:32.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:32.215 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms
00:10:32.215 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:32.215 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0
00:10:32.215 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:10:32.215 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:32.215 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:10:32.215 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:10:32.215 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:32.215 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:10:32.215 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:10:32.215 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0
00:10:32.215 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:10:32.215 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:32.215 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:10:32.215 ************************************
00:10:32.215 START TEST nvmf_filesystem_no_in_capsule
00:10:32.215 ************************************
00:10:32.215 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0
00:10:32.215 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0
00:10:32.215 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF
00:10:32.215 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:10:32.215 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable
00:10:32.215 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:32.215 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=4172627
00:10:32.215 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 4172627
00:10:32.215 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:10:32.215 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 4172627 ']'
00:10:32.215
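The two ping checks above close out nvmf_tcp_init. What it did, in one place: the target port cvl_0_0 moved into its own network namespace while the initiator port cvl_0_1 stayed in the root namespace, so target and initiator traffic on this single host must cross the real link between the two E810 ports. Condensed from the commands traced above (the iptables rule is shown without the bookkeeping comment the harness adds):

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"              # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
    ping -c 1 10.0.0.2                           # root ns -> namespaced target
    ip netns exec "$NS" ping -c 1 10.0.0.1       # and back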
09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.215 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:32.215 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:32.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:32.215 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:32.216 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:32.473 [2024-12-11 09:48:41.822039] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:10:32.473 [2024-12-11 09:48:41.822080] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:32.473 [2024-12-11 09:48:41.904164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:32.473 [2024-12-11 09:48:41.942957] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:32.473 [2024-12-11 09:48:41.942993] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:32.473 [2024-12-11 09:48:41.943000] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:32.473 [2024-12-11 09:48:41.943006] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:32.473 [2024-12-11 09:48:41.943011] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
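nvmfappstart has now launched nvmf_tgt inside the namespace (pid 4172627), and waitforlisten polls until the app answers on /var/tmp/spdk.sock. A sketch of that shape, using rpc.py's rpc_get_methods as the readiness probe; the real helper's probe and retry timing may differ:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        for (( i = 0; i < max_retries; i++ )); do
            kill -0 "$pid" || return 1           # app died during startup
            if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0                         # socket is up and answering
            fi
            sleep 0.5
        done
        return 1
    }
    waitforlisten 4172627                         # as invoked above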
00:10:32.473 [2024-12-11 09:48:41.944628] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:32.473 [2024-12-11 09:48:41.944657] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:32.473 [2024-12-11 09:48:41.944768] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.473 [2024-12-11 09:48:41.944770] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:33.405 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:33.405 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:33.405 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:33.405 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:33.405 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:33.405 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:33.405 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:33.405 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:33.405 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.405 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:33.405 [2024-12-11 09:48:42.691710] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:33.405 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.405 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:33.405 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.405 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:33.405 Malloc1 00:10:33.405 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.405 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:33.405 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.405 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:33.405 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.405 09:48:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:33.405 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.405 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:33.405 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.405 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:33.405 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.405 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:33.405 [2024-12-11 09:48:42.849284] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:33.405 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.405 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:33.405 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:33.405 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:33.405 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:33.405 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:33.405 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:33.405 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.405 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:33.405 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.405 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:33.405 { 00:10:33.405 "name": "Malloc1", 00:10:33.405 "aliases": [ 00:10:33.405 "665888e4-380a-47d1-9aaa-67dfac52e92b" 00:10:33.405 ], 00:10:33.405 "product_name": "Malloc disk", 00:10:33.405 "block_size": 512, 00:10:33.405 "num_blocks": 1048576, 00:10:33.405 "uuid": "665888e4-380a-47d1-9aaa-67dfac52e92b", 00:10:33.405 "assigned_rate_limits": { 00:10:33.405 "rw_ios_per_sec": 0, 00:10:33.405 "rw_mbytes_per_sec": 0, 00:10:33.405 "r_mbytes_per_sec": 0, 00:10:33.405 "w_mbytes_per_sec": 0 00:10:33.405 }, 00:10:33.405 "claimed": true, 00:10:33.405 "claim_type": "exclusive_write", 00:10:33.405 "zoned": false, 00:10:33.405 "supported_io_types": { 00:10:33.405 "read": 
true, 00:10:33.405 "write": true, 00:10:33.405 "unmap": true, 00:10:33.405 "flush": true, 00:10:33.405 "reset": true, 00:10:33.405 "nvme_admin": false, 00:10:33.405 "nvme_io": false, 00:10:33.405 "nvme_io_md": false, 00:10:33.405 "write_zeroes": true, 00:10:33.405 "zcopy": true, 00:10:33.405 "get_zone_info": false, 00:10:33.405 "zone_management": false, 00:10:33.405 "zone_append": false, 00:10:33.405 "compare": false, 00:10:33.405 "compare_and_write": false, 00:10:33.405 "abort": true, 00:10:33.405 "seek_hole": false, 00:10:33.405 "seek_data": false, 00:10:33.405 "copy": true, 00:10:33.405 "nvme_iov_md": false 00:10:33.405 }, 00:10:33.405 "memory_domains": [ 00:10:33.405 { 00:10:33.405 "dma_device_id": "system", 00:10:33.405 "dma_device_type": 1 00:10:33.405 }, 00:10:33.405 { 00:10:33.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.405 "dma_device_type": 2 00:10:33.405 } 00:10:33.405 ], 00:10:33.405 "driver_specific": {} 00:10:33.405 } 00:10:33.405 ]' 00:10:33.405 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:33.405 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:33.405 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:33.405 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:33.405 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:33.405 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:33.405 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:33.405 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:34.777 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:34.777 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:34.777 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:34.777 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:34.777 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:36.679 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:36.679 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:36.679 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:10:36.679 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:36.679 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:36.679 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:36.679 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:36.679 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:36.679 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:36.679 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:36.679 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:36.679 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:36.679 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:36.679 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:36.679 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:36.679 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:36.679 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:36.936 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:37.869 09:48:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:38.801 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:38.801 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:38.801 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:38.801 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:38.801 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:38.801 ************************************ 00:10:38.801 START TEST filesystem_ext4 00:10:38.801 ************************************ 00:10:38.801 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
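The sequence traced between filesystem.sh@52 and @69, collected: create the TCP transport with in-capsule data disabled (-c 0, this is the "no_in_capsule" variant), back a subsystem with a 512 MiB malloc bdev, expose it on 10.0.0.2:4420, connect from the host side, wait for the serial to appear in lsblk, and carve one GPT partition. Every command and argument below appears in the trace; only the grouping is new (the harness routes the same RPC calls through its rpc_cmd wrapper rather than invoking rpc.py directly):

    rpc=scripts/rpc.py                            # talks to /var/tmp/spdk.sock
    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
    $rpc bdev_malloc_create 512 512 -b Malloc1    # 512 MiB, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Host side: connect, then wait for the device to surface.
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 \
        --hostid=801347e8-3fd0-e911-906e-0017a4403562
    until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 2; done
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe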
00:10:38.801 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:38.801 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:38.801 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:38.801 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:38.801 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:38.801 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:38.801 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:38.801 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:38.801 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:38.801 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:38.801 mke2fs 1.47.0 (5-Feb-2023) 00:10:38.801 Discarding device blocks: 0/522240 done 00:10:38.801 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:38.801 Filesystem UUID: 056cf5de-5ac3-4f74-a5a6-33f1c4b9fadd 00:10:38.801 Superblock backups stored on blocks: 00:10:38.801 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:38.801 00:10:38.801 Allocating group tables: 0/64 done 00:10:38.801 Writing inode tables: 0/64 done 00:10:39.058 Creating journal (8192 blocks): done 00:10:39.059 Writing superblocks and filesystem accounting information: 0/64 done 00:10:39.059 00:10:39.059 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:39.059 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:44.313 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:44.313 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:44.313 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:44.313 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:44.313 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:44.313 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:44.313 
09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 4172627 00:10:44.313 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:44.313 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:44.570 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:44.570 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:44.570 00:10:44.570 real 0m5.627s 00:10:44.570 user 0m0.024s 00:10:44.570 sys 0m0.074s 00:10:44.570 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:44.570 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:44.570 ************************************ 00:10:44.570 END TEST filesystem_ext4 00:10:44.570 ************************************ 00:10:44.570 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:44.570 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:44.570 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:44.570 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:44.570 ************************************ 00:10:44.570 START TEST filesystem_btrfs 00:10:44.570 ************************************ 00:10:44.570 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:44.570 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:44.570 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:44.570 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:44.570 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:44.570 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:44.570 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:44.570 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:44.570 09:48:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:44.570 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:44.570 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:44.826 btrfs-progs v6.8.1 00:10:44.826 See https://btrfs.readthedocs.io for more information. 00:10:44.826 00:10:44.826 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:44.826 NOTE: several default settings have changed in version 5.15, please make sure 00:10:44.826 this does not affect your deployments: 00:10:44.826 - DUP for metadata (-m dup) 00:10:44.826 - enabled no-holes (-O no-holes) 00:10:44.826 - enabled free-space-tree (-R free-space-tree) 00:10:44.826 00:10:44.826 Label: (null) 00:10:44.826 UUID: c06c768c-37bb-4ea5-b0f5-909396547e5f 00:10:44.826 Node size: 16384 00:10:44.826 Sector size: 4096 (CPU page size: 4096) 00:10:44.826 Filesystem size: 510.00MiB 00:10:44.826 Block group profiles: 00:10:44.826 Data: single 8.00MiB 00:10:44.826 Metadata: DUP 32.00MiB 00:10:44.826 System: DUP 8.00MiB 00:10:44.826 SSD detected: yes 00:10:44.826 Zoned device: no 00:10:44.826 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:44.826 Checksum: crc32c 00:10:44.826 Number of devices: 1 00:10:44.826 Devices: 00:10:44.826 ID SIZE PATH 00:10:44.826 1 510.00MiB /dev/nvme0n1p1 00:10:44.826 00:10:44.826 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:44.826 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:45.390 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:45.390 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:45.390 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:45.390 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:45.390 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:45.390 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:45.655 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 4172627 00:10:45.655 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:45.655 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:45.655 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:45.655 
09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:45.655 00:10:45.655 real 0m1.031s 00:10:45.655 user 0m0.026s 00:10:45.655 sys 0m0.112s 00:10:45.655 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:45.655 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:45.655 ************************************ 00:10:45.655 END TEST filesystem_btrfs 00:10:45.655 ************************************ 00:10:45.655 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:45.655 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:45.655 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:45.655 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:45.655 ************************************ 00:10:45.655 START TEST filesystem_xfs 00:10:45.655 ************************************ 00:10:45.655 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:45.655 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:45.655 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:45.655 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:45.655 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:45.655 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:45.655 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:45.655 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:10:45.655 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:45.655 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:45.655 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:45.655 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:45.655 = sectsz=512 attr=2, projid32bit=1 00:10:45.655 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:45.655 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:45.655 data 
= bsize=4096 blocks=130560, imaxpct=25 00:10:45.655 = sunit=0 swidth=0 blks 00:10:45.655 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:45.655 log =internal log bsize=4096 blocks=16384, version=2 00:10:45.655 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:45.655 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:47.022 Discarding blocks...Done. 00:10:47.022 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:47.022 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:49.542 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:49.542 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:49.542 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:49.542 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:49.542 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:49.542 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:49.542 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 4172627 00:10:49.542 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:49.542 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:49.542 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:49.542 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:49.542 00:10:49.542 real 0m3.711s 00:10:49.542 user 0m0.032s 00:10:49.542 sys 0m0.067s 00:10:49.542 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:49.542 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:49.542 ************************************ 00:10:49.542 END TEST filesystem_xfs 00:10:49.542 ************************************ 00:10:49.542 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:49.542 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:49.542 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:49.542 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.542 09:48:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:49.542 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:49.542 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:49.542 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:49.543 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:49.543 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:49.543 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:49.543 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:49.543 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.543 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:49.543 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.543 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:49.543 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 4172627 00:10:49.543 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 4172627 ']' 00:10:49.543 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 4172627 00:10:49.543 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:49.543 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:49.543 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4172627 00:10:49.543 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:49.543 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:49.543 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4172627' 00:10:49.543 killing process with pid 4172627 00:10:49.543 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 4172627 00:10:49.543 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 4172627 00:10:50.108 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:50.108 00:10:50.108 real 0m17.668s 00:10:50.108 user 1m9.696s 00:10:50.108 sys 0m1.420s 00:10:50.108 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:50.108 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:50.108 ************************************ 00:10:50.108 END TEST nvmf_filesystem_no_in_capsule 00:10:50.108 ************************************ 00:10:50.108 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:50.108 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:50.108 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:50.108 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:50.108 ************************************ 00:10:50.108 START TEST nvmf_filesystem_in_capsule 00:10:50.108 ************************************ 00:10:50.108 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:10:50.108 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:50.108 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:50.108 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:50.108 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:50.108 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:50.108 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=4175616 00:10:50.108 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 4175616 00:10:50.108 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:50.108 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 4175616 ']' 00:10:50.108 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.108 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:50.108 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
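This second pass exists to cover the other capsule mode: the transport is about to be created with -c 4096, so host writes of up to 4 KiB travel inside the NVMe/TCP command capsule instead of being pulled by the target in a separate data transfer, a path the first pass (in_capsule 0) never exercised. The bring-up the following stretch of trace performs, condensed (the rpc.py readiness poll is only an approximation of waitforlisten, and the relative paths are assumptions; the flags, names, and addresses are the ones logged):

    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096   # 4096 B in-capsule data
    scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1             # 512 MiB, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420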
00:10:50.108 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:50.108 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:50.108 [2024-12-11 09:48:59.560807] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:10:50.108 [2024-12-11 09:48:59.560847] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:50.108 [2024-12-11 09:48:59.647511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:50.365 [2024-12-11 09:48:59.688405] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:50.365 [2024-12-11 09:48:59.688443] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:50.365 [2024-12-11 09:48:59.688450] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:50.365 [2024-12-11 09:48:59.688457] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:50.365 [2024-12-11 09:48:59.688462] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:50.365 [2024-12-11 09:48:59.689979] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:50.365 [2024-12-11 09:48:59.690087] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:50.365 [2024-12-11 09:48:59.690196] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.365 [2024-12-11 09:48:59.690197] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:50.929 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:50.929 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:50.929 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:50.929 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:50.929 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:50.929 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:50.929 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:50.929 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:50.929 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.929 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:50.929 [2024-12-11 09:49:00.445705] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:50.929 09:49:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.929 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:50.929 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.929 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:51.186 Malloc1 00:10:51.186 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.186 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:51.186 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.186 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:51.186 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.186 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:51.186 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.186 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:51.186 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.186 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:51.186 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.186 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:51.186 [2024-12-11 09:49:00.605390] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:51.186 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.186 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:51.186 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:51.186 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:51.186 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:51.186 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:51.186 09:49:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:51.186 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.186 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:51.186 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.186 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:51.186 { 00:10:51.186 "name": "Malloc1", 00:10:51.186 "aliases": [ 00:10:51.186 "6f869ab0-27ce-4dfc-86ed-1a766367b2d8" 00:10:51.186 ], 00:10:51.186 "product_name": "Malloc disk", 00:10:51.186 "block_size": 512, 00:10:51.186 "num_blocks": 1048576, 00:10:51.186 "uuid": "6f869ab0-27ce-4dfc-86ed-1a766367b2d8", 00:10:51.186 "assigned_rate_limits": { 00:10:51.186 "rw_ios_per_sec": 0, 00:10:51.186 "rw_mbytes_per_sec": 0, 00:10:51.186 "r_mbytes_per_sec": 0, 00:10:51.186 "w_mbytes_per_sec": 0 00:10:51.186 }, 00:10:51.186 "claimed": true, 00:10:51.186 "claim_type": "exclusive_write", 00:10:51.186 "zoned": false, 00:10:51.186 "supported_io_types": { 00:10:51.186 "read": true, 00:10:51.186 "write": true, 00:10:51.186 "unmap": true, 00:10:51.186 "flush": true, 00:10:51.186 "reset": true, 00:10:51.186 "nvme_admin": false, 00:10:51.186 "nvme_io": false, 00:10:51.186 "nvme_io_md": false, 00:10:51.186 "write_zeroes": true, 00:10:51.186 "zcopy": true, 00:10:51.186 "get_zone_info": false, 00:10:51.186 "zone_management": false, 00:10:51.186 "zone_append": false, 00:10:51.186 "compare": false, 00:10:51.186 "compare_and_write": false, 00:10:51.186 "abort": true, 00:10:51.186 "seek_hole": false, 00:10:51.186 "seek_data": false, 00:10:51.186 "copy": true, 00:10:51.186 "nvme_iov_md": false 00:10:51.186 }, 00:10:51.186 "memory_domains": [ 00:10:51.186 { 00:10:51.186 "dma_device_id": "system", 00:10:51.186 "dma_device_type": 1 00:10:51.186 }, 00:10:51.186 { 00:10:51.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.186 "dma_device_type": 2 00:10:51.186 } 00:10:51.186 ], 00:10:51.186 "driver_specific": {} 00:10:51.186 } 00:10:51.186 ]' 00:10:51.186 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:51.186 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:51.186 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:51.186 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:51.186 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:51.186 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:51.186 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:51.186 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:52.555 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:52.555 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:52.555 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:52.555 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:52.555 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:54.448 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:54.448 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:54.448 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:54.448 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:54.448 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:54.448 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:54.448 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:54.448 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:54.448 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:54.448 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:54.448 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:54.448 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:54.448 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:54.448 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:54.448 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:54.448 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:54.448 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:54.704 09:49:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:55.267 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:56.197 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:56.197 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:56.197 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:56.197 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:56.197 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.197 ************************************ 00:10:56.197 START TEST filesystem_in_capsule_ext4 00:10:56.197 ************************************ 00:10:56.197 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:56.197 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:56.197 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:56.197 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:56.197 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:56.197 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:56.197 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:56.197 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:56.197 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:56.197 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:56.197 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:56.197 mke2fs 1.47.0 (5-Feb-2023) 00:10:56.454 Discarding device blocks: 0/522240 done 00:10:56.454 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:56.454 Filesystem UUID: 54b4f45a-ed70-4c8a-b401-9f60aa2fe623 00:10:56.454 Superblock backups stored on blocks: 00:10:56.454 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:56.454 00:10:56.454 Allocating group tables: 0/64 done 00:10:56.454 Writing inode tables: 
0/64 done 00:10:56.454 Creating journal (8192 blocks): done 00:10:58.752 Writing superblocks and filesystem accounting information: 0/6450/64 done 00:10:58.752 00:10:58.752 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:58.752 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:04.010 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:04.010 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:04.010 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:04.010 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:04.010 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:04.010 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:04.010 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 4175616 00:11:04.010 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:04.010 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:04.010 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:04.010 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:04.010 00:11:04.010 real 0m7.818s 00:11:04.010 user 0m0.027s 00:11:04.010 sys 0m0.076s 00:11:04.010 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:04.010 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:04.010 ************************************ 00:11:04.010 END TEST filesystem_in_capsule_ext4 00:11:04.010 ************************************ 00:11:04.010 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:04.010 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:04.010 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:04.010 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:04.010 
************************************ 00:11:04.010 START TEST filesystem_in_capsule_btrfs 00:11:04.010 ************************************ 00:11:04.010 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:04.010 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:04.010 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:04.010 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:04.010 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:04.010 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:04.010 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:04.010 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:04.010 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:04.010 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:04.010 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:04.268 btrfs-progs v6.8.1 00:11:04.268 See https://btrfs.readthedocs.io for more information. 00:11:04.268 00:11:04.268 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:04.268 NOTE: several default settings have changed in version 5.15, please make sure 00:11:04.268 this does not affect your deployments: 00:11:04.268 - DUP for metadata (-m dup) 00:11:04.268 - enabled no-holes (-O no-holes) 00:11:04.268 - enabled free-space-tree (-R free-space-tree) 00:11:04.268 00:11:04.268 Label: (null) 00:11:04.268 UUID: db7cce0c-4a65-4d00-bd9e-2aaacb9d9293 00:11:04.268 Node size: 16384 00:11:04.268 Sector size: 4096 (CPU page size: 4096) 00:11:04.268 Filesystem size: 510.00MiB 00:11:04.268 Block group profiles: 00:11:04.268 Data: single 8.00MiB 00:11:04.268 Metadata: DUP 32.00MiB 00:11:04.268 System: DUP 8.00MiB 00:11:04.268 SSD detected: yes 00:11:04.268 Zoned device: no 00:11:04.268 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:04.268 Checksum: crc32c 00:11:04.268 Number of devices: 1 00:11:04.268 Devices: 00:11:04.268 ID SIZE PATH 00:11:04.268 1 510.00MiB /dev/nvme0n1p1 00:11:04.268 00:11:04.268 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:04.268 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:04.834 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:04.834 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:04.834 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:04.834 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:04.834 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:04.834 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:04.834 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 4175616 00:11:04.834 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:04.834 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:04.834 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:04.834 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:04.834 00:11:04.834 real 0m0.619s 00:11:04.834 user 0m0.028s 00:11:04.834 sys 0m0.114s 00:11:04.834 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:04.834 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x
00:11:04.834 ************************************
00:11:04.834 END TEST filesystem_in_capsule_btrfs
00:11:04.834 ************************************
09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1
09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable
09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:04.834 ************************************
00:11:04.834 START TEST filesystem_in_capsule_xfs
00:11:04.834 ************************************
09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1
09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs
09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1
09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs
09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1
09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0
09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force
09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']'
09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f
09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1
meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks
00:11:04.834 = sectsz=512 attr=2, projid32bit=1
00:11:04.834 = crc=1 finobt=1, sparse=1, rmapbt=0
00:11:04.834 = reflink=1 bigtime=1 inobtcount=1 nrext64=0
00:11:04.834 data = bsize=4096 blocks=130560, imaxpct=25
00:11:04.834 = sunit=0 swidth=0 blks
00:11:04.834 naming =version 2 bsize=4096 ascii-ci=0, ftype=1
00:11:04.834 log =internal log bsize=4096 blocks=16384, version=2
00:11:04.834 = sectsz=512 sunit=0 blks, lazy-count=1
00:11:04.834 realtime =none extsz=4096 blocks=0, rtextents=0
00:11:05.765 Discarding blocks...Done.
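From here the xfs case repeats the same body already run for ext4 and btrfs: write a file through the freshly made filesystem, sync it, delete it, unmount, and verify that both the target process and the block devices survived. Stripped of the xtrace plumbing, the commands traced below amount to:

    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"                       # target (pid 4175616 here) must still be alive
    lsblk -l -o NAME | grep -q -w nvme0n1    # namespace still exposed
    lsblk -l -o NAME | grep -q -w nvme0n1p1  # partition table intact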
00:11:05.765 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:05.765 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:08.289 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:08.289 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:08.289 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:08.289 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:08.289 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:08.289 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:08.289 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 4175616 00:11:08.289 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:08.289 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:08.289 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:08.289 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:08.289 00:11:08.289 real 0m3.342s 00:11:08.289 user 0m0.020s 00:11:08.289 sys 0m0.082s 00:11:08.289 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:08.289 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:08.289 ************************************ 00:11:08.289 END TEST filesystem_in_capsule_xfs 00:11:08.289 ************************************ 00:11:08.289 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:08.289 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:08.289 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:08.289 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.289 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:08.289 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:11:08.289 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:08.289 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:08.547 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:08.547 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:08.547 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:08.547 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:08.547 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.547 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:08.547 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.547 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:08.547 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 4175616 00:11:08.547 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 4175616 ']' 00:11:08.547 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 4175616 00:11:08.547 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:08.547 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:08.547 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4175616 00:11:08.547 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:08.547 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:08.547 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4175616' 00:11:08.547 killing process with pid 4175616 00:11:08.547 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 4175616 00:11:08.547 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 4175616 00:11:08.806 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:08.806 00:11:08.806 real 0m18.758s 00:11:08.806 user 1m13.970s 00:11:08.806 sys 0m1.508s 00:11:08.806 09:49:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:08.806 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:08.806 ************************************ 00:11:08.806 END TEST nvmf_filesystem_in_capsule 00:11:08.806 ************************************ 00:11:08.806 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:08.806 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:08.806 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:08.806 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:08.806 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:08.806 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:08.806 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:08.806 rmmod nvme_tcp 00:11:08.806 rmmod nvme_fabrics 00:11:08.806 rmmod nvme_keyring 00:11:08.806 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:08.806 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:08.806 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:08.806 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:08.806 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:08.806 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:08.806 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:08.806 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:08.806 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:08.806 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:08.806 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:08.806 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:08.806 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:08.806 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.806 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:08.806 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:11.342 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:11.342 00:11:11.342 real 0m45.975s 00:11:11.342 user 2m25.907s 00:11:11.342 sys 0m8.288s 00:11:11.342 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:11.342 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:11.342 
************************************ 00:11:11.342 END TEST nvmf_filesystem 00:11:11.342 ************************************ 00:11:11.342 09:49:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:11.343 ************************************ 00:11:11.343 START TEST nvmf_target_discovery 00:11:11.343 ************************************ 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:11.343 * Looking for test storage... 00:11:11.343 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:11.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.343 --rc genhtml_branch_coverage=1 00:11:11.343 --rc genhtml_function_coverage=1 00:11:11.343 --rc genhtml_legend=1 00:11:11.343 --rc geninfo_all_blocks=1 00:11:11.343 --rc geninfo_unexecuted_blocks=1 00:11:11.343 00:11:11.343 ' 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:11.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.343 --rc genhtml_branch_coverage=1 00:11:11.343 --rc genhtml_function_coverage=1 00:11:11.343 --rc genhtml_legend=1 00:11:11.343 --rc geninfo_all_blocks=1 00:11:11.343 --rc geninfo_unexecuted_blocks=1 00:11:11.343 00:11:11.343 ' 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:11.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.343 --rc genhtml_branch_coverage=1 00:11:11.343 --rc genhtml_function_coverage=1 00:11:11.343 --rc genhtml_legend=1 00:11:11.343 --rc geninfo_all_blocks=1 00:11:11.343 --rc geninfo_unexecuted_blocks=1 00:11:11.343 00:11:11.343 ' 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:11.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.343 --rc genhtml_branch_coverage=1 00:11:11.343 --rc genhtml_function_coverage=1 00:11:11.343 --rc genhtml_legend=1 00:11:11.343 --rc geninfo_all_blocks=1 00:11:11.343 --rc geninfo_unexecuted_blocks=1 00:11:11.343 00:11:11.343 ' 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:11.343 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:11.344 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:11.344 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:11.344 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:11.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:11.344 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:11.344 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:11.344 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:11.344 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:11.344 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:11.344 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:11.344 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:11.344 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:11.344 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:11.344 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:11.344 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:11.344 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:11.344 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:11.344 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:11.344 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:11.344 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:11.344 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:11.344 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:11.344 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:11.344 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:17.913 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:17.913 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:17.913 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:17.913 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:17.913 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:17.913 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:17.913 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:17.913 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:17.913 09:49:27 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:17.913 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:17.913 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:17.913 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:17.913 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:17.913 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:17.913 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:17.913 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:17.913 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:17.913 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:17.913 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:17.913 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:17.913 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:17.913 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:17.914 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:17.914 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:17.914 Found net devices under 0000:af:00.0: cvl_0_0 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
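[Annotation] Between the two "Found net devices under ..." messages, the trace maps each matched e810 PCI function to its kernel net device by globbing sysfs and stripping the path prefix. A self-contained sketch of that step, with the function wrapper and its name added for illustration:

pci_to_netdevs() {
    local pci=$1                                            # e.g. 0000:af:00.0
    local pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # glob sysfs, as in the trace
    pci_net_devs=("${pci_net_devs[@]##*/}")                 # keep interface names only
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
}
pci_to_netdevs 0000:af:00.0   # prints cvl_0_0 on this machine, per the log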
00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:17.914 Found net devices under 0000:af:00.1: cvl_0_1 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:17.914 09:49:27 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:17.914 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:17.914 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.335 ms 00:11:17.914 00:11:17.914 --- 10.0.0.2 ping statistics --- 00:11:17.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:17.914 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:11:17.914 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:18.172 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:18.172 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:11:18.172 00:11:18.172 --- 10.0.0.1 ping statistics --- 00:11:18.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.172 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:11:18.172 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:18.172 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:11:18.172 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:18.172 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:18.172 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:18.172 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:18.172 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:18.172 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:18.172 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:18.172 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:18.172 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:18.172 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:18.172 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.172 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=4182805 00:11:18.172 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 4182805 00:11:18.172 09:49:27 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:18.172 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 4182805 ']' 00:11:18.172 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:18.172 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:18.172 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:18.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:18.172 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:18.172 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.172 [2024-12-11 09:49:27.581928] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:11:18.172 [2024-12-11 09:49:27.581971] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:18.172 [2024-12-11 09:49:27.666232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:18.172 [2024-12-11 09:49:27.707180] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:18.172 [2024-12-11 09:49:27.707215] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:18.172 [2024-12-11 09:49:27.707227] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:18.172 [2024-12-11 09:49:27.707233] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:18.172 [2024-12-11 09:49:27.707238] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
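[Annotation] nvmfappstart above launches nvmf_tgt inside the target network namespace and then blocks in waitforlisten until the RPC socket answers. A minimal sketch of that launch-and-wait pattern; the binary path, flags, and namespace come from the trace, while the polling loop below merely stands in for waitforlisten and its bounds are assumptions.

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# poll the default RPC socket until the target responds; 100 x 0.1s is an assumed bound
for _ in $(seq 1 100); do
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done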
00:11:18.172 [2024-12-11 09:49:27.708640] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:18.172 [2024-12-11 09:49:27.708746] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:18.173 [2024-12-11 09:49:27.708856] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.173 [2024-12-11 09:49:27.708858] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.109 [2024-12-11 09:49:28.473209] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.109 Null1 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.109 09:49:28 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.109 [2024-12-11 09:49:28.536338] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.109 Null2 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:11:19.109 Null3 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.109 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.109 Null4 00:11:19.110 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.110 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:19.110 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.110 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.110 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.110 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:19.110 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.110 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.110 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.110 09:49:28 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:19.110 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.110 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.110 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.110 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:19.110 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.110 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.110 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.110 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:19.110 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.110 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.110 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.110 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:11:19.418 00:11:19.418 Discovery Log Number of Records 6, Generation counter 6 00:11:19.418 =====Discovery Log Entry 0====== 00:11:19.418 trtype: tcp 00:11:19.418 adrfam: ipv4 00:11:19.418 subtype: current discovery subsystem 00:11:19.418 treq: not required 00:11:19.418 portid: 0 00:11:19.418 trsvcid: 4420 00:11:19.418 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:19.418 traddr: 10.0.0.2 00:11:19.418 eflags: explicit discovery connections, duplicate discovery information 00:11:19.418 sectype: none 00:11:19.418 =====Discovery Log Entry 1====== 00:11:19.418 trtype: tcp 00:11:19.418 adrfam: ipv4 00:11:19.418 subtype: nvme subsystem 00:11:19.418 treq: not required 00:11:19.418 portid: 0 00:11:19.418 trsvcid: 4420 00:11:19.418 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:19.418 traddr: 10.0.0.2 00:11:19.418 eflags: none 00:11:19.418 sectype: none 00:11:19.418 =====Discovery Log Entry 2====== 00:11:19.418 trtype: tcp 00:11:19.418 adrfam: ipv4 00:11:19.418 subtype: nvme subsystem 00:11:19.418 treq: not required 00:11:19.418 portid: 0 00:11:19.418 trsvcid: 4420 00:11:19.418 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:19.418 traddr: 10.0.0.2 00:11:19.418 eflags: none 00:11:19.418 sectype: none 00:11:19.418 =====Discovery Log Entry 3====== 00:11:19.418 trtype: tcp 00:11:19.418 adrfam: ipv4 00:11:19.418 subtype: nvme subsystem 00:11:19.418 treq: not required 00:11:19.418 portid: 0 00:11:19.418 trsvcid: 4420 00:11:19.418 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:19.418 traddr: 10.0.0.2 00:11:19.418 eflags: none 00:11:19.418 sectype: none 00:11:19.418 =====Discovery Log Entry 4====== 00:11:19.418 trtype: tcp 00:11:19.418 adrfam: ipv4 00:11:19.418 subtype: nvme subsystem 
00:11:19.418 treq: not required 00:11:19.418 portid: 0 00:11:19.418 trsvcid: 4420 00:11:19.418 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:19.418 traddr: 10.0.0.2 00:11:19.418 eflags: none 00:11:19.418 sectype: none 00:11:19.418 =====Discovery Log Entry 5====== 00:11:19.418 trtype: tcp 00:11:19.418 adrfam: ipv4 00:11:19.418 subtype: discovery subsystem referral 00:11:19.418 treq: not required 00:11:19.418 portid: 0 00:11:19.418 trsvcid: 4430 00:11:19.418 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:19.418 traddr: 10.0.0.2 00:11:19.418 eflags: none 00:11:19.418 sectype: none 00:11:19.418 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:19.418 Perform nvmf subsystem discovery via RPC 00:11:19.418 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:19.418 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.418 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.418 [ 00:11:19.418 { 00:11:19.418 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:19.418 "subtype": "Discovery", 00:11:19.418 "listen_addresses": [ 00:11:19.418 { 00:11:19.418 "trtype": "TCP", 00:11:19.418 "adrfam": "IPv4", 00:11:19.418 "traddr": "10.0.0.2", 00:11:19.418 "trsvcid": "4420" 00:11:19.418 } 00:11:19.418 ], 00:11:19.418 "allow_any_host": true, 00:11:19.418 "hosts": [] 00:11:19.418 }, 00:11:19.418 { 00:11:19.418 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:19.418 "subtype": "NVMe", 00:11:19.418 "listen_addresses": [ 00:11:19.418 { 00:11:19.418 "trtype": "TCP", 00:11:19.418 "adrfam": "IPv4", 00:11:19.418 "traddr": "10.0.0.2", 00:11:19.418 "trsvcid": "4420" 00:11:19.418 } 00:11:19.418 ], 00:11:19.419 "allow_any_host": true, 00:11:19.419 "hosts": [], 00:11:19.419 "serial_number": "SPDK00000000000001", 00:11:19.419 "model_number": "SPDK bdev Controller", 00:11:19.419 "max_namespaces": 32, 00:11:19.419 "min_cntlid": 1, 00:11:19.419 "max_cntlid": 65519, 00:11:19.419 "namespaces": [ 00:11:19.419 { 00:11:19.419 "nsid": 1, 00:11:19.419 "bdev_name": "Null1", 00:11:19.419 "name": "Null1", 00:11:19.419 "nguid": "8842288F2FAD4409B4315DBF5C864F38", 00:11:19.419 "uuid": "8842288f-2fad-4409-b431-5dbf5c864f38" 00:11:19.419 } 00:11:19.419 ] 00:11:19.419 }, 00:11:19.419 { 00:11:19.419 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:19.419 "subtype": "NVMe", 00:11:19.419 "listen_addresses": [ 00:11:19.419 { 00:11:19.419 "trtype": "TCP", 00:11:19.419 "adrfam": "IPv4", 00:11:19.419 "traddr": "10.0.0.2", 00:11:19.419 "trsvcid": "4420" 00:11:19.419 } 00:11:19.419 ], 00:11:19.419 "allow_any_host": true, 00:11:19.419 "hosts": [], 00:11:19.419 "serial_number": "SPDK00000000000002", 00:11:19.419 "model_number": "SPDK bdev Controller", 00:11:19.419 "max_namespaces": 32, 00:11:19.419 "min_cntlid": 1, 00:11:19.419 "max_cntlid": 65519, 00:11:19.419 "namespaces": [ 00:11:19.419 { 00:11:19.419 "nsid": 1, 00:11:19.419 "bdev_name": "Null2", 00:11:19.419 "name": "Null2", 00:11:19.419 "nguid": "6DBC05C20DC24C5E8BA469FB07D66DE9", 00:11:19.419 "uuid": "6dbc05c2-0dc2-4c5e-8ba4-69fb07d66de9" 00:11:19.419 } 00:11:19.419 ] 00:11:19.419 }, 00:11:19.419 { 00:11:19.419 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:19.419 "subtype": "NVMe", 00:11:19.419 "listen_addresses": [ 00:11:19.419 { 00:11:19.419 "trtype": "TCP", 00:11:19.419 "adrfam": "IPv4", 00:11:19.419 "traddr": "10.0.0.2", 
00:11:19.419 "trsvcid": "4420" 00:11:19.419 } 00:11:19.419 ], 00:11:19.419 "allow_any_host": true, 00:11:19.419 "hosts": [], 00:11:19.419 "serial_number": "SPDK00000000000003", 00:11:19.419 "model_number": "SPDK bdev Controller", 00:11:19.419 "max_namespaces": 32, 00:11:19.419 "min_cntlid": 1, 00:11:19.419 "max_cntlid": 65519, 00:11:19.419 "namespaces": [ 00:11:19.419 { 00:11:19.419 "nsid": 1, 00:11:19.419 "bdev_name": "Null3", 00:11:19.419 "name": "Null3", 00:11:19.419 "nguid": "758356957BAC4F0EA8586AEE4FEBFE07", 00:11:19.419 "uuid": "75835695-7bac-4f0e-a858-6aee4febfe07" 00:11:19.419 } 00:11:19.419 ] 00:11:19.419 }, 00:11:19.419 { 00:11:19.419 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:19.419 "subtype": "NVMe", 00:11:19.419 "listen_addresses": [ 00:11:19.419 { 00:11:19.419 "trtype": "TCP", 00:11:19.419 "adrfam": "IPv4", 00:11:19.419 "traddr": "10.0.0.2", 00:11:19.419 "trsvcid": "4420" 00:11:19.419 } 00:11:19.419 ], 00:11:19.419 "allow_any_host": true, 00:11:19.419 "hosts": [], 00:11:19.419 "serial_number": "SPDK00000000000004", 00:11:19.419 "model_number": "SPDK bdev Controller", 00:11:19.419 "max_namespaces": 32, 00:11:19.419 "min_cntlid": 1, 00:11:19.419 "max_cntlid": 65519, 00:11:19.419 "namespaces": [ 00:11:19.419 { 00:11:19.419 "nsid": 1, 00:11:19.419 "bdev_name": "Null4", 00:11:19.419 "name": "Null4", 00:11:19.419 "nguid": "BBA9A0A35DE043C2831468968B257D96", 00:11:19.419 "uuid": "bba9a0a3-5de0-43c2-8314-68968b257d96" 00:11:19.419 } 00:11:19.419 ] 00:11:19.419 } 00:11:19.419 ] 00:11:19.419 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.419 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:19.419 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:19.419 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:19.419 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.419 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.419 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.419 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:19.419 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.419 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.419 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.419 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:19.419 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:19.419 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.419 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.419 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.419 09:49:28 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:19.419 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.419 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.419 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.419 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:19.419 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:19.419 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.419 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.419 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.419 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:19.419 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.419 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.419 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.419 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:19.419 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:19.419 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.419 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.419 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.419 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:19.419 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.419 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.419 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.419 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:19.419 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.419 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.419 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.419 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:19.419 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:19.419 09:49:28 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.419 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.419 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.689 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:19.689 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:19.689 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:19.689 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:19.689 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:19.689 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:19.689 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:19.689 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:19.689 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:19.689 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:19.689 rmmod nvme_tcp 00:11:19.689 rmmod nvme_fabrics 00:11:19.689 rmmod nvme_keyring 00:11:19.689 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:19.689 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:19.689 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:19.689 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 4182805 ']' 00:11:19.689 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 4182805 00:11:19.689 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 4182805 ']' 00:11:19.689 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 4182805 00:11:19.689 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:11:19.689 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:19.689 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4182805 00:11:19.689 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:19.689 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:19.689 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4182805' 00:11:19.689 killing process with pid 4182805 00:11:19.689 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 4182805 00:11:19.689 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 4182805 00:11:20.023 09:49:29 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:20.024 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:20.024 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:20.024 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:20.024 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:11:20.024 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:20.024 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:11:20.024 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:20.024 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:20.024 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:20.024 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:20.024 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.928 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:21.928 00:11:21.928 real 0m10.871s 00:11:21.928 user 0m8.657s 00:11:21.928 sys 0m5.483s 00:11:21.928 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:21.928 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.928 ************************************ 00:11:21.928 END TEST nvmf_target_discovery 00:11:21.928 ************************************ 00:11:21.928 09:49:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:21.928 09:49:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:21.928 09:49:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:21.928 09:49:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:21.928 ************************************ 00:11:21.928 START TEST nvmf_referrals 00:11:21.928 ************************************ 00:11:21.928 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:22.188 * Looking for test storage... 
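Condensed, the nvmf_target_discovery pass that just finished boils down to the RPC/CLI sequence below. This is a minimal sketch reconstructed from the trace above, not the test script itself: the bdev/subsystem creation flags are assumptions (those steps ran before this excerpt; only the resulting names, serials, and the listener/referral/teardown commands appear verbatim in the log).

# Sketch only. Assumes a running nvmf_tgt reachable via SPDK's default RPC socket.
rpc=scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
for i in 1 2 3 4; do
    $rpc bdev_null_create Null$i 102400 512                  # sizes assumed, not shown above
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
        -a -s SPDK0000000000000$i -m 32                      # serial/allow_any_host/max_namespaces match the JSON dump
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430  # shows up as Discovery Log Entry 5
# 6 records expected: 1 discovery subsystem + 4 NVMe subsystems + 1 referral
nvme discover -t tcp -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 \
    --hostid=801347e8-3fd0-e911-906e-0017a4403562
$rpc nvmf_get_subsystems                                     # same view over RPC
for i in 1 2 3 4; do                                         # teardown mirrors setup
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
    $rpc bdev_null_delete Null$i
done
$rpc nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
$rpc bdev_get_bdevs | jq -r '.[].name'                       # empty once cleanup succeeded

The final jq check is what drives the empty check_bdevs= assignment seen above before nvmftestfini tears the transport module stack back down.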
00:11:22.188 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:22.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.188 --rc genhtml_branch_coverage=1 00:11:22.188 --rc genhtml_function_coverage=1 00:11:22.188 --rc genhtml_legend=1 00:11:22.188 --rc geninfo_all_blocks=1 00:11:22.188 --rc geninfo_unexecuted_blocks=1 00:11:22.188 00:11:22.188 ' 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:22.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.188 --rc genhtml_branch_coverage=1 00:11:22.188 --rc genhtml_function_coverage=1 00:11:22.188 --rc genhtml_legend=1 00:11:22.188 --rc geninfo_all_blocks=1 00:11:22.188 --rc geninfo_unexecuted_blocks=1 00:11:22.188 00:11:22.188 ' 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:22.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.188 --rc genhtml_branch_coverage=1 00:11:22.188 --rc genhtml_function_coverage=1 00:11:22.188 --rc genhtml_legend=1 00:11:22.188 --rc geninfo_all_blocks=1 00:11:22.188 --rc geninfo_unexecuted_blocks=1 00:11:22.188 00:11:22.188 ' 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:22.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.188 --rc genhtml_branch_coverage=1 00:11:22.188 --rc genhtml_function_coverage=1 00:11:22.188 --rc genhtml_legend=1 00:11:22.188 --rc geninfo_all_blocks=1 00:11:22.188 --rc geninfo_unexecuted_blocks=1 00:11:22.188 00:11:22.188 ' 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:22.188 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.189 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.189 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.189 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:22.189 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.189 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:22.189 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:22.189 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:22.189 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:22.189 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:22.189 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:22.189 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:22.189 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:22.189 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:22.189 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:22.189 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:22.189 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:22.189 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:11:22.189 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:22.189 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:22.189 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:22.189 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:22.189 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:22.189 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:22.189 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:22.189 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:22.189 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:22.189 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:22.189 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:22.189 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:22.189 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:22.189 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:22.189 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:22.189 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:22.189 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:28.758 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:28.758 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:28.758 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:28.758 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:28.758 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:28.758 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:28.758 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:28.758 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:28.758 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:28.758 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:11:28.758 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:28.758 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:28.758 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:28.759 09:49:38 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:28.759 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:28.759 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:28.759 
09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:28.759 Found net devices under 0000:af:00.0: cvl_0_0 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:28.759 Found net devices under 0000:af:00.1: cvl_0_1 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:28.759 09:49:38 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:28.759 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:29.018 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:29.018 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:29.018 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:29.018 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:29.018 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:29.018 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.378 ms 00:11:29.018 00:11:29.018 --- 10.0.0.2 ping statistics --- 00:11:29.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.018 rtt min/avg/max/mdev = 0.378/0.378/0.378/0.000 ms 00:11:29.018 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:29.018 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:29.018 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:11:29.018 00:11:29.018 --- 10.0.0.1 ping statistics --- 00:11:29.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.018 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:11:29.018 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:29.018 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:11:29.018 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:29.018 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:29.018 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:29.018 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:29.018 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:29.018 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:29.018 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:29.018 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:29.018 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:29.018 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:29.018 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:29.018 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=4187065 00:11:29.018 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 4187065 00:11:29.018 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:29.018 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 4187065 ']' 00:11:29.018 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.018 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:29.018 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
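For reference, the network plumbing the trace just walked through reduces to the commands below, in order. A sketch to be run as root; the cvl_0_0/cvl_0_1 interface names belong to this machine's two E810 ports and will differ elsewhere.

# The target-side port lives in its own namespace so initiator and target
# can share one host across a physical cable loop.
ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                        # sanity: initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # sanity: target -> initiator
# Launch the target inside the namespace (binary path per this workspace):
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

From this point on, every target-side command in the log is wrapped in ip netns exec cvl_0_0_ns_spdk, while nvme-cli discovery runs from the root namespace against 10.0.0.2.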
00:11:29.018 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:29.018 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:29.018 [2024-12-11 09:49:38.512490] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:11:29.018 [2024-12-11 09:49:38.512539] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:29.276 [2024-12-11 09:49:38.595909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:29.276 [2024-12-11 09:49:38.636698] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:29.276 [2024-12-11 09:49:38.636739] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:29.276 [2024-12-11 09:49:38.636747] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:29.276 [2024-12-11 09:49:38.636753] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:29.276 [2024-12-11 09:49:38.636758] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:29.276 [2024-12-11 09:49:38.638198] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:29.276 [2024-12-11 09:49:38.638316] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:29.276 [2024-12-11 09:49:38.638349] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.276 [2024-12-11 09:49:38.638350] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:29.276 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:29.276 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:11:29.276 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:29.276 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:29.276 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:29.276 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:29.276 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:29.276 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.276 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:29.276 [2024-12-11 09:49:38.775996] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:29.276 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.276 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:29.276 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.276 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:11:29.276 [2024-12-11 09:49:38.798351] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:29.276 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.276 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:29.276 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.276 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:29.276 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.276 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:29.276 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.276 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:29.276 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.276 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:29.276 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.276 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:29.276 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.276 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:29.276 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:29.276 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.276 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:29.276 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.533 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:29.533 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:29.533 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:29.533 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:29.533 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:29.533 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.533 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:29.533 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:29.533 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.533 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:29.533 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:29.533 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:29.533 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:29.533 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:29.533 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:29.533 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:29.533 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:29.533 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:29.533 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:29.533 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:29.533 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.533 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:29.790 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.790 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:29.790 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.790 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:29.790 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.790 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:29.790 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.790 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:29.790 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.790 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:29.790 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:29.790 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.790 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:29.790 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.790 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:29.790 09:49:39 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:29.790 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:29.790 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:29.790 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:29.790 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:29.790 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:30.047 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:30.047 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:30.047 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:30.047 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.047 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.047 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.047 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:30.047 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.047 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.047 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.047 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:30.047 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:30.047 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:30.047 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:30.047 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.047 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:30.047 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.047 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.047 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:30.047 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:30.047 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:30.047 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:11:30.047 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:30.047 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:30.047 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:30.047 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:30.304 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:30.304 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:30.304 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:30.304 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:30.304 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:30.304 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:30.304 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:30.561 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:30.561 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:30.561 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:30.561 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:30.561 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:30.561 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:30.561 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:30.561 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:30.561 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.561 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.561 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.561 09:49:40 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:30.561 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:30.561 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:30.561 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:30.561 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.561 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:30.561 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.561 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.818 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:30.818 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:30.818 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:30.818 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:30.818 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:30.818 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:30.818 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:30.818 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:30.818 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:30.818 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:30.818 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:30.818 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:30.818 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:30.818 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:30.818 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:31.075 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:31.075 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:31.075 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:31.075 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:11:31.075 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:31.075 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:31.332 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:31.332 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:31.332 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.332 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.332 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.332 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:31.332 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:31.332 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.332 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.332 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.332 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:31.332 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:31.332 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:31.332 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:31.332 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:31.332 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:31.332 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:31.594 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:31.594 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:31.594 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:31.594 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:31.594 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:31.594 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:31.594 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
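For readers reconstructing the referral checks traced above: the test repeatedly compares the target's own view (rpc nvmf_discovery_get_referrals) against what a host sees on the discovery service (nvme discover on port 8009), for both plain and subsystem-qualified referrals. A minimal standalone sketch of one such round-trip follows; the addresses and ports (10.0.0.2:8009 for discovery, 127.0.0.2:4430 for the referral) are taken from the trace, while rpc.py being on PATH and pointed at the running target is an assumption.

# Sketch, not the test script itself: add one referral, verify it from both sides, remove it.
rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
# Target-side view: traddr of each configured referral.
rpc_ips=$(rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort)
# Host-side view: discovery log entries other than the current discovery subsystem.
nvme_ips=$(nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json |
    jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort)
[[ "$rpc_ips" == "$nvme_ips" ]] && echo "referral views match: $rpc_ips"
rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430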
00:11:31.594 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:11:31.594 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:31.594 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:31.594 rmmod nvme_tcp 00:11:31.594 rmmod nvme_fabrics 00:11:31.594 rmmod nvme_keyring 00:11:31.594 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:31.594 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:31.594 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:31.594 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 4187065 ']' 00:11:31.594 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 4187065 00:11:31.594 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 4187065 ']' 00:11:31.594 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 4187065 00:11:31.594 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:11:31.594 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:31.594 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4187065 00:11:31.594 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:31.594 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:31.594 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4187065' 00:11:31.594 killing process with pid 4187065 00:11:31.594 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 4187065 00:11:31.594 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 4187065 00:11:31.854 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:31.854 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:31.854 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:31.854 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:31.854 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:11:31.854 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:31.854 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:11:31.854 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:31.854 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:31.854 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:31.854 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:31.854 09:49:41 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:33.758 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:33.758 00:11:33.758 real 0m11.820s 00:11:33.758 user 0m13.132s 00:11:33.758 sys 0m5.792s 00:11:33.758 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:33.758 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.758 ************************************ 00:11:33.758 END TEST nvmf_referrals 00:11:33.758 ************************************ 00:11:33.758 09:49:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:33.758 09:49:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:33.758 09:49:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:33.758 09:49:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:34.017 ************************************ 00:11:34.017 START TEST nvmf_connect_disconnect 00:11:34.017 ************************************ 00:11:34.017 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:34.017 * Looking for test storage... 00:11:34.017 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:34.017 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:34.017 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:11:34.017 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:34.017 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:34.017 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:34.017 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:34.017 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:34.017 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:34.017 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:34.017 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:34.017 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:34.017 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:34.017 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:34.017 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:34.017 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:34.017 09:49:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:11:34.017 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:34.017 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:34.017 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:34.017 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:34.017 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:34.017 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:34.017 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:34.017 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:34.017 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:34.017 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:34.017 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:34.017 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:34.017 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:34.017 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:34.017 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:34.017 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:34.017 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:34.017 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:34.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.017 --rc genhtml_branch_coverage=1 00:11:34.017 --rc genhtml_function_coverage=1 00:11:34.017 --rc genhtml_legend=1 00:11:34.017 --rc geninfo_all_blocks=1 00:11:34.017 --rc geninfo_unexecuted_blocks=1 00:11:34.017 00:11:34.017 ' 00:11:34.017 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:34.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.017 --rc genhtml_branch_coverage=1 00:11:34.017 --rc genhtml_function_coverage=1 00:11:34.017 --rc genhtml_legend=1 00:11:34.017 --rc geninfo_all_blocks=1 00:11:34.017 --rc geninfo_unexecuted_blocks=1 00:11:34.017 00:11:34.017 ' 00:11:34.017 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:34.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.017 --rc genhtml_branch_coverage=1 00:11:34.017 --rc genhtml_function_coverage=1 00:11:34.017 --rc genhtml_legend=1 00:11:34.017 --rc geninfo_all_blocks=1 00:11:34.017 --rc geninfo_unexecuted_blocks=1 00:11:34.017 00:11:34.017 ' 00:11:34.017 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:34.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.017 --rc genhtml_branch_coverage=1 00:11:34.017 --rc genhtml_function_coverage=1 00:11:34.017 --rc genhtml_legend=1 00:11:34.017 --rc geninfo_all_blocks=1 00:11:34.017 --rc geninfo_unexecuted_blocks=1 00:11:34.017 00:11:34.017 ' 00:11:34.017 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:34.017 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:34.017 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:34.018 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:34.018 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:34.018 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:34.018 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:34.018 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:34.018 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:34.018 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:34.018 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:34.018 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:34.018 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:11:34.018 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:11:34.018 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:34.018 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:34.018 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:34.018 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:34.018 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:34.018 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:34.018 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:34.018 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:34.018 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:34.018 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.018 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.018 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.018 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:34.018 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.018 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:34.018 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:34.018 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:34.018 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:34.018 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:34.018 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:34.018 09:49:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:34.018 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:34.018 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:34.018 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:34.018 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:34.018 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:34.018 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:34.018 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:34.018 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:34.018 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:34.018 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:34.018 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:34.018 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:34.018 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:34.018 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:34.018 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:34.018 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:34.018 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:34.018 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:34.018 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:40.583 
09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:40.583 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:40.583 
09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:40.583 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:40.583 Found net devices under 0000:af:00.0: cvl_0_0 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
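The device scan above first classifies PCI functions by vendor/device ID (0x8086:0x159b lands in the e810 array) and then resolves each matching function to its kernel net device through sysfs, which is where the "Found net devices under 0000:af:00.0: cvl_0_0" lines come from. A simplified sketch of that resolution step, using the bus addresses from the trace:

# Sketch of the pci_net_devs lookup: map each E810 function to its net device name.
for pci in 0000:af:00.0 0000:af:00.1; do
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        [[ -e "$dev" ]] || continue          # skip if the glob did not match anything
        echo "Found net devices under $pci: ${dev##*/}"
    done
done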
00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:40.583 Found net devices under 0000:af:00.1: cvl_0_1 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:40.583 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:40.584 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:40.584 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:40.584 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:40.584 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:40.584 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:40.584 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:40.584 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:40.584 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:40.584 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:40.584 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:40.584 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:40.584 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:40.842 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:40.842 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:11:40.842 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:40.842 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:40.842 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:40.842 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:40.842 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:40.842 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:40.842 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:40.842 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.329 ms 00:11:40.842 00:11:40.842 --- 10.0.0.2 ping statistics --- 00:11:40.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:40.842 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:11:40.842 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:40.842 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:40.842 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:11:40.842 00:11:40.842 --- 10.0.0.1 ping statistics --- 00:11:40.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:40.842 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:11:40.842 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:40.842 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:11:40.842 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:40.842 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:40.842 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:40.843 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:40.843 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:40.843 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:40.843 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:40.843 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:40.843 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:40.843 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:40.843 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:40.843 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=4191519 00:11:40.843 09:49:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:40.843 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 4191519 00:11:40.843 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 4191519 ']' 00:11:40.843 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:40.843 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:40.843 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:40.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:40.843 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:40.843 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:41.100 [2024-12-11 09:49:50.441205] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:11:41.100 [2024-12-11 09:49:50.441274] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:41.100 [2024-12-11 09:49:50.526661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:41.100 [2024-12-11 09:49:50.566654] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:41.100 [2024-12-11 09:49:50.566691] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:41.100 [2024-12-11 09:49:50.566698] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:41.100 [2024-12-11 09:49:50.566704] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:41.100 [2024-12-11 09:49:50.566709] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
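Before nvmf_tgt comes up, the harness splits the two E810 ports across network namespaces so a single machine can play both target and initiator over a real link. A condensed sketch of that plumbing, with interface names and addresses exactly as in the trace (run as root; the SPDK_NVMF comment tagging that ipts() adds to the iptables rule is omitted here):

# Sketch: target port (cvl_0_0) goes into a namespace, peer port (cvl_0_1) stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port toward the initiator interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Sanity pings in both directions, as in the trace.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target itself is then launched inside the namespace, which matches the NVMF_TARGET_NS_CMD wrapping visible in the trace: ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF.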
00:11:41.100 [2024-12-11 09:49:50.568104] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:41.100 [2024-12-11 09:49:50.568134] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:41.100 [2024-12-11 09:49:50.568253] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.100 [2024-12-11 09:49:50.568253] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:41.100 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:41.100 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:11:41.100 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:41.100 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:41.100 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:41.358 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:41.358 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:41.358 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.358 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:41.358 [2024-12-11 09:49:50.717821] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:41.358 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.358 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:41.358 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.358 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:41.358 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.358 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:41.358 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:41.358 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.358 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:41.358 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.358 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:41.358 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.358 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:41.358 09:49:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.358 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:41.358 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.358 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:41.358 [2024-12-11 09:49:50.789228] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:41.358 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.358 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:41.358 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:41.358 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:44.634 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.909 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.186 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.469 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.764 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.764 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:57.764 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:57.764 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:57.764 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:11:57.765 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:57.765 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:11:57.765 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:57.765 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:57.765 rmmod nvme_tcp 00:11:57.765 rmmod nvme_fabrics 00:11:57.765 rmmod nvme_keyring 00:11:57.765 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:57.765 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:11:57.765 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:11:57.765 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 4191519 ']' 00:11:57.765 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 4191519 00:11:57.765 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 4191519 ']' 00:11:57.765 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 4191519 00:11:57.765 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
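The five "disconnected 1 controller(s)" lines above are the whole point of connect_disconnect.sh: provision one malloc-backed subsystem, then attach and detach a host repeatedly. A hedged reconstruction of the sequence (RPC names, sizes, NQN, serial, listener address, and iteration count are straight from the trace; driving the cycles with nvme-cli as below is an assumption about the loop body, and hostnqn/device-readiness handling is omitted):

# Provision the target via rpc.py against the running nvmf_tgt.
rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
rpc.py bdev_malloc_create 64 512          # creates bdev "Malloc0"
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Drive 5 connect/disconnect cycles from the initiator side.
for i in $(seq 1 5); do
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints "... disconnected 1 controller(s)"
done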
00:11:57.765 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:57.765 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4191519 00:11:57.765 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:57.765 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:57.765 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4191519' 00:11:57.765 killing process with pid 4191519 00:11:57.765 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 4191519 00:11:57.765 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 4191519 00:11:58.024 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:58.024 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:58.024 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:58.024 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:11:58.024 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:11:58.024 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:58.024 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:11:58.024 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:58.024 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:58.024 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:58.024 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:58.024 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:00.560 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:00.560 00:12:00.560 real 0m26.229s 00:12:00.560 user 1m9.222s 00:12:00.560 sys 0m6.514s 00:12:00.560 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:00.560 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:00.560 ************************************ 00:12:00.560 END TEST nvmf_connect_disconnect 00:12:00.560 ************************************ 00:12:00.560 09:50:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:00.560 09:50:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:00.560 09:50:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:00.560 09:50:09 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:00.560 ************************************ 00:12:00.560 START TEST nvmf_multitarget 00:12:00.560 ************************************ 00:12:00.560 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:00.560 * Looking for test storage... 00:12:00.560 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:00.560 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:00.560 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:12:00.560 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:00.560 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:00.560 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:00.560 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:00.560 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:00.560 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:00.560 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:00.560 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:00.560 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:00.560 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:00.560 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:00.560 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:00.560 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:00.560 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:00.560 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:12:00.560 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:00.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.561 --rc genhtml_branch_coverage=1 00:12:00.561 --rc genhtml_function_coverage=1 00:12:00.561 --rc genhtml_legend=1 00:12:00.561 --rc geninfo_all_blocks=1 00:12:00.561 --rc geninfo_unexecuted_blocks=1 00:12:00.561 00:12:00.561 ' 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:00.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.561 --rc genhtml_branch_coverage=1 00:12:00.561 --rc genhtml_function_coverage=1 00:12:00.561 --rc genhtml_legend=1 00:12:00.561 --rc geninfo_all_blocks=1 00:12:00.561 --rc geninfo_unexecuted_blocks=1 00:12:00.561 00:12:00.561 ' 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:00.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.561 --rc genhtml_branch_coverage=1 00:12:00.561 --rc genhtml_function_coverage=1 00:12:00.561 --rc genhtml_legend=1 00:12:00.561 --rc geninfo_all_blocks=1 00:12:00.561 --rc geninfo_unexecuted_blocks=1 00:12:00.561 00:12:00.561 ' 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:00.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.561 --rc genhtml_branch_coverage=1 00:12:00.561 --rc genhtml_function_coverage=1 00:12:00.561 --rc genhtml_legend=1 00:12:00.561 --rc geninfo_all_blocks=1 00:12:00.561 --rc geninfo_unexecuted_blocks=1 00:12:00.561 00:12:00.561 ' 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:00.561 09:50:09 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:00.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:00.561 09:50:09 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:12:00.561 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:07.132 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:07.132 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:07.132 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:07.132 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:07.132 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:07.132 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:07.132 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:07.132 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:07.132 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:07.132 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:07.132 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:07.132 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:07.132 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:07.132 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:07.132 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:07.132 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:07.132 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:07.132 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
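[editor's note] The array bookkeeping above and the sysfs walk that follows are gather_supported_nvmf_pci_devs from nvmf/common.sh: it builds per-family device lists (e810, x722, mlx) keyed by PCI vendor:device ID, then looks under each matching device's net/ directory for the kernel interface name. 0x8086:0x159b, matched twice below, is the Intel E810 port driven by ice. A condensed standalone sketch of the same walk (hypothetical, not the exact SPDK function):

    intel=0x8086
    dev_id=0x159b
    for pci in /sys/bus/pci/devices/*; do
        [[ $(<"$pci/vendor") == "$intel" && $(<"$pci/device") == "$dev_id" ]] || continue
        for net in "$pci"/net/*; do   # one entry per kernel netdev bound to this port
            [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
        done
    done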
00:12:07.132 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:07.132 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:07.132 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:07.132 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:07.132 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:07.132 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:07.132 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:07.132 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:07.132 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:07.132 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:07.132 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:07.132 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:07.132 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:07.132 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:07.132 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:07.132 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:07.132 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:07.132 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:07.132 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:07.132 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:07.132 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.132 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.132 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:07.132 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:07.133 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:07.133 Found net devices under 0000:af:00.0: cvl_0_0 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:07.133 Found net devices under 0000:af:00.1: cvl_0_1 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:07.133 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:07.133 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.352 ms 00:12:07.133 00:12:07.133 --- 10.0.0.2 ping statistics --- 00:12:07.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.133 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:07.133 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:07.133 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:12:07.133 00:12:07.133 --- 10.0.0.1 ping statistics --- 00:12:07.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.133 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=4703 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 4703 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 4703 ']' 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:07.133 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:07.392 [2024-12-11 09:50:16.743393] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
00:12:07.392 [2024-12-11 09:50:16.743447] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:07.392 [2024-12-11 09:50:16.827656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:07.392 [2024-12-11 09:50:16.868250] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:07.392 [2024-12-11 09:50:16.868287] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:07.392 [2024-12-11 09:50:16.868294] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:07.392 [2024-12-11 09:50:16.868300] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:07.392 [2024-12-11 09:50:16.868306] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:07.392 [2024-12-11 09:50:16.869777] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:07.392 [2024-12-11 09:50:16.869885] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:12:07.392 [2024-12-11 09:50:16.869990] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.392 [2024-12-11 09:50:16.869991] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:12:08.327 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:08.327 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:12:08.327 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:08.327 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:08.327 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:08.327 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:08.327 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:08.327 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:08.327 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:08.327 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:08.327 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:08.327 "nvmf_tgt_1" 00:12:08.327 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:08.585 "nvmf_tgt_2" 00:12:08.585 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
00:12:08.585 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:08.585 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:08.585 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:08.585 true 00:12:08.585 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:08.843 true 00:12:08.843 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:08.843 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:08.843 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:08.843 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:08.843 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:08.843 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:08.843 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:08.843 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:08.843 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:08.843 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:08.843 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:08.843 rmmod nvme_tcp 00:12:08.843 rmmod nvme_fabrics 00:12:09.102 rmmod nvme_keyring 00:12:09.102 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:09.102 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:09.102 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:09.102 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 4703 ']' 00:12:09.102 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 4703 00:12:09.102 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 4703 ']' 00:12:09.102 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 4703 00:12:09.102 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:12:09.102 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:09.102 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4703 00:12:09.102 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:09.102 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:09.102 09:50:18 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4703' 00:12:09.102 killing process with pid 4703 00:12:09.102 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 4703 00:12:09.102 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 4703 00:12:09.102 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:09.102 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:09.102 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:09.102 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:12:09.102 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:12:09.102 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:12:09.102 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:09.361 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:09.361 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:09.361 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:09.361 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:09.361 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:11.266 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:11.266 00:12:11.266 real 0m11.089s 00:12:11.266 user 0m10.111s 00:12:11.266 sys 0m5.580s 00:12:11.266 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:11.266 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:11.266 ************************************ 00:12:11.266 END TEST nvmf_multitarget 00:12:11.266 ************************************ 00:12:11.266 09:50:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:11.266 09:50:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:11.266 09:50:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:11.266 09:50:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:11.266 ************************************ 00:12:11.266 START TEST nvmf_rpc 00:12:11.266 ************************************ 00:12:11.266 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:11.526 * Looking for test storage... 
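[editor's note] Before the nvmf_rpc output continues, a recap of the nvmf_multitarget body that just ended: the pass/fail logic is jq arithmetic against nvmf_get_targets -- 1 target at startup, 3 after nvmf_tgt_1 and nvmf_tgt_2 are created, 1 again after both are deleted. A sketch of those checks, with the RPC names and flags copied from the log (rpc_py is the multitarget_rpc.py path the test exports):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ] || exit 1   # default target + 2 new
    $rpc_py nvmf_delete_target -n nvmf_tgt_1
    $rpc_py nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ] || exit 1   # back to the default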
00:12:11.526 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:11.526 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:11.526 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:12:11.526 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:11.526 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:11.526 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:11.526 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:11.526 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:11.526 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:11.526 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:11.526 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:11.526 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:11.526 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:11.526 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:11.526 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:11.526 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:11.526 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:11.526 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:11.526 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:11.526 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:11.526 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:11.526 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:11.526 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:11.526 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:11.526 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:11.526 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:11.526 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:11.526 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:11.526 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:11.526 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:11.526 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:11.526 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:11.526 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:11.526 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:11.526 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:11.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.526 --rc genhtml_branch_coverage=1 00:12:11.526 --rc genhtml_function_coverage=1 00:12:11.526 --rc genhtml_legend=1 00:12:11.526 --rc geninfo_all_blocks=1 00:12:11.526 --rc geninfo_unexecuted_blocks=1 00:12:11.526 00:12:11.526 ' 00:12:11.526 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:11.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.526 --rc genhtml_branch_coverage=1 00:12:11.526 --rc genhtml_function_coverage=1 00:12:11.526 --rc genhtml_legend=1 00:12:11.526 --rc geninfo_all_blocks=1 00:12:11.526 --rc geninfo_unexecuted_blocks=1 00:12:11.526 00:12:11.526 ' 00:12:11.526 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:11.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.526 --rc genhtml_branch_coverage=1 00:12:11.526 --rc genhtml_function_coverage=1 00:12:11.526 --rc genhtml_legend=1 00:12:11.526 --rc geninfo_all_blocks=1 00:12:11.526 --rc geninfo_unexecuted_blocks=1 00:12:11.526 00:12:11.526 ' 00:12:11.526 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:11.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.526 --rc genhtml_branch_coverage=1 00:12:11.526 --rc genhtml_function_coverage=1 00:12:11.526 --rc genhtml_legend=1 00:12:11.526 --rc geninfo_all_blocks=1 00:12:11.526 --rc geninfo_unexecuted_blocks=1 00:12:11.526 00:12:11.526 ' 00:12:11.526 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:11.526 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:11.526 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
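[editor's note] The scripts/common.sh trace above ("lt 1.15 2") is the harness probing whether the installed lcov predates 2.x so it can export compatible coverage flags: both version strings are split on "." and "-" and compared numerically field by field, the shorter list padded with zeros. A distilled sketch of that comparator, assuming purely numeric fields (the real helper also validates each field with its decimal check):

    lt() {   # usage: lt 1.15 2  ->  status 0 iff $1 < $2
        local -a ver1 ver2
        local v len
        IFS=.- read -ra ver1 <<< "$1"
        IFS=.- read -ra ver2 <<< "$2"
        (( len = ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < len; v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions are not "less than"
    }

Here lt 1.15 2 holds, which is why the 1.x-style --rc lcov_branch_coverage switches are exported into LCOV_OPTS above.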
00:12:11.526 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:11.526 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:11.526 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:11.526 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:11.526 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:11.526 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:11.526 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:11.526 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:11.526 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:11.526 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:12:11.526 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:12:11.526 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:11.526 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:11.526 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:11.526 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:11.526 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:11.526 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:11.527 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:11.527 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:11.527 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:11.527 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.527 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.527 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.527 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:11.527 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.527 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:12:11.527 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:11.527 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:11.527 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:11.527 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:11.527 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:11.527 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:11.527 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:11.527 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:11.527 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:11.527 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:11.527 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:11.527 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:11.527 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:11.527 09:50:21 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:11.527 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:11.527 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:11.527 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:11.527 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:11.527 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:11.527 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:11.527 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:11.527 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:11.527 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:12:11.527 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.095 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:18.095 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:12:18.095 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:18.095 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:18.095 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:18.095 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:18.095 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:18.095 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:12:18.095 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:18.095 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:12:18.095 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:12:18.095 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:12:18.095 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:12:18.095 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:12:18.095 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:12:18.095 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:18.095 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:18.095 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:18.095 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:18.096 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:18.096 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:18.096 Found net devices under 0000:af:00.0: cvl_0_0 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:18.096 Found net devices under 0000:af:00.1: cvl_0_1 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:18.096 09:50:27 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:18.096 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:18.353 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:18.353 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:18.353 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:18.353 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:18.353 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:18.353 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.320 ms 00:12:18.353 00:12:18.353 --- 10.0.0.2 ping statistics --- 00:12:18.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:18.354 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:12:18.354 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:18.354 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:18.354 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:12:18.354 00:12:18.354 --- 10.0.0.1 ping statistics --- 00:12:18.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:18.354 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:12:18.354 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:18.354 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:12:18.354 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:18.354 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:18.354 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:18.354 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:18.354 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:18.354 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:18.354 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:18.354 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:18.354 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:18.354 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:18.354 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.354 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=9010 00:12:18.354 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:18.354 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 9010 00:12:18.354 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 9010 ']' 00:12:18.354 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.354 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:18.354 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.354 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:18.354 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.354 [2024-12-11 09:50:27.858283] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
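The block above is nvmf_tcp_init from nvmf/common.sh: it isolates one E810 port in a network namespace so that target and initiator can exchange real TCP traffic on a single machine, verifies reachability with ping in both directions, and only then starts nvmf_tgt inside the namespace. A condensed sketch of that wiring, using the interface names, addresses, and flags recorded in this run (this is a reconstruction from the trace, not the script itself):

  # Namespace wiring as traced above; cvl_0_0/cvl_0_1 and 10.0.0.x are this run's values.
  NS=cvl_0_0_ns_spdk
  sudo ip netns add "$NS"
  sudo ip link set cvl_0_0 netns "$NS"             # target port moves into the namespace
  sudo ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator side stays in the root namespace
  sudo ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  sudo ip link set cvl_0_1 up
  sudo ip netns exec "$NS" ip link set cvl_0_0 up
  sudo ip netns exec "$NS" ip link set lo up
  sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on 4420
  ping -c 1 10.0.0.2                               # initiator -> target check
  sudo ip netns exec "$NS" ping -c 1 10.0.0.1      # target -> initiator check
  sudo ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF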
00:12:18.354 [2024-12-11 09:50:27.858327] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:18.611 [2024-12-11 09:50:27.941935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:18.611 [2024-12-11 09:50:27.986124] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:18.611 [2024-12-11 09:50:27.986157] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:18.611 [2024-12-11 09:50:27.986164] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:18.611 [2024-12-11 09:50:27.986170] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:18.611 [2024-12-11 09:50:27.986175] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:18.611 [2024-12-11 09:50:27.987559] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:18.611 [2024-12-11 09:50:27.987588] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:12:18.611 [2024-12-11 09:50:27.987613] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.611 [2024-12-11 09:50:27.987614] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:12:19.176 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:19.176 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:19.176 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:19.176 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:19.176 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.176 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:19.176 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:19.176 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.176 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.176 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.433 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:19.433 "tick_rate": 2100000000, 00:12:19.433 "poll_groups": [ 00:12:19.433 { 00:12:19.433 "name": "nvmf_tgt_poll_group_000", 00:12:19.433 "admin_qpairs": 0, 00:12:19.433 "io_qpairs": 0, 00:12:19.433 "current_admin_qpairs": 0, 00:12:19.433 "current_io_qpairs": 0, 00:12:19.433 "pending_bdev_io": 0, 00:12:19.433 "completed_nvme_io": 0, 00:12:19.433 "transports": [] 00:12:19.433 }, 00:12:19.433 { 00:12:19.433 "name": "nvmf_tgt_poll_group_001", 00:12:19.433 "admin_qpairs": 0, 00:12:19.433 "io_qpairs": 0, 00:12:19.433 "current_admin_qpairs": 0, 00:12:19.433 "current_io_qpairs": 0, 00:12:19.433 "pending_bdev_io": 0, 00:12:19.433 "completed_nvme_io": 0, 00:12:19.433 "transports": [] 00:12:19.433 }, 00:12:19.433 { 00:12:19.433 "name": "nvmf_tgt_poll_group_002", 00:12:19.433 "admin_qpairs": 0, 00:12:19.433 "io_qpairs": 0, 00:12:19.433 
"current_admin_qpairs": 0, 00:12:19.433 "current_io_qpairs": 0, 00:12:19.433 "pending_bdev_io": 0, 00:12:19.433 "completed_nvme_io": 0, 00:12:19.433 "transports": [] 00:12:19.433 }, 00:12:19.433 { 00:12:19.433 "name": "nvmf_tgt_poll_group_003", 00:12:19.433 "admin_qpairs": 0, 00:12:19.433 "io_qpairs": 0, 00:12:19.433 "current_admin_qpairs": 0, 00:12:19.433 "current_io_qpairs": 0, 00:12:19.433 "pending_bdev_io": 0, 00:12:19.433 "completed_nvme_io": 0, 00:12:19.433 "transports": [] 00:12:19.433 } 00:12:19.433 ] 00:12:19.433 }' 00:12:19.433 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:19.433 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:19.433 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:19.433 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:19.433 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:19.433 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:19.433 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:19.433 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:19.433 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.433 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.433 [2024-12-11 09:50:28.839528] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:19.433 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.433 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:19.433 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.433 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.433 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.433 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:19.433 "tick_rate": 2100000000, 00:12:19.433 "poll_groups": [ 00:12:19.433 { 00:12:19.433 "name": "nvmf_tgt_poll_group_000", 00:12:19.433 "admin_qpairs": 0, 00:12:19.433 "io_qpairs": 0, 00:12:19.433 "current_admin_qpairs": 0, 00:12:19.433 "current_io_qpairs": 0, 00:12:19.433 "pending_bdev_io": 0, 00:12:19.433 "completed_nvme_io": 0, 00:12:19.433 "transports": [ 00:12:19.433 { 00:12:19.433 "trtype": "TCP" 00:12:19.433 } 00:12:19.433 ] 00:12:19.433 }, 00:12:19.433 { 00:12:19.433 "name": "nvmf_tgt_poll_group_001", 00:12:19.433 "admin_qpairs": 0, 00:12:19.433 "io_qpairs": 0, 00:12:19.433 "current_admin_qpairs": 0, 00:12:19.433 "current_io_qpairs": 0, 00:12:19.433 "pending_bdev_io": 0, 00:12:19.433 "completed_nvme_io": 0, 00:12:19.433 "transports": [ 00:12:19.433 { 00:12:19.433 "trtype": "TCP" 00:12:19.433 } 00:12:19.433 ] 00:12:19.433 }, 00:12:19.433 { 00:12:19.433 "name": "nvmf_tgt_poll_group_002", 00:12:19.433 "admin_qpairs": 0, 00:12:19.433 "io_qpairs": 0, 00:12:19.433 "current_admin_qpairs": 0, 00:12:19.433 "current_io_qpairs": 0, 00:12:19.433 "pending_bdev_io": 0, 00:12:19.433 "completed_nvme_io": 0, 00:12:19.433 "transports": [ 00:12:19.433 { 00:12:19.433 "trtype": "TCP" 
00:12:19.433 } 00:12:19.433 ] 00:12:19.433 }, 00:12:19.433 { 00:12:19.433 "name": "nvmf_tgt_poll_group_003", 00:12:19.433 "admin_qpairs": 0, 00:12:19.433 "io_qpairs": 0, 00:12:19.433 "current_admin_qpairs": 0, 00:12:19.433 "current_io_qpairs": 0, 00:12:19.434 "pending_bdev_io": 0, 00:12:19.434 "completed_nvme_io": 0, 00:12:19.434 "transports": [ 00:12:19.434 { 00:12:19.434 "trtype": "TCP" 00:12:19.434 } 00:12:19.434 ] 00:12:19.434 } 00:12:19.434 ] 00:12:19.434 }' 00:12:19.434 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:19.434 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:19.434 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:19.434 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:19.434 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:19.434 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:19.434 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:19.434 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:19.434 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:19.434 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:19.434 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:19.434 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:19.434 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:19.434 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:19.434 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.434 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.434 Malloc1 00:12:19.434 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.434 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:19.434 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.434 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.434 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.434 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:19.434 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.434 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.691 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.691 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:19.691 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.691 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.691 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.691 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:19.691 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.691 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.691 [2024-12-11 09:50:29.019968] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:19.691 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.691 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:19.691 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:19.691 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:19.691 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:19.691 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:19.691 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:19.691 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:19.691 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:19.691 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:19.691 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:19.691 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:19.691 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:19.691 [2024-12-11 09:50:29.058596] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562' 00:12:19.691 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:19.691 could not add new controller: failed to write to nvme-fabrics device 00:12:19.691 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:19.691 09:50:29 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:19.691 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:19.691 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:19.691 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:12:19.692 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.692 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.692 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.692 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:21.061 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:21.061 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:21.061 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:21.061 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:21.061 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:22.970 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:22.970 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:22.970 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:22.970 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:22.970 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:22.970 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:22.970 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:22.970 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.970 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:22.970 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:22.970 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:22.970 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:22.970 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:22.970 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:22.970 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:22.970 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:12:22.970 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.970 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.970 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.970 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:22.970 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:22.970 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:22.970 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:22.970 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:22.970 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:22.970 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:22.970 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:22.971 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:22.971 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:22.971 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:22.971 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:22.971 [2024-12-11 09:50:32.402337] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562' 00:12:22.971 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:22.971 could not add new controller: failed to write to nvme-fabrics device 00:12:22.971 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:22.971 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:22.971 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:22.971 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:22.971 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:22.971 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.971 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.971 
09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.971 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:23.968 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:23.968 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:23.968 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:23.968 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:23.968 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:26.493 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:26.493 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:26.493 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:26.494 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:26.494 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:26.494 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:26.494 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:26.494 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.494 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:26.494 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:26.494 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:26.494 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:26.494 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:26.494 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:26.494 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:26.494 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:26.494 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.494 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.494 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.494 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:26.494 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:26.494 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:26.494 
09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.494 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.494 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.494 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:26.494 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.494 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.494 [2024-12-11 09:50:35.693210] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:26.494 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.494 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:26.494 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.494 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.494 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.494 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:26.494 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.494 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.494 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.494 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:27.426 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:27.426 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:27.426 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:27.426 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:27.426 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:29.322 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:29.322 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:29.323 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:29.323 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:29.323 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:29.323 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:29.323 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:29.580 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.580 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:29.580 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:29.580 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:29.580 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:29.580 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:29.580 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:29.580 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:29.580 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:29.580 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.580 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.580 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.580 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:29.580 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.580 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.580 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.580 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:29.580 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:29.580 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.580 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.580 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.580 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:29.580 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.580 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.580 [2024-12-11 09:50:38.958315] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:29.580 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.580 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:29.580 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.580 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.580 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.581 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:29.581 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.581 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.581 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.581 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:30.513 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:30.513 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:30.513 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:30.513 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:30.513 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:33.039 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:33.039 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:33.039 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:33.039 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:33.039 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:33.039 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:33.039 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:33.039 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.039 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:33.039 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:33.039 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:33.039 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:33.039 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:33.039 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:33.039 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:33.039 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:33.039 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.039 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.039 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.039 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:33.039 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.039 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.039 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.039 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:33.039 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:33.039 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.039 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.039 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.039 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:33.039 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.039 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.039 [2024-12-11 09:50:42.247143] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:33.039 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.039 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:33.039 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.039 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.039 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.039 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:33.039 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.039 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.039 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.039 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:33.971 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:33.971 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:33.971 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:33.971 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:33.971 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:35.867 
09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:35.867 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:35.867 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:35.867 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:35.867 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:35.867 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:35.867 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:36.124 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.124 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:36.124 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:36.124 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:36.124 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:36.124 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:36.124 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:36.124 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:36.124 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:36.124 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.124 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.124 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.124 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:36.124 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.124 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.124 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.124 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:36.124 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:36.124 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.124 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.125 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.125 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:36.125 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
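At this point the trace has completed several passes of rpc.sh's create/connect/teardown loop; it also showed the negative host-ACL check earlier, where nvme connect fails with "does not allow host" until the host NQN is added via nvmf_subsystem_add_host or allow_any_host is enabled. The per-iteration sequence, reconstructed from the rpc_cmd and nvme calls logged above (the scripts/rpc.py path is an assumption; the loop bound comes from the traced "seq 1 5", and --hostid is omitted here although the trace passes the same UUID for both):

  RPC=scripts/rpc.py    # assumed path; rpc_cmd in the trace drives these same RPCs
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
  for i in $(seq 1 5); do
      $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
      $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
      $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
      $RPC nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
      nvme connect --hostnqn="$HOSTNQN" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
      # ... serial shows up in lsblk, then the iteration is torn down ...
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1
      $RPC nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
      $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done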
00:12:36.125 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.125 [2024-12-11 09:50:45.609794] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:36.125 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.125 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:36.125 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.125 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.125 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.125 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:36.125 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.125 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.125 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.125 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:37.496 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:37.496 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:37.496 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:37.496 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:37.496 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:39.393 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:39.393 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:39.393 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:39.393 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:39.393 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:39.393 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:39.393 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:39.393 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.393 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:39.393 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:39.393 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:39.393 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
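Each connect above is gated by waitforserial and each disconnect by waitforserial_disconnect; the trace shows both helpers polling lsblk for the SPDKISFASTANDAWESOME serial, with a 2-second sleep and a 15-try cap on the connect side. An approximation of those helpers, condensed from the autotest_common.sh lines traced here (the real functions carry extra bookkeeping that this sketch drops):

  waitforserial() {                           # succeed once the serial is visible
      local serial=$1 i=0 n
      sleep 2                                 # matches the traced 'sleep 2' settle time
      while (( i++ <= 15 )); do
          n=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
          (( n >= 1 )) && return 0            # this run expects exactly one device
          sleep 2
      done
      return 1
  }
  waitforserial_disconnect() {                # succeed once the serial is gone
      local serial=$1 i=0
      while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
          (( i++ > 15 )) && return 1
          sleep 1
      done
      return 0
  }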
00:12:39.393 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:39.393 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:39.393 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:39.393 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:39.393 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.393 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.393 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.393 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:39.393 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.393 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.393 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.393 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:39.393 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:39.393 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.393 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.393 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.393 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:39.393 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.393 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.393 [2024-12-11 09:50:48.915146] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:39.393 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.393 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:39.393 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.393 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.393 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.393 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:39.393 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.393 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.393 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.393 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:40.764 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:40.764 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:40.764 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:40.764 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:40.764 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:42.660 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:42.660 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:42.660 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:42.660 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:42.660 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:42.660 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:42.660 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:42.918 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:42.918 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:42.918 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:42.918 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:42.918 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:42.918 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:42.918 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:42.918 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:42.918 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:42.918 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.918 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.918 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.918 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:42.918 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.918 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.918 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.918 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:42.918 
09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:42.918 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:42.918 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.918 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.918 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.918 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.919 [2024-12-11 09:50:52.332898] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.919 [2024-12-11 09:50:52.380993] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.919 
09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.919 [2024-12-11 09:50:52.429136] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.919 [2024-12-11 09:50:52.477317] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.919 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.177 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.177 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.177 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.177 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.177 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.177 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:43.177 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.177 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.177 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.177 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:43.177 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:43.177 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.177 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.177 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.177 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:43.177 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.177 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.177 [2024-12-11 09:50:52.525512] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:43.177 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.177 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:43.177 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.177 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.177 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.177 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:43.177 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.177 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.177 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.177 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.177 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.177 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.177 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.177 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:43.177 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.177 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.177 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.177 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:43.177 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.177 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.177 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.177 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:43.177 "tick_rate": 2100000000, 00:12:43.177 "poll_groups": [ 00:12:43.177 { 00:12:43.177 "name": "nvmf_tgt_poll_group_000", 00:12:43.177 "admin_qpairs": 2, 00:12:43.177 "io_qpairs": 168, 00:12:43.177 "current_admin_qpairs": 0, 00:12:43.177 "current_io_qpairs": 0, 00:12:43.177 "pending_bdev_io": 0, 00:12:43.177 "completed_nvme_io": 306, 00:12:43.177 "transports": [ 00:12:43.177 { 00:12:43.177 "trtype": "TCP" 00:12:43.177 } 00:12:43.177 ] 00:12:43.177 }, 00:12:43.177 { 00:12:43.177 "name": "nvmf_tgt_poll_group_001", 00:12:43.177 "admin_qpairs": 2, 00:12:43.177 "io_qpairs": 168, 00:12:43.177 "current_admin_qpairs": 0, 00:12:43.177 "current_io_qpairs": 0, 00:12:43.177 "pending_bdev_io": 0, 00:12:43.177 "completed_nvme_io": 262, 00:12:43.177 "transports": [ 00:12:43.177 { 00:12:43.177 "trtype": "TCP" 00:12:43.177 } 00:12:43.177 ] 00:12:43.177 }, 00:12:43.177 { 00:12:43.177 "name": "nvmf_tgt_poll_group_002", 00:12:43.177 "admin_qpairs": 1, 00:12:43.177 "io_qpairs": 168, 00:12:43.177 "current_admin_qpairs": 0, 00:12:43.177 "current_io_qpairs": 0, 00:12:43.177 "pending_bdev_io": 0, 00:12:43.177 "completed_nvme_io": 218, 00:12:43.177 "transports": [ 00:12:43.177 { 00:12:43.177 "trtype": "TCP" 00:12:43.177 } 00:12:43.177 ] 00:12:43.177 }, 00:12:43.177 { 00:12:43.177 "name": "nvmf_tgt_poll_group_003", 00:12:43.177 "admin_qpairs": 2, 00:12:43.177 "io_qpairs": 168, 00:12:43.177 "current_admin_qpairs": 0, 00:12:43.177 "current_io_qpairs": 0, 00:12:43.177 "pending_bdev_io": 0, 00:12:43.177 "completed_nvme_io": 236, 00:12:43.177 "transports": [ 00:12:43.177 { 00:12:43.177 "trtype": "TCP" 00:12:43.177 } 00:12:43.177 ] 00:12:43.177 } 00:12:43.177 ] 00:12:43.177 }' 00:12:43.177 09:50:52 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:43.177 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:43.177 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:43.177 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:43.177 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:43.177 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:43.178 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:43.178 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:43.178 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:43.178 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:12:43.178 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:43.178 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:43.178 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:43.178 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:43.178 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:12:43.178 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:43.178 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:12:43.178 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:43.178 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:43.178 rmmod nvme_tcp 00:12:43.178 rmmod nvme_fabrics 00:12:43.178 rmmod nvme_keyring 00:12:43.178 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:43.178 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:12:43.178 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:12:43.178 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 9010 ']' 00:12:43.178 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 9010 00:12:43.178 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 9010 ']' 00:12:43.178 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 9010 00:12:43.178 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:12:43.178 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:43.178 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 9010 00:12:43.436 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:43.436 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:43.436 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 9010' 
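The jsum helper expanded just above is worth seeing whole: it applies a jq filter to the nvmf_get_stats JSON and sums the resulting column with awk. A sketch of the helper (target/rpc.sh@19-20); in the test it reads a captured $stats variable, shown here querying the RPC directly:

    jsum() {
        local filter=$1
        rpc.py nvmf_get_stats | jq "$filter" | awk '{s+=$1} END {print s}'
    }

    # As exercised above: 4 poll groups with admin_qpairs 2+2+1+2 = 7 and
    # io_qpairs 4*168 = 672, satisfying the (( 7 > 0 )) and (( 672 > 0 )) checks.
    jsum '.poll_groups[].admin_qpairs'
    jsum '.poll_groups[].io_qpairs'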
00:12:43.436 killing process with pid 9010 00:12:43.436 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 9010 00:12:43.436 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 9010 00:12:43.436 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:43.436 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:43.436 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:43.436 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:12:43.436 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:12:43.436 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:43.436 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:12:43.436 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:43.436 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:43.436 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:43.436 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:43.436 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.972 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:45.972 00:12:45.972 real 0m34.216s 00:12:45.972 user 1m41.703s 00:12:45.972 sys 0m7.070s 00:12:45.972 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:45.972 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.972 ************************************ 00:12:45.972 END TEST nvmf_rpc 00:12:45.972 ************************************ 00:12:45.972 09:50:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:45.972 09:50:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:45.972 09:50:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:45.972 09:50:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:45.972 ************************************ 00:12:45.972 START TEST nvmf_invalid 00:12:45.972 ************************************ 00:12:45.972 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:45.972 * Looking for test storage... 
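Unwound from the trace above, the nvmftestfini teardown runs roughly the following. A sketch only: killprocess adds uname/ps guards not shown here, and _remove_spdk_ns executes with xtrace disabled, so its namespace-delete body below is an assumption.

    sync
    modprobe -v -r nvme-tcp     # rmmod output above also lists nvme_fabrics/nvme_keyring
    modprobe -v -r nvme-fabrics
    kill 9010 && wait 9010      # killprocess $nvmfpid, pid as logged
    # Strip only the SPDK-tagged rules inserted during setup, keep the rest.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk   # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1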
00:12:45.972 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:45.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.973 --rc genhtml_branch_coverage=1 00:12:45.973 --rc genhtml_function_coverage=1 00:12:45.973 --rc genhtml_legend=1 00:12:45.973 --rc geninfo_all_blocks=1 00:12:45.973 --rc geninfo_unexecuted_blocks=1 00:12:45.973 00:12:45.973 ' 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:45.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.973 --rc genhtml_branch_coverage=1 00:12:45.973 --rc genhtml_function_coverage=1 00:12:45.973 --rc genhtml_legend=1 00:12:45.973 --rc geninfo_all_blocks=1 00:12:45.973 --rc geninfo_unexecuted_blocks=1 00:12:45.973 00:12:45.973 ' 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:45.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.973 --rc genhtml_branch_coverage=1 00:12:45.973 --rc genhtml_function_coverage=1 00:12:45.973 --rc genhtml_legend=1 00:12:45.973 --rc geninfo_all_blocks=1 00:12:45.973 --rc geninfo_unexecuted_blocks=1 00:12:45.973 00:12:45.973 ' 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:45.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.973 --rc genhtml_branch_coverage=1 00:12:45.973 --rc genhtml_function_coverage=1 00:12:45.973 --rc genhtml_legend=1 00:12:45.973 --rc geninfo_all_blocks=1 00:12:45.973 --rc geninfo_unexecuted_blocks=1 00:12:45.973 00:12:45.973 ' 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:45.973 09:50:55 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:45.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:45.973 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:45.974 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:45.974 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:45.974 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:45.974 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:45.974 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:45.974 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:45.974 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:45.974 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:45.974 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:45.974 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:45.974 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:45.974 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.974 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:45.974 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:45.974 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:12:45.974 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:52.545 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:52.545 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:12:52.545 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:52.545 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:52.545 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:52.545 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:52.545 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:52.545 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:12:52.545 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:52.545 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:12:52.545 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:12:52.545 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:12:52.545 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:12:52.545 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:12:52.545 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:12:52.545 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:52.545 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:52.545 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:52.545 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:52.545 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:52.545 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:52.545 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:52.545 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:52.545 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:52.545 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:52.545 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:52.545 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:52.545 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:52.545 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:52.545 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:52.545 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:52.545 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:52.546 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:52.546 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:52.546 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:52.546 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:52.546 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:52.546 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:52.546 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:52.546 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:52.546 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:52.546 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:52.546 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:52.546 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:52.546 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:52.546 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:12:52.546 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:52.546 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:52.546 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:52.546 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:52.546 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:52.546 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:52.546 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:52.546 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:52.546 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:52.546 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:52.546 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:52.546 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:52.546 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:52.546 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:52.546 Found net devices under 0000:af:00.0: cvl_0_0 00:12:52.546 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:52.546 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:52.546 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:52.546 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:52.546 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:52.546 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:52.546 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:52.546 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:52.546 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:52.546 Found net devices under 0000:af:00.1: cvl_0_1 00:12:52.546 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:52.546 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:52.546 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:12:52.546 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:52.546 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:52.546 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:52.546 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:52.546 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:52.546 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:52.546 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:52.546 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:52.546 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:52.546 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:52.546 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:52.546 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:52.546 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:52.546 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:52.546 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:52.546 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:52.546 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:52.546 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:52.546 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:52.546 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:52.546 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:52.546 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:52.546 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:52.546 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:52.546 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:52.804 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:52.804 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:52.804 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms 00:12:52.804 00:12:52.804 --- 10.0.0.2 ping statistics --- 00:12:52.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.804 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:12:52.804 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:52.804 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
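For the nvmf_invalid run the same two-namespace TCP topology is rebuilt. Gathered from the trace above into one block (commands as logged, nvmf/common.sh@250-291):

    # Target port cvl_0_0 moves into its own namespace; initiator port
    # cvl_0_1 stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port, tagging the rule so teardown can grep it out.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator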
00:12:52.804 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:12:52.804 00:12:52.804 --- 10.0.0.1 ping statistics --- 00:12:52.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.804 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:12:52.804 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:52.804 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:12:52.804 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:52.804 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:52.804 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:52.804 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:52.804 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:52.804 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:52.804 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:52.804 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:52.804 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:52.804 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:52.804 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:52.804 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=17359 00:12:52.805 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 17359 00:12:52.805 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:52.805 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 17359 ']' 00:12:52.805 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.805 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:52.805 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:52.805 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:52.805 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:52.805 [2024-12-11 09:51:02.230937] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
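With networking in place, the target is started inside the namespace. Reconstructed from the nvmfappstart trace above; waitforlisten's internals are not shown in the log, so the comment states only its observed effect.

    # Launch nvmf_tgt in the target namespace with shm id 0, all tracepoint
    # groups enabled (0xFFFF), and a 4-core mask, then wait for the RPC socket.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # blocks until /var/tmp/spdk.sock accepts RPCs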
00:12:52.805 [2024-12-11 09:51:02.230980] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:52.805 [2024-12-11 09:51:02.316246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:52.805 [2024-12-11 09:51:02.355442] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:52.805 [2024-12-11 09:51:02.355478] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:52.805 [2024-12-11 09:51:02.355485] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:52.805 [2024-12-11 09:51:02.355491] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:52.805 [2024-12-11 09:51:02.355495] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:52.805 [2024-12-11 09:51:02.356903] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:52.805 [2024-12-11 09:51:02.357016] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:12:52.805 [2024-12-11 09:51:02.357145] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.805 [2024-12-11 09:51:02.357146] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:12:53.062 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:53.062 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:12:53.062 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:53.062 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:53.062 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:53.062 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:53.062 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:53.062 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode5957 00:12:53.320 [2024-12-11 09:51:02.671463] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:53.320 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:53.320 { 00:12:53.320 "nqn": "nqn.2016-06.io.spdk:cnode5957", 00:12:53.320 "tgt_name": "foobar", 00:12:53.320 "method": "nvmf_create_subsystem", 00:12:53.320 "req_id": 1 00:12:53.320 } 00:12:53.320 Got JSON-RPC error response 00:12:53.320 response: 00:12:53.320 { 00:12:53.320 "code": -32603, 00:12:53.320 "message": "Unable to find target foobar" 00:12:53.320 }' 00:12:53.320 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:53.320 { 00:12:53.320 "nqn": "nqn.2016-06.io.spdk:cnode5957", 00:12:53.320 "tgt_name": "foobar", 00:12:53.320 "method": "nvmf_create_subsystem", 00:12:53.320 "req_id": 1 00:12:53.320 } 00:12:53.320 Got JSON-RPC error response 00:12:53.320 
response: 00:12:53.320 { 00:12:53.320 "code": -32603, 00:12:53.320 "message": "Unable to find target foobar" 00:12:53.320 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:53.320 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:53.320 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode8475 00:12:53.320 [2024-12-11 09:51:02.880165] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8475: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:53.578 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:53.578 { 00:12:53.578 "nqn": "nqn.2016-06.io.spdk:cnode8475", 00:12:53.578 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:53.578 "method": "nvmf_create_subsystem", 00:12:53.578 "req_id": 1 00:12:53.578 } 00:12:53.578 Got JSON-RPC error response 00:12:53.578 response: 00:12:53.578 { 00:12:53.578 "code": -32602, 00:12:53.578 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:53.578 }' 00:12:53.578 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:53.578 { 00:12:53.578 "nqn": "nqn.2016-06.io.spdk:cnode8475", 00:12:53.578 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:53.578 "method": "nvmf_create_subsystem", 00:12:53.578 "req_id": 1 00:12:53.578 } 00:12:53.578 Got JSON-RPC error response 00:12:53.578 response: 00:12:53.578 { 00:12:53.578 "code": -32602, 00:12:53.578 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:53.578 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:53.578 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:53.578 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode22324 00:12:53.578 [2024-12-11 09:51:03.084833] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22324: invalid model number 'SPDK_Controller' 00:12:53.578 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:53.578 { 00:12:53.578 "nqn": "nqn.2016-06.io.spdk:cnode22324", 00:12:53.578 "model_number": "SPDK_Controller\u001f", 00:12:53.578 "method": "nvmf_create_subsystem", 00:12:53.578 "req_id": 1 00:12:53.578 } 00:12:53.578 Got JSON-RPC error response 00:12:53.578 response: 00:12:53.578 { 00:12:53.578 "code": -32602, 00:12:53.578 "message": "Invalid MN SPDK_Controller\u001f" 00:12:53.578 }' 00:12:53.578 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:53.578 { 00:12:53.578 "nqn": "nqn.2016-06.io.spdk:cnode22324", 00:12:53.578 "model_number": "SPDK_Controller\u001f", 00:12:53.578 "method": "nvmf_create_subsystem", 00:12:53.578 "req_id": 1 00:12:53.578 } 00:12:53.578 Got JSON-RPC error response 00:12:53.578 response: 00:12:53.578 { 00:12:53.578 "code": -32602, 00:12:53.578 "message": "Invalid MN SPDK_Controller\u001f" 00:12:53.578 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:53.578 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:53.578 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:12:53.578 09:51:03 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:53.578 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:53.578 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:53.578 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:53.578 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.578 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:12:53.578 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:12:53.578 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:12:53.578 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.578 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.578 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:12:53.578 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:53.578 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:12:53.578 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.578 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.578 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:12:53.578 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:53.578 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:12:53.578 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.578 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.578 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:12:53.578 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:12:53.578 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:12:53.578 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.578 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.578 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.836 09:51:03 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:53.836 
09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:12:53.836 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.837 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.837 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:12:53.837 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:12:53.837 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:12:53.837 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.837 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.837 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:12:53.837 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:12:53.837 
09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:12:53.837 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.837 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.837 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:12:53.837 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:53.837 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:12:53.837 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.837 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.837 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:12:53.837 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:53.837 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:12:53.837 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.837 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.837 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 7 == \- ]] 00:12:53.837 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '7+kokO$PWn'\'']KFZAl5/hh' 00:12:53.837 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '7+kokO$PWn'\'']KFZAl5/hh' nqn.2016-06.io.spdk:cnode3651 00:12:54.095 [2024-12-11 09:51:03.442031] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3651: invalid serial number '7+kokO$PWn']KFZAl5/hh' 00:12:54.095 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:54.095 { 00:12:54.095 "nqn": "nqn.2016-06.io.spdk:cnode3651", 00:12:54.095 "serial_number": "7+kokO$PWn'\'']KFZAl5/hh", 00:12:54.095 "method": "nvmf_create_subsystem", 00:12:54.095 "req_id": 1 00:12:54.095 } 00:12:54.095 Got JSON-RPC error response 00:12:54.095 response: 00:12:54.095 { 00:12:54.095 "code": -32602, 00:12:54.095 "message": "Invalid SN 7+kokO$PWn'\'']KFZAl5/hh" 00:12:54.095 }' 00:12:54.095 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:54.095 { 00:12:54.095 "nqn": "nqn.2016-06.io.spdk:cnode3651", 00:12:54.095 "serial_number": "7+kokO$PWn']KFZAl5/hh", 00:12:54.095 "method": "nvmf_create_subsystem", 00:12:54.095 "req_id": 1 00:12:54.095 } 00:12:54.095 Got JSON-RPC error response 00:12:54.095 response: 00:12:54.095 { 00:12:54.095 "code": -32602, 00:12:54.095 "message": "Invalid SN 7+kokO$PWn']KFZAl5/hh" 00:12:54.095 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:54.095 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:54.095 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:54.095 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' 
'75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:54.095 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:54.095 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:54.095 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:54.095 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.095 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:12:54.095 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:54.095 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:12:54.095 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.095 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.095 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:12:54.095 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:54.095 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:12:54.095 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.095 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.095 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:12:54.095 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:12:54.095 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:12:54.095 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.095 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.095 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:54.096 
09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:12:54.096 
09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.096 
09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.096 
09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:12:54.096 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:54.097 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:12:54.097 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.097 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.097 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:12:54.097 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:54.354 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:12:54.354 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.354 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.354 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:12:54.354 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:54.354 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:12:54.354 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.354 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.354 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:12:54.354 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:54.355 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:12:54.355 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.355 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.355 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:12:54.355 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:54.355 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:12:54.355 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.355 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.355 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:12:54.355 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:54.355 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 
00:12:54.355 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.355 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.355 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:12:54.355 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:54.355 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:12:54.355 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.355 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.355 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:12:54.355 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:12:54.355 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:12:54.355 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.355 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.355 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:12:54.355 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:54.355 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:12:54.355 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.355 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.355 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:12:54.355 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:54.355 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:12:54.355 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.355 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.355 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:12:54.355 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:12:54.355 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:12:54.355 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.355 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.355 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:12:54.355 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:12:54.355 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:12:54.355 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.355 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.355 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:12:54.355 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x41' 00:12:54.355 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:12:54.355 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.355 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.355 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:12:54.355 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:12:54.355 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:12:54.355 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.355 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.355 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ q == \- ]] 00:12:54.355 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'q[}J+Pv /dev/null' 00:12:56.417 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.952 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:58.952 00:12:58.952 real 0m12.933s 00:12:58.952 user 0m18.911s 00:12:58.952 sys 0m6.014s 00:12:58.952 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:58.952 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:58.952 ************************************ 00:12:58.952 END TEST nvmf_invalid 00:12:58.952 ************************************ 00:12:58.952 09:51:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:58.952 09:51:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:58.952 09:51:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:58.952 09:51:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:58.952 ************************************ 00:12:58.952 START TEST nvmf_connect_stress 00:12:58.952 ************************************ 00:12:58.952 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:58.952 * Looking for test storage... 
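Condensed, the negative checks the nvmf_invalid run above exercised — the flags, NQNs, and error substrings are verbatim from the trace; the rpc.py path is shortened:

rpc=./scripts/rpc.py                      # full workspace path in the log
out=$($rpc nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode5957 2>&1) || true
[[ $out == *'Unable to find target'* ]]   # -32603: unknown target name
out=$($rpc nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode8475 2>&1) || true
[[ $out == *'Invalid SN'* ]]              # -32602: control character in the serial number
out=$($rpc nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode22324 2>&1) || true
[[ $out == *'Invalid MN'* ]]              # -32602: control character in the model number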
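The long printf/echo runs are gen_random_s building oversized strings one character at a time; an approximate re-implementation (the random selection is inferred from the trace, and the leading-'-' branch at invalid.sh line 28 is not visible in this excerpt):

gen_random_s() {
    local length=$1 string= ll code ch
    for (( ll = 0; ll < length; ll++ )); do
        code=$(( 32 + RANDOM % 96 ))       # printable ASCII 32..127, matching the chars=() array
        printf -v ch '\\x%x' "$code"       # e.g. 55 -> '\x37'
        printf -v ch "$ch"                 # expand the escape into the actual character
        string+=$ch
    done
    [[ ${string:0:1} == - ]] || echo "$string"   # invalid.sh:28 rejects a leading '-'
}

Here gen_random_s 21 yielded the serial 7+kokO$PWn']KFZAl5/hh and gen_random_s 41 a 41-character string (its use falls in the truncated part of this log); both overflow their NVMe fields (20-byte SN, 40-byte MN), which is what the Invalid SN/MN responses assert.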
00:12:58.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:58.952 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:58.952 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:12:58.952 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:58.952 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:58.952 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:58.952 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:58.952 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:58.952 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:12:58.952 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:12:58.952 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:12:58.952 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:12:58.952 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:12:58.952 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:12:58.952 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:12:58.952 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:58.952 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:12:58.952 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:12:58.952 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:58.952 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:58.952 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:12:58.952 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:12:58.952 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:58.952 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:12:58.952 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:12:58.952 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:12:58.952 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:12:58.952 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:58.952 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:12:58.952 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:12:58.952 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:58.952 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:58.952 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:12:58.952 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:58.952 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:58.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.952 --rc genhtml_branch_coverage=1 00:12:58.952 --rc genhtml_function_coverage=1 00:12:58.952 --rc genhtml_legend=1 00:12:58.952 --rc geninfo_all_blocks=1 00:12:58.952 --rc geninfo_unexecuted_blocks=1 00:12:58.952 00:12:58.952 ' 00:12:58.952 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:58.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.952 --rc genhtml_branch_coverage=1 00:12:58.952 --rc genhtml_function_coverage=1 00:12:58.952 --rc genhtml_legend=1 00:12:58.952 --rc geninfo_all_blocks=1 00:12:58.952 --rc geninfo_unexecuted_blocks=1 00:12:58.952 00:12:58.952 ' 00:12:58.952 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:58.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.952 --rc genhtml_branch_coverage=1 00:12:58.952 --rc genhtml_function_coverage=1 00:12:58.952 --rc genhtml_legend=1 00:12:58.952 --rc geninfo_all_blocks=1 00:12:58.952 --rc geninfo_unexecuted_blocks=1 00:12:58.952 00:12:58.952 ' 00:12:58.952 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:58.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.953 --rc genhtml_branch_coverage=1 00:12:58.953 --rc genhtml_function_coverage=1 00:12:58.953 --rc genhtml_legend=1 00:12:58.953 --rc geninfo_all_blocks=1 00:12:58.953 --rc geninfo_unexecuted_blocks=1 00:12:58.953 00:12:58.953 ' 00:12:58.953 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:58.953 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:12:58.953 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:58.953 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:58.953 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:58.953 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:58.953 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:58.953 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:58.953 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:58.953 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:58.953 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:58.953 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:58.953 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:12:58.953 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:12:58.953 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:58.953 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:58.953 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:58.953 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:58.953 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:58.953 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:12:58.953 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:58.953 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:58.953 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:58.953 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.953 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.953 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.953 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:58.953 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.953 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:12:58.953 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:58.953 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:58.953 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:58.953 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:58.953 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:58.953 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:12:58.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:58.953 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:58.953 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:58.953 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:58.953 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:58.953 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:58.953 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:58.953 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:58.953 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:58.953 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:58.953 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.953 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:58.953 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.953 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:58.953 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:58.953 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:12:58.953 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:05.521 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:05.521 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:13:05.521 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:05.521 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:05.521 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:05.521 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:05.521 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:05.521 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:13:05.521 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:05.521 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:13:05.521 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:13:05.521 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:13:05.521 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:13:05.521 09:51:14 
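Note on the shell error captured above: nvmf/common.sh line 33 logs "integer expression expected" because the guard expands to '[' '' -eq 1 ']' and the numeric -eq operator needs an integer on both sides. The test still evaluates false, so the suite carries on, but the message recurs every time the file is sourced. A minimal fix sketch, assuming plain bash; SPDK_TEST_FLAG is a stand-in name, since the real variable tested at line 33 is not visible in this trace:

    # before: [ "$SPDK_TEST_FLAG" -eq 1 ]  -> "integer expression expected" when unset
    # after: default the expansion so -eq always compares two integers
    if [ "${SPDK_TEST_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi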
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:13:05.521 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:13:05.521 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:05.521 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:05.521 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:05.521 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:05.521 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:05.521 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:05.521 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:05.521 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:05.521 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:05.521 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:05.521 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:05.521 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:05.521 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:05.521 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:05.521 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:05.521 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:05.521 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:05.521 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:05.521 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:05.521 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:05.521 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:05.521 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:05.521 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:05.521 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:05.521 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:05.521 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:05.521 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:05.521 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:05.521 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:05.521 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:05.521 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:05.521 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:05.522 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:05.522 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:05.522 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:05.522 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:05.522 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:05.522 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:05.522 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:05.522 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:05.522 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:05.522 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:05.522 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:05.522 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:05.522 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:05.522 Found net devices under 0000:af:00.0: cvl_0_0 00:13:05.522 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:05.522 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:05.522 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:05.522 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:05.522 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:05.522 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:05.522 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:05.522 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:05.522 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:05.522 Found net devices under 0000:af:00.1: cvl_0_1 00:13:05.522 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
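Both E810 ports (vendor 0x8086, device 0x159b) matched the allowlist assembled above, and each function's kernel interface is then resolved through sysfs, which is where the cvl_0_0 and cvl_0_1 names come from. The discovery pass reduces to a sysfs walk of this shape, with the two addresses from this run hard-coded for illustration:

    for pci in 0000:af:00.0 0000:af:00.1; do
        # a bound network function lists its netdev names under .../net/
        for path in "/sys/bus/pci/devices/$pci/net/"*; do
            [ -e "$path" ] || continue      # driver bound but no netdev exposed
            echo "Found net devices under $pci: ${path##*/}"
        done
    done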
-- # net_devs+=("${pci_net_devs[@]}") 00:13:05.522 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:05.522 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:13:05.522 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:05.522 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:05.522 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:05.522 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:05.522 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:05.522 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:05.522 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:05.522 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:05.522 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:05.522 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:05.522 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:05.522 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:05.522 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:05.522 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:05.522 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:05.522 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:05.522 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:05.522 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:05.522 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:05.522 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:05.522 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:05.522 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:05.781 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:05.781 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:05.781 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:05.781 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:05.781 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:05.781 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:13:05.781 00:13:05.781 --- 10.0.0.2 ping statistics --- 00:13:05.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:05.781 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:13:05.781 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:05.781 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:05.781 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:13:05.781 00:13:05.781 --- 10.0.0.1 ping statistics --- 00:13:05.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:05.781 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:13:05.781 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:05.781 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:13:05.781 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:05.781 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:05.781 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:05.781 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:05.781 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:05.781 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:05.781 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:05.781 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:05.781 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:05.781 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:05.781 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:05.781 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=22432 00:13:05.781 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:05.781 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 22432 00:13:05.781 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 22432 ']' 00:13:05.781 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:05.781 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:05.781 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
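nvmf_tcp_init has now split the two ports into a point-to-point rig: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), the ipts wrapper opens TCP port 4420 with a rule tagged SPDK_NVMF so teardown can strip it again, and the two pings prove the path in both directions before any NVMe traffic flows. The same topology, reduced to its bare commands (names and addresses taken from this run):

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # tag the ACCEPT rule so 'iptables-save | grep -v SPDK_NVMF | iptables-restore'
    # can undo it wholesale at teardown
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator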
/var/tmp/spdk.sock...' 00:13:05.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:05.781 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:05.781 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:05.781 [2024-12-11 09:51:15.239342] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:13:05.781 [2024-12-11 09:51:15.239387] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:05.781 [2024-12-11 09:51:15.323081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:06.039 [2024-12-11 09:51:15.361907] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:06.039 [2024-12-11 09:51:15.361939] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:06.039 [2024-12-11 09:51:15.361947] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:06.039 [2024-12-11 09:51:15.361953] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:06.039 [2024-12-11 09:51:15.361958] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:06.039 [2024-12-11 09:51:15.363398] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:13:06.039 [2024-12-11 09:51:15.363507] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:13:06.039 [2024-12-11 09:51:15.363508] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:13:06.605 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:06.605 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:13:06.605 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:06.605 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:06.605 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:06.605 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:06.605 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:06.605 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.605 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:06.605 [2024-12-11 09:51:16.109992] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:06.605 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.605 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:06.605 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
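nvmfappstart then boots the target inside that namespace: -m 0xE (binary 1110) pins the app to cores 1-3, which is exactly the three reactor threads reported above, -e 0xFFFF enables every tracepoint group, and waitforlisten blocks until the RPC socket answers. A simplified sketch of the launch-and-wait pattern (the real waitforlisten in autotest_common.sh also handles retries and PID liveness checks):

    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # poll the default RPC socket until the app is ready to serve requests
    until scripts/rpc.py -t 1 rpc_get_methods &> /dev/null; do
        sleep 0.5
    done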
00:13:06.605 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:06.605 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.605 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:06.605 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.605 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:06.605 [2024-12-11 09:51:16.130199] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:06.605 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.605 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:06.605 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.605 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:06.605 NULL1 00:13:06.605 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.605 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=22542 00:13:06.605 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:06.605 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:06.605 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:06.605 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:06.605 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:06.605 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:06.605 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:06.605 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:06.605 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:06.605 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:06.605 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:06.605 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:06.605 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:06.605 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:06.605 09:51:16 
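With the app listening, connect_stress.sh provisions the fabric over RPC and starts the stress tool: a TCP transport (options -o -u 8192 exactly as traced), subsystem cnode1 with serial SPDK00000000000001 and at most 10 namespaces, a listener on 10.0.0.2:4420, and a null bdev as cheap backing storage, after which the connect_stress binary hammers the listener for 10 seconds (-t 10) from core 0 (-c 0x1). rpc_cmd is the test wrapper around scripts/rpc.py; the equivalent direct calls would be:

    rpc=scripts/rpc.py        # default RPC socket /var/tmp/spdk.sock assumed
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10    # -a: allow any host, -m: max namespaces
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512  # 1000 MB null bdev, 512-byte blocks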
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:06.605 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:06.605 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:06.863 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:06.863 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:06.863 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:06.863 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:06.863 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:06.863 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:06.863 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:06.863 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:06.863 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:06.863 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:06.863 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:06.863 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:06.863 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:06.863 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:06.863 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:06.863 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:06.863 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:06.863 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:06.863 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:06.863 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:06.863 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:06.863 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:06.863 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:06.863 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:06.863 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:06.863 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:06.863 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:06.863 09:51:16 
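The for i in $(seq 1 20) / cat pairs above build up rpc.txt, a batch of twenty RPC payloads to be replayed while the stress tool runs. The heredoc body itself is elided by the xtrace, so the payload in this sketch is a harmless stand-in, but the batching pattern matches the trace:

    rpcs=$testdir/rpc.txt       # $testdir: test/nvmf/target in this run
    rm -f "$rpcs"
    for i in $(seq 1 20); do
        # the real script appends one heredoc per pass; its body is not visible
        # in the trace, so bdev_get_bdevs stands in as a representative RPC
        echo "bdev_get_bdevs" >> "$rpcs"
    done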
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 22542 00:13:06.863 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:06.863 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.863 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:07.121 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.121 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 22542 00:13:07.121 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:07.121 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.121 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:07.378 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.378 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 22542 00:13:07.378 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:07.378 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.378 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:07.635 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.893 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 22542 00:13:07.893 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:07.893 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.893 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:08.150 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.150 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 22542 00:13:08.150 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:08.150 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.150 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:08.408 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.408 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 22542 00:13:08.408 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:08.408 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.408 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:08.666 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.666 09:51:18 
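The @34/@35 pairs are the supervision loop: while kill -0 confirms the stress process (PERF_PID 22542) is still alive, the batch in rpc.txt is replayed at the target, and the iterations repeat until connect_stress exits at the end of its 10-second run. Reconstructed from the traced line numbers, and assuming rpc_cmd consumes the batch on stdin (consistent with @35 showing no arguments):

    while kill -0 "$PERF_PID"; do   # existence probe only, no signal delivered;
        rpc_cmd < "$rpcs"           # the final failing probe prints the
    done                            # "(22542) - No such process" line seen below
    wait "$PERF_PID"                # reap the stress tool's exit status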
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 22542 00:13:16.452 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:16.452 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.452 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.017 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:17.017 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.017 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 22542 00:13:17.017 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (22542) - No such process 00:13:17.017 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 22542 00:13:17.017 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:17.017 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:17.017 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:17.017 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:17.017 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:17.017 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:17.017 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:17.017 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:17.017 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:17.017 rmmod nvme_tcp 00:13:17.017 rmmod nvme_fabrics 00:13:17.017 rmmod nvme_keyring 00:13:17.017 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:17.017 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:17.017 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:13:17.017 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 22432 ']' 00:13:17.017 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 22432 00:13:17.017 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 22432 ']' 00:13:17.017 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 22432 00:13:17.017 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:13:17.017 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:17.017 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 22432 00:13:17.017 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:17.017 
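nvmftestfini unwinds in reverse: sync, unload nvme-tcp (which cascades into the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above), then killprocess on target PID 22432. killprocess is deliberately careful: before signalling, it re-reads the comm name of the PID (reactor_1 here), so a recycled PID or a sudo wrapper never gets killed by mistake. A condensed sketch of that guard:

    killprocess() {
        local pid=$1 name
        # re-check what is actually running under this PID before signalling
        name=$(ps --no-headers -o comm= "$pid") || return 0   # already gone
        [ "$name" = sudo ] && return 1                        # never kill a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid" 2> /dev/null
    }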
09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:17.017 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 22432' 00:13:17.017 killing process with pid 22432 00:13:17.017 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 22432 00:13:17.017 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 22432 00:13:17.275 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:17.275 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:17.275 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:17.276 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:13:17.276 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:13:17.276 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:17.276 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:13:17.276 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:17.276 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:17.276 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:17.276 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:17.276 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:19.180 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:19.180 00:13:19.180 real 0m20.572s 00:13:19.180 user 0m41.621s 00:13:19.180 sys 0m9.206s 00:13:19.180 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:19.180 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.180 ************************************ 00:13:19.180 END TEST nvmf_connect_stress 00:13:19.180 ************************************ 00:13:19.180 09:51:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:19.180 09:51:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:19.180 09:51:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:19.180 09:51:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:19.438 ************************************ 00:13:19.438 START TEST nvmf_fused_ordering 00:13:19.438 ************************************ 00:13:19.438 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:19.438 * Looking for test storage... 
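The real/user/sys block and the starred banners above come from run_test in autotest_common.sh, which wraps each suite in time, prints matching START/END markers, and is immediately re-entered here with fused_ordering.sh for the next suite. Stripped to its essentials the wrapper behaves like this (the real helper also validates arguments and records exit codes):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"     # the suite itself; time emits the real/user/sys block
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }

    run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp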
00:13:19.438 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:19.438 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:19.438 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:13:19.438 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:19.438 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:19.438 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:19.438 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:19.438 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:19.438 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:19.438 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:19.438 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:19.438 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:19.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.439 --rc genhtml_branch_coverage=1 00:13:19.439 --rc genhtml_function_coverage=1 00:13:19.439 --rc genhtml_legend=1 00:13:19.439 --rc geninfo_all_blocks=1 00:13:19.439 --rc geninfo_unexecuted_blocks=1 00:13:19.439 00:13:19.439 ' 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:19.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.439 --rc genhtml_branch_coverage=1 00:13:19.439 --rc genhtml_function_coverage=1 00:13:19.439 --rc genhtml_legend=1 00:13:19.439 --rc geninfo_all_blocks=1 00:13:19.439 --rc geninfo_unexecuted_blocks=1 00:13:19.439 00:13:19.439 ' 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:19.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.439 --rc genhtml_branch_coverage=1 00:13:19.439 --rc genhtml_function_coverage=1 00:13:19.439 --rc genhtml_legend=1 00:13:19.439 --rc geninfo_all_blocks=1 00:13:19.439 --rc geninfo_unexecuted_blocks=1 00:13:19.439 00:13:19.439 ' 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:19.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.439 --rc genhtml_branch_coverage=1 00:13:19.439 --rc genhtml_function_coverage=1 00:13:19.439 --rc genhtml_legend=1 00:13:19.439 --rc geninfo_all_blocks=1 00:13:19.439 --rc geninfo_unexecuted_blocks=1 00:13:19.439 00:13:19.439 ' 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
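The scripts/common.sh expansion traced above is the version gate for lcov: lt 1.15 2 splits both versions on IFS=.-, validates each field with decimal, and lets the first unequal field decide, which selects the legacy --rc lcov_* option set for lcov releases older than 2. The @364-@368 comparison reduces to roughly:

    lt() {
        local IFS=.- v ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # first unequal field newer: not <
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # first unequal field older: <
        done
        return 1                                              # equal throughout: not <
    }

    lt 1.15 2 && echo "lcov predates 2.x: keep the legacy option set"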
00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- #
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:19.439 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:19.439 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:19.440 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:13:19.440 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:13:26.008 09:51:35 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:26.008 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:26.008 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:26.008 Found net devices under 0000:af:00.0: cvl_0_0 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:26.008 Found net devices under 0000:af:00.1: cvl_0_1
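For orientation, gather_supported_nvmf_pci_devs above is matching each PCI function against a table of supported NIC device IDs and then resolving the matches to kernel net devices through sysfs. A condensed, illustrative sketch of that shape follows; it reads sysfs directly and only checks the two Intel E810 IDs seen in this run (the real function builds a pci_bus_cache and also knows the x722 and Mellanox IDs), and the variable names are mine:

for pci in /sys/bus/pci/devices/*; do
  vendor=$(<"$pci/vendor") device=$(<"$pci/device")
  # 0x8086:0x159b is the E810 function this run matched twice (af:00.0 and af:00.1)
  [[ $vendor == 0x8086 && ( $device == 0x159b || $device == 0x1592 ) ]] || continue
  for net in "$pci"/net/*; do
    # each entry under <pci>/net/ is the kernel interface bound to that function
    [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
  done
done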
00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:26.008 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:26.009 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:26.009 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:26.009 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:26.009 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:26.009 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:26.009 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:26.009 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:26.009 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:26.009 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:26.009 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:26.009 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:26.009 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:26.009 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:26.009 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:26.009 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:26.009 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:26.009 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:26.268 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:26.268 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:26.268 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:26.268 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:26.268 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:26.268 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:13:26.268 00:13:26.268 --- 10.0.0.2 ping statistics --- 00:13:26.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:26.268 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:13:26.268 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:26.268 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:26.268 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:13:26.268 00:13:26.268 --- 10.0.0.1 ping statistics --- 00:13:26.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:26.268 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms
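The two clean pings close out nvmf_tcp_init. Condensed to its effective commands, the topology it just built is the following; the interface names, namespace, and addresses are taken from this run's trace, but this is an illustrative summary, not nvmf/common.sh verbatim:

ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1   # start from clean ports
ip netns add cvl_0_0_ns_spdk                           # target runs in its own netns
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # first E810 port -> target ns
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator port stays in root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                     # root ns -> target ns, over the physical link
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # and back

Splitting the two physical ports across namespaces forces initiator and target traffic onto the wire between them rather than the loopback path, which is the point of the phy variant of this test.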
00:13:26.268 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:26.268 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:13:26.268 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:26.268 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:26.268 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:26.268 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:26.268 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:26.268 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:26.268 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:26.268 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:26.268 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:26.268 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:26.268 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:26.268 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=28110 00:13:26.268 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:26.268 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 28110 00:13:26.268 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 28110 ']' 00:13:26.268 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:26.268 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:26.268 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:26.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:26.268 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:26.268 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:26.268 [2024-12-11 09:51:35.750962] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:13:26.268 [2024-12-11 09:51:35.751006] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:26.268 [2024-12-11 09:51:35.831954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.533 [2024-12-11 09:51:35.869677] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:26.533 [2024-12-11 09:51:35.869710] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:26.533 [2024-12-11 09:51:35.869717] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:26.533 [2024-12-11 09:51:35.869723] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:26.533 [2024-12-11 09:51:35.869728] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:26.533 [2024-12-11 09:51:35.870283] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:13:26.533 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:26.533 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:13:26.533 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:26.533 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:26.533 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:26.533 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:26.533 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:26.533 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.533 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:26.533 [2024-12-11 09:51:36.017539] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:26.533 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.533 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:26.533 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.533 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:26.533 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591
-- # [[ 0 == 0 ]] 00:13:26.533 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:26.533 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.533 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:26.533 [2024-12-11 09:51:36.037717] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:26.533 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.533 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:26.533 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.533 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:26.533 NULL1 00:13:26.534 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.534 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:26.534 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.534 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:26.534 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.534 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:26.534 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.534 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:26.534 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.534 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:26.534 [2024-12-11 09:51:36.096886] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
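Stripped of the rpc_cmd plumbing (rpc_cmd in the trace is the harness wrapper that invokes scripts/rpc.py against /var/tmp/spdk.sock), the target-side setup and the initiator run above come down to the sequence below; every value is the one used in this run, and the relative paths assume the SPDK repo root:

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py bdev_null_create NULL1 1000 512   # 1000 MB null bdev, 512 B blocks: the "size: 1GB" namespace below
scripts/rpc.py bdev_wait_for_examine
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
# Initiator side: connect over NVMe/TCP and exercise fused command ordering
# against the namespace; the fused_ordering(N) lines that follow are its
# per-iteration progress output.
test/nvme/fused_ordering/fused_ordering \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'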
00:13:26.534 [2024-12-11 09:51:36.096917] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid28311 ] 00:13:27.099 Attached to nqn.2016-06.io.spdk:cnode1 00:13:27.099 Namespace ID: 1 size: 1GB
00:13:27.099 fused_ordering(0) [fused_ordering(1) through fused_ordering(958) elided: the counter advances by exactly one per line with no gaps or reordering, Jenkins timestamps 00:13:27.099 through 00:13:28.443]
00:13:28.443 fused_ordering(959) 00:13:28.443 fused_ordering(960) 00:13:28.443 fused_ordering(961) 00:13:28.443 fused_ordering(962) 00:13:28.443 fused_ordering(963) 00:13:28.443 fused_ordering(964) 00:13:28.443 fused_ordering(965) 00:13:28.443 fused_ordering(966) 00:13:28.443 fused_ordering(967) 00:13:28.443 fused_ordering(968) 00:13:28.443 fused_ordering(969) 00:13:28.443 fused_ordering(970) 00:13:28.443 fused_ordering(971) 00:13:28.443 fused_ordering(972) 00:13:28.443 fused_ordering(973) 00:13:28.443 fused_ordering(974) 00:13:28.443 fused_ordering(975) 00:13:28.443 fused_ordering(976) 00:13:28.443 fused_ordering(977) 00:13:28.443 fused_ordering(978) 00:13:28.443 fused_ordering(979) 00:13:28.443 fused_ordering(980) 00:13:28.443 fused_ordering(981) 00:13:28.443 fused_ordering(982) 00:13:28.443 fused_ordering(983) 00:13:28.443 fused_ordering(984) 00:13:28.443 fused_ordering(985) 00:13:28.443 fused_ordering(986) 00:13:28.443 fused_ordering(987) 00:13:28.443 fused_ordering(988) 00:13:28.443 fused_ordering(989) 00:13:28.443 fused_ordering(990) 00:13:28.443 fused_ordering(991) 00:13:28.443 fused_ordering(992) 00:13:28.443 fused_ordering(993) 00:13:28.443 fused_ordering(994) 00:13:28.443 fused_ordering(995) 00:13:28.443 fused_ordering(996) 00:13:28.443 fused_ordering(997) 00:13:28.443 fused_ordering(998) 00:13:28.443 fused_ordering(999) 00:13:28.443 fused_ordering(1000) 00:13:28.443 fused_ordering(1001) 00:13:28.443 fused_ordering(1002) 00:13:28.443 fused_ordering(1003) 00:13:28.443 fused_ordering(1004) 00:13:28.443 fused_ordering(1005) 00:13:28.443 fused_ordering(1006) 00:13:28.443 fused_ordering(1007) 00:13:28.443 fused_ordering(1008) 00:13:28.443 fused_ordering(1009) 00:13:28.443 fused_ordering(1010) 00:13:28.443 fused_ordering(1011) 00:13:28.443 fused_ordering(1012) 00:13:28.443 fused_ordering(1013) 00:13:28.443 fused_ordering(1014) 00:13:28.443 fused_ordering(1015) 00:13:28.443 fused_ordering(1016) 00:13:28.443 fused_ordering(1017) 00:13:28.443 fused_ordering(1018) 00:13:28.443 fused_ordering(1019) 00:13:28.443 fused_ordering(1020) 00:13:28.443 fused_ordering(1021) 00:13:28.443 fused_ordering(1022) 00:13:28.443 fused_ordering(1023) 00:13:28.443 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:28.443 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:28.443 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:28.443 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:13:28.443 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:28.443 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:13:28.443 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:28.443 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:28.443 rmmod nvme_tcp 00:13:28.443 rmmod nvme_fabrics 00:13:28.443 rmmod nvme_keyring 00:13:28.702 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:28.702 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:13:28.702 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:13:28.702 09:51:38 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 28110 ']' 00:13:28.702 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 28110 00:13:28.702 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 28110 ']' 00:13:28.702 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 28110 00:13:28.702 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:13:28.702 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:28.702 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 28110 00:13:28.702 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:28.702 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:28.702 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 28110' 00:13:28.702 killing process with pid 28110 00:13:28.702 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 28110 00:13:28.702 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 28110 00:13:28.702 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:28.702 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:28.702 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:28.702 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:13:28.702 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:13:28.702 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:28.702 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:13:28.702 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:28.702 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:28.702 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:28.702 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:28.702 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:31.237 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:31.237 00:13:31.237 real 0m11.559s 00:13:31.237 user 0m5.353s 00:13:31.237 sys 0m6.464s 00:13:31.237 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:31.237 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:31.237 ************************************ 00:13:31.237 END TEST nvmf_fused_ordering 00:13:31.237 ************************************ 
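
The teardown above is the shared nvmftestfini path from nvmf/common.sh. A condensed sketch of the same steps as standalone shell, with the PID, namespace and interface names taken from this run (the retry loop and the namespace removal approximate the helpers, not their exact bodies):

  # nvmfcleanup: unload the kernel initiator stack; removing nvme-tcp also
  # drops the nvme_fabrics/nvme_keyring dependencies, as the rmmod lines show.
  set +e
  for i in {1..20}; do
    modprobe -v -r nvme-tcp && break
  done
  set -e

  # killprocess: confirm the nvmf_tgt reactor is still alive, terminate it, reap it.
  kill -0 28110 && kill 28110 && wait 28110

  # iptr: restore iptables minus the SPDK_NVMF-tagged rules the test inserted.
  iptables-save | grep -v SPDK_NVMF | iptables-restore

  # remove_spdk_ns / address cleanup: drop the target netns, flush the initiator port.
  ip netns delete cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_1
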
00:13:31.237 09:51:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:31.237 09:51:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:31.237 09:51:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:31.237 09:51:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:31.237 ************************************ 00:13:31.237 START TEST nvmf_ns_masking 00:13:31.237 ************************************ 00:13:31.237 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:31.237 * Looking for test storage... 00:13:31.237 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:31.237 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:31.237 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:13:31.237 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:31.237 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:31.237 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:31.237 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:31.237 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:31.237 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:13:31.237 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:13:31.237 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:13:31.237 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:13:31.237 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:13:31.237 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:13:31.237 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:13:31.237 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:31.237 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:13:31.237 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:13:31.237 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:31.237 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:31.237 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:13:31.237 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:13:31.237 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:31.237 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:13:31.237 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:13:31.237 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:13:31.237 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:13:31.237 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:31.237 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:13:31.237 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:13:31.237 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:31.237 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:31.237 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:13:31.237 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:31.237 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:31.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:31.237 --rc genhtml_branch_coverage=1 00:13:31.237 --rc genhtml_function_coverage=1 00:13:31.237 --rc genhtml_legend=1 00:13:31.237 --rc geninfo_all_blocks=1 00:13:31.237 --rc geninfo_unexecuted_blocks=1 00:13:31.237 00:13:31.237 ' 00:13:31.237 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:31.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:31.237 --rc genhtml_branch_coverage=1 00:13:31.237 --rc genhtml_function_coverage=1 00:13:31.237 --rc genhtml_legend=1 00:13:31.237 --rc geninfo_all_blocks=1 00:13:31.237 --rc geninfo_unexecuted_blocks=1 00:13:31.237 00:13:31.238 ' 00:13:31.238 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:31.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:31.238 --rc genhtml_branch_coverage=1 00:13:31.238 --rc genhtml_function_coverage=1 00:13:31.238 --rc genhtml_legend=1 00:13:31.238 --rc geninfo_all_blocks=1 00:13:31.238 --rc geninfo_unexecuted_blocks=1 00:13:31.238 00:13:31.238 ' 00:13:31.238 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:31.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:31.238 --rc genhtml_branch_coverage=1 00:13:31.238 --rc genhtml_function_coverage=1 00:13:31.238 --rc genhtml_legend=1 00:13:31.238 --rc geninfo_all_blocks=1 00:13:31.238 --rc geninfo_unexecuted_blocks=1 00:13:31.238 00:13:31.238 ' 00:13:31.238 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:31.238 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s
09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420
09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn
09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562
09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy
09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob
09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same three toolchain prefixes repeated several times, followed by the standard system PATH; multi-kilobyte value elided ...]
09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=[... identical PATH re-printed with /opt/go/1.21.1/bin rotated to the front; elided ...]
09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=[... identical PATH re-printed with /opt/protoc/21.7/bin rotated to the front; elided ...]
09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH
09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo [... the final PATH echoed once more; duplicate dump elided ...]
09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0
09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args
09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']'
09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0
09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:31.238 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:31.238 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:31.238 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=9986d14f-f9a3-41c1-a86a-0de626cdd643 00:13:31.238 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:31.238 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=5d315389-180f-43d1-a8a5-8a09cedfaf9f 00:13:31.238 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:31.238 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:31.238 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:31.238 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:31.238 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=0f7c90cd-b038-4055-abdb-1a8aed688554 00:13:31.238 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:31.238 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:31.238 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:31.238 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:31.238 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:31.238 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:31.238 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:31.238 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:31.238 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:31.238 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:31.238 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:31.238 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:13:31.238 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:37.809 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:37.809 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:13:37.809 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:37.809 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:37.809 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:37.809 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:37.809 09:51:47 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:37.809 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:13:37.809 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:37.809 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:13:37.809 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:13:37.809 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:13:37.809 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:13:37.809 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:13:37.809 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:13:37.809 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:37.809 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:37.809 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:37.809 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:37.809 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:37.809 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:37.809 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:37.809 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:37.809 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:37.809 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:37.809 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:37.809 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:37.809 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:37.809 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:37.810 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:37.810 09:51:47 
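
The "Found 0000:af:00.0 (0x8086 - 0x159b)" line comes from gather_supported_nvmf_pci_devs matching the PCI vendor/device tables built just above. A minimal sketch of the same sysfs walk, hard-coding the single E810 device ID seen in this run (the real helper checks every ID in the e810/x722/mlx arrays):

  intel=0x8086
  for pci in /sys/bus/pci/devices/*; do
    [[ $(cat "$pci/vendor") == "$intel" && $(cat "$pci/device") == 0x159b ]] || continue
    echo "Found ${pci##*/} ($(cat "$pci/vendor") - $(cat "$pci/device"))"
    # Map the PCI function to its kernel net device, e.g. cvl_0_0 / cvl_0_1.
    pci_net_devs=("$pci"/net/*)
    echo "Found net devices under ${pci##*/}: ${pci_net_devs[*]##*/}"
  done
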
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:37.810 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:37.810 Found net devices under 0000:af:00.0: cvl_0_0 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:37.810 Found net devices under 0000:af:00.1: cvl_0_1 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:37.810 09:51:47 
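
Summarizing the nvmf_tcp_init commands above: one E810 port is moved into a private network namespace to act as the target (10.0.0.2) while its sibling stays in the root namespace as the initiator (10.0.0.1). The same topology as plain shell, with interface and namespace names from this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port enters the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up    # loopback inside the namespace too

The ping statistics that follow are the sanity check that both directions of this link work before any NVMe traffic is attempted.
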
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:37.810 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:37.810 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:13:37.810 00:13:37.810 --- 10.0.0.2 ping statistics --- 00:13:37.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.810 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:37.810 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:37.810 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:13:37.810 00:13:37.810 --- 10.0.0.1 ping statistics --- 00:13:37.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.810 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:37.810 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:38.070 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=32550 00:13:38.070 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 32550 00:13:38.070 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:38.070 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 32550 ']' 00:13:38.070 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 
-- # local rpc_addr=/var/tmp/spdk.sock 00:13:38.070 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:38.070 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:38.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:38.070 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:38.070 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:38.070 [2024-12-11 09:51:47.436104] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:13:38.070 [2024-12-11 09:51:47.436155] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:38.070 [2024-12-11 09:51:47.520277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.070 [2024-12-11 09:51:47.557549] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:38.070 [2024-12-11 09:51:47.557582] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:38.070 [2024-12-11 09:51:47.557588] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:38.070 [2024-12-11 09:51:47.557594] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:38.070 [2024-12-11 09:51:47.557598] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
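
nvmfappstart then launches the target inside that namespace and blocks until its RPC socket answers. A sketch of the launch-and-wait pattern, with the binary path and flags as logged; the readiness probe via rpc_get_methods is an approximation of what waitforlisten does, not its exact body:

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
  nvmfpid=$!
  echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    kill -0 "$nvmfpid"   # make sure the target did not die during startup
    sleep 0.5
  done
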
00:13:38.070 [2024-12-11 09:51:47.558132] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.006 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:39.006 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:39.006 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:39.006 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:39.006 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:39.006 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:39.006 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:39.006 [2024-12-11 09:51:48.472652] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:39.006 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:39.006 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:39.006 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:39.265 Malloc1 00:13:39.265 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:39.524 Malloc2 00:13:39.524 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:39.782 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:39.782 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:40.042 [2024-12-11 09:51:49.483557] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:40.042 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:13:40.042 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0f7c90cd-b038-4055-abdb-1a8aed688554 -a 10.0.0.2 -s 4420 -i 4 00:13:40.300 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:13:40.300 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:40.300 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:40.300 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:40.300 
09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:42.204 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:42.204 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:42.204 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:42.204 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:42.204 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:42.204 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:42.204 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:42.204 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:42.204 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:42.204 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:42.204 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:13:42.204 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:42.204 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:42.204 [ 0]:0x1 00:13:42.204 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:42.204 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:42.463 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9f61423b59774cec9f73c664d765b426 00:13:42.463 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9f61423b59774cec9f73c664d765b426 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:42.463 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:42.463 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:13:42.463 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:42.463 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:42.463 [ 0]:0x1 00:13:42.463 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:42.463 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:42.721 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9f61423b59774cec9f73c664d765b426 00:13:42.721 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9f61423b59774cec9f73c664d765b426 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:42.721 09:51:52 
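
These visibility checks all go through the ns_is_visible helper in target/ns_masking.sh, which the whole test hangs off. Roughly, with the device name and the all-zero NGUID sentinel as seen in the log:

  ns_is_visible() {   # $1 = namespace id, e.g. 0x1
    nvme list-ns /dev/nvme0 | grep "$1"                   # prints "[ 0]:0x1" when present
    local nguid
    nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]]    # an all-zero NGUID means masked
  }
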
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:13:42.721 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:42.721 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:42.721 [ 1]:0x2 00:13:42.721 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:42.721 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:42.721 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c864d408462146bcb6e4f34c274e3087 00:13:42.721 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c864d408462146bcb6e4f34c274e3087 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:42.721 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:13:42.721 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:42.979 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:42.980 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.980 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:43.238 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:13:43.238 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0f7c90cd-b038-4055-abdb-1a8aed688554 -a 10.0.0.2 -s 4420 -i 4 00:13:43.497 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:43.497 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:43.497 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:43.497 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:13:43.497 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:13:43.497 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:45.400 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:45.400 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:45.400 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:45.400 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:45.400 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:45.400 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:13:45.400 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:45.400 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:45.400 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:45.400 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:45.400 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:13:45.400 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:45.400 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:45.400 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:45.400 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:45.400 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:45.400 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:45.400 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:45.400 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:45.400 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:45.663 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:45.663 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:45.663 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:45.663 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:45.663 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:45.663 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:45.663 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:45.663 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:45.663 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:13:45.663 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:45.663 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:45.663 [ 0]:0x2 00:13:45.663 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:45.663 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:45.663 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=c864d408462146bcb6e4f34c274e3087 00:13:45.663 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c864d408462146bcb6e4f34c274e3087 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:45.663 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:45.926 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:13:45.926 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:45.926 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:45.926 [ 0]:0x1 00:13:45.926 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:45.926 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:45.926 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9f61423b59774cec9f73c664d765b426 00:13:45.926 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9f61423b59774cec9f73c664d765b426 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:45.926 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:13:45.926 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:45.926 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:45.926 [ 1]:0x2 00:13:45.926 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:45.926 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:46.184 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c864d408462146bcb6e4f34c274e3087 00:13:46.184 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c864d408462146bcb6e4f34c274e3087 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:46.184 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:46.184 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:13:46.184 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:46.184 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:46.184 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:46.184 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:46.184 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:46.184 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:46.184 09:51:55 
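The stretch above is one full round of the test's connect, wait, probe cycle: connect as host1, wait for the SPDK serial to appear, then check which NSIDs the target exposes. A condensed sketch of that cycle; the portal, NQNs and host UUID are copied from the run, and the polling loop is a simplification of the script's waitforserial helper:

    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I 0f7c90cd-b038-4055-abdb-1a8aed688554 -a 10.0.0.2 -s 4420 -i 4
    # waitforserial, simplified: poll until the expected namespace shows up in lsblk
    until [[ $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) -ge 1 ]]; do
        sleep 2
    done
    # ns_is_visible: a masked namespace is absent from list-ns and reports an all-zero NGUID
    nguid=$(nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid)
    [[ $nguid != 00000000000000000000000000000000 ]] && echo 'NSID 1 visible to this host'
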
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:46.184 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:46.184 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:46.184 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:46.184 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:46.443 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:46.443 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:46.443 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:46.443 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:46.443 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:46.443 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:46.443 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:13:46.443 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:46.443 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:46.443 [ 0]:0x2 00:13:46.443 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:46.443 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:46.443 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c864d408462146bcb6e4f34c274e3087 00:13:46.443 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c864d408462146bcb6e4f34c274e3087 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:46.443 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:13:46.443 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:46.443 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:46.443 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:46.702 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:13:46.702 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0f7c90cd-b038-4055-abdb-1a8aed688554 -a 10.0.0.2 -s 4420 -i 4 00:13:46.702 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:46.702 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:46.702 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:46.702 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:46.702 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:46.702 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:49.236 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:49.236 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:49.236 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:49.236 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:49.236 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:49.236 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:49.236 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:49.236 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:49.236 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:49.236 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:49.236 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:13:49.236 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:49.236 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:49.236 [ 0]:0x1 00:13:49.236 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:49.236 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:49.236 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9f61423b59774cec9f73c664d765b426 00:13:49.236 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9f61423b59774cec9f73c664d765b426 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:49.236 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:13:49.237 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:49.237 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:49.237 [ 1]:0x2 00:13:49.237 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:49.237 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:49.237 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c864d408462146bcb6e4f34c274e3087 00:13:49.237 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c864d408462146bcb6e4f34c274e3087 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:49.237 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:49.237 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:13:49.237 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:49.237 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:49.237 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:49.237 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:49.237 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:49.237 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:49.237 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:49.237 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:49.237 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:49.237 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:49.237 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:49.237 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:49.237 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:49.237 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:49.237 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:49.237 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:49.237 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:49.237 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:13:49.237 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:49.237 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:49.237 [ 0]:0x2 00:13:49.237 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:49.237 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:49.237 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c864d408462146bcb6e4f34c274e3087 00:13:49.237 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c864d408462146bcb6e4f34c274e3087 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:49.237 09:51:58 
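The ns_masking.sh@88 through @108 steps above flip NSID 1's visibility for host1 back and forth at runtime. Stripped of the assertions, the target-side RPC sequence being exercised is just the following (rpc.py shortened to its repo-relative path; the run uses the absolute workspace path):

    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    scripts/rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1  # unmask NSID 1 for host1
    scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1  # mask it again
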
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:49.237 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:49.237 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:49.237 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:49.237 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:49.237 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:49.237 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:49.237 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:49.237 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:49.237 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:49.237 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:49.237 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:49.496 [2024-12-11 09:51:58.910474] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:49.496 request: 00:13:49.496 { 00:13:49.496 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:49.496 "nsid": 2, 00:13:49.496 "host": "nqn.2016-06.io.spdk:host1", 00:13:49.496 "method": "nvmf_ns_remove_host", 00:13:49.496 "req_id": 1 00:13:49.496 } 00:13:49.496 Got JSON-RPC error response 00:13:49.496 response: 00:13:49.496 { 00:13:49.496 "code": -32602, 00:13:49.496 "message": "Invalid parameters" 00:13:49.496 } 00:13:49.496 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:49.496 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:49.496 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:49.496 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:49.496 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:13:49.496 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:49.496 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:49.496 09:51:58 
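The NOT wrapper at ns_masking.sh@111 above asserts a failure path: unlike NSID 1, NSID 2 was added without --no-auto-visible, so (as the nvmf_rpc_ns_visible_paused error in the trace indicates) the target rejects per-host visibility edits for it and rpc.py exits non-zero with the -32602 Invalid parameters response. Checking that expectation directly would look like:

    if ! scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1; then
        echo 'expected failure: NSID 2 is auto-visible, so it has no per-host allow list'
    fi
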
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:49.496 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:49.496 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:49.496 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:49.496 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:49.496 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:49.496 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:49.496 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:49.496 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:49.496 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:49.496 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:49.496 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:49.496 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:49.496 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:49.496 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:49.496 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:13:49.496 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:49.496 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:49.496 [ 0]:0x2 00:13:49.496 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:49.496 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:49.755 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c864d408462146bcb6e4f34c274e3087 00:13:49.755 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c864d408462146bcb6e4f34c274e3087 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:49.755 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:13:49.755 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:49.755 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:49.755 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=34538 00:13:49.755 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:13:49.755 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:13:49.755 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 34538 /var/tmp/host.sock 00:13:49.755 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 34538 ']' 00:13:49.755 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:13:49.755 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:49.755 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:49.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:49.755 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:49.755 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:49.755 [2024-12-11 09:51:59.291637] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:13:49.755 [2024-12-11 09:51:59.291682] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid34538 ] 00:13:50.014 [2024-12-11 09:51:59.370954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.014 [2024-12-11 09:51:59.410639] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:13:50.272 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:50.272 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:50.272 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.272 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:50.530 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 9986d14f-f9a3-41c1-a86a-0de626cdd643 00:13:50.530 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:50.530 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 9986D14FF9A341C1A86A0DE626CDD643 -i 00:13:50.789 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 5d315389-180f-43d1-a8a5-8a09cedfaf9f 00:13:50.789 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:50.789 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 5D315389180F43D1A8A58A09CEDFAF9F -i 00:13:51.047 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
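From ns_masking.sh@117 onward the test starts a second SPDK app (pid 34538 above) to play the NVMe-oF initiator role, so attach and detach can be driven over RPC instead of through the kernel host stack. A sketch of that setup under the run's parameters; paths are repo-relative here, and polling rpc_get_methods is my stand-in for the script's waitforlisten:

    build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 &    # separate RPC socket, core mask 0x2 (core 1)
    hostpid=$!
    until scripts/rpc.py -s /var/tmp/host.sock rpc_get_methods &> /dev/null; do
        sleep 0.1
    done
    # the hostrpc helper (@48) is this same "-s /var/tmp/host.sock" prefix:
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 \
        -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
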
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:51.306 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:13:51.306 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:51.306 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:51.872 nvme0n1 00:13:51.872 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:51.872 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:52.130 nvme1n2 00:13:52.130 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:13:52.130 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:13:52.130 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:13:52.130 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:13:52.130 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:52.388 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:13:52.388 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:13:52.388 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:13:52.388 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:13:52.647 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 9986d14f-f9a3-41c1-a86a-0de626cdd643 == \9\9\8\6\d\1\4\f\-\f\9\a\3\-\4\1\c\1\-\a\8\6\a\-\0\d\e\6\2\6\c\d\d\6\4\3 ]] 00:13:52.647 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:13:52.647 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:13:52.647 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:13:52.647 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
5d315389-180f-43d1-a8a5-8a09cedfaf9f == \5\d\3\1\5\3\8\9\-\1\8\0\f\-\4\3\d\1\-\a\8\a\5\-\8\a\0\9\c\e\d\f\a\f\9\f ]] 00:13:52.647 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.906 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:53.164 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 9986d14f-f9a3-41c1-a86a-0de626cdd643 00:13:53.164 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:53.164 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 9986D14FF9A341C1A86A0DE626CDD643 00:13:53.164 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:53.164 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 9986D14FF9A341C1A86A0DE626CDD643 00:13:53.164 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:53.164 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:53.164 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:53.164 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:53.164 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:53.164 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:53.164 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:53.164 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:53.164 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 9986D14FF9A341C1A86A0DE626CDD643 00:13:53.422 [2024-12-11 09:52:02.745006] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:13:53.422 [2024-12-11 09:52:02.745037] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:13:53.422 [2024-12-11 09:52:02.745045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:53.422 request: 00:13:53.422 { 00:13:53.422 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:53.422 "namespace": { 00:13:53.422 "bdev_name": 
"invalid", 00:13:53.422 "nsid": 1, 00:13:53.422 "nguid": "9986D14FF9A341C1A86A0DE626CDD643", 00:13:53.422 "no_auto_visible": false, 00:13:53.422 "hide_metadata": false 00:13:53.422 }, 00:13:53.422 "method": "nvmf_subsystem_add_ns", 00:13:53.422 "req_id": 1 00:13:53.422 } 00:13:53.422 Got JSON-RPC error response 00:13:53.422 response: 00:13:53.422 { 00:13:53.422 "code": -32602, 00:13:53.422 "message": "Invalid parameters" 00:13:53.422 } 00:13:53.422 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:53.422 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:53.422 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:53.422 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:53.422 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 9986d14f-f9a3-41c1-a86a-0de626cdd643 00:13:53.422 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:53.422 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 9986D14FF9A341C1A86A0DE626CDD643 -i 00:13:53.422 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:13:55.954 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:13:55.954 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:13:55.954 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:55.954 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:13:55.954 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 34538 00:13:55.954 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 34538 ']' 00:13:55.954 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 34538 00:13:55.954 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:55.954 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:55.954 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 34538 00:13:55.954 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:55.954 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:55.954 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 34538' 00:13:55.954 killing process with pid 34538 00:13:55.954 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 34538 00:13:55.954 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 34538 00:13:56.212 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:56.212 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:13:56.212 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:13:56.212 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:56.212 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:13:56.212 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:56.212 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:13:56.212 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:56.212 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:56.212 rmmod nvme_tcp 00:13:56.212 rmmod nvme_fabrics 00:13:56.212 rmmod nvme_keyring 00:13:56.212 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:56.212 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:13:56.212 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:13:56.212 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 32550 ']' 00:13:56.212 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 32550 00:13:56.212 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 32550 ']' 00:13:56.212 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 32550 00:13:56.212 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:56.212 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:56.212 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 32550 00:13:56.471 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:56.471 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:56.471 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 32550' 00:13:56.471 killing process with pid 32550 00:13:56.471 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 32550 00:13:56.471 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 32550 00:13:56.471 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:56.471 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:56.471 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:56.471 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:13:56.471 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:13:56.471 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:56.471 09:52:06 
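The nvmf/common.sh@791 teardown step in this stretch is worth noting: rather than flushing the firewall wholesale, it round-trips the ruleset through an inverse grep so that only the rules the test run tagged with SPDK_NVMF are dropped:

    iptables-save | grep -v SPDK_NVMF | iptables-restore
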
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:13:56.471 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:56.471 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:56.471 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:56.471 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:56.471 09:52:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:59.004 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:59.004 00:13:59.004 real 0m27.709s 00:13:59.004 user 0m32.390s 00:13:59.004 sys 0m7.768s 00:13:59.004 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:59.004 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:59.004 ************************************ 00:13:59.004 END TEST nvmf_ns_masking 00:13:59.004 ************************************ 00:13:59.004 09:52:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:13:59.004 09:52:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:59.005 ************************************ 00:13:59.005 START TEST nvmf_nvme_cli 00:13:59.005 ************************************ 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:59.005 * Looking for test storage... 
00:13:59.005 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:59.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.005 --rc genhtml_branch_coverage=1 00:13:59.005 --rc genhtml_function_coverage=1 00:13:59.005 --rc genhtml_legend=1 00:13:59.005 --rc geninfo_all_blocks=1 00:13:59.005 --rc geninfo_unexecuted_blocks=1 00:13:59.005 00:13:59.005 ' 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:59.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.005 --rc genhtml_branch_coverage=1 00:13:59.005 --rc genhtml_function_coverage=1 00:13:59.005 --rc genhtml_legend=1 00:13:59.005 --rc geninfo_all_blocks=1 00:13:59.005 --rc geninfo_unexecuted_blocks=1 00:13:59.005 00:13:59.005 ' 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:59.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.005 --rc genhtml_branch_coverage=1 00:13:59.005 --rc genhtml_function_coverage=1 00:13:59.005 --rc genhtml_legend=1 00:13:59.005 --rc geninfo_all_blocks=1 00:13:59.005 --rc geninfo_unexecuted_blocks=1 00:13:59.005 00:13:59.005 ' 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:59.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.005 --rc genhtml_branch_coverage=1 00:13:59.005 --rc genhtml_function_coverage=1 00:13:59.005 --rc genhtml_legend=1 00:13:59.005 --rc geninfo_all_blocks=1 00:13:59.005 --rc geninfo_unexecuted_blocks=1 00:13:59.005 00:13:59.005 ' 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
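The lt 1.15 2 walk above is scripts/common.sh gating the lcov coverage options on the installed lcov version: both version strings are split on ., - and :, then compared component-wise. A simplified shape of that comparator, not the exact scripts/common.sh body:

    cmp_versions() {    # usage: cmp_versions 1.15 '<' 2
        local IFS=.-: v
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $2 == '>' ]]; return; }
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $2 == '<' ]]; return; }
        done
        [[ $2 == '=' ]]    # all components equal
    }
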
00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:59.005 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:59.005 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:59.006 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:59.006 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:59.006 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:59.006 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:59.006 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:59.006 09:52:08 
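Before the nvme_cli test touches the fabric, nvmf/common.sh@17-19 above mints a fresh per-run host identity with nvme gen-hostnqn. The pattern looks like the following; the parameter expansion deriving the host ID is my paraphrase, not necessarily the helper's exact code:

    NVME_HOSTNQN=$(nvme gen-hostnqn)    # nqn.2014-08.org.nvmexpress:uuid:<random uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # bare UUID, e.g. 801347e8-3fd0-e911-906e-0017a4403562
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    # every later connect can then reuse it: nvme connect ... "${NVME_HOST[@]}"
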
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:13:59.006 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:13:59.006 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:59.006 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:59.006 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:59.006 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:59.006 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:59.006 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:59.006 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:59.006 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:59.006 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:59.006 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:59.006 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:13:59.006 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:05.571 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:05.571 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:14:05.571 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:05.571 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:05.571 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:05.571 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:05.571 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:05.571 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:14:05.571 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:05.571 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:14:05.571 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:14:05.571 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:14:05.571 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:14:05.571 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:14:05.571 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:14:05.571 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:05.571 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:05.571 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:05.571 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:05.571 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:05.571 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:05.571 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:05.571 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:05.571 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:05.571 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:05.571 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:05.571 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:05.571 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:05.571 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:05.571 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:05.571 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:05.571 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:05.571 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:05.571 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:05.571 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:05.571 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:05.571 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:05.571 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:05.571 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:05.571 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:05.571 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:05.571 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:05.571 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:05.571 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:05.572 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:05.572 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:05.572 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:05.572 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:05.572 
09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:05.572 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:05.572 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:05.572 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:05.572 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:05.572 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:05.572 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:05.572 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:05.572 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:05.572 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:05.572 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:05.572 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:05.572 Found net devices under 0000:af:00.0: cvl_0_0 00:14:05.572 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:05.572 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:05.572 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:05.572 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:05.572 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:05.572 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:05.572 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:05.572 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:05.572 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:05.572 Found net devices under 0000:af:00.1: cvl_0_1 00:14:05.572 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:05.572 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:05.572 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:14:05.572 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:05.572 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:05.572 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:05.572 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:05.572 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:05.572 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
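At this point gather_supported_nvmf_pci_devs has matched both E810 functions (0000:af:00.0 and 0000:af:00.1, vendor 0x8086 device 0x159b, driver ice) and resolved their interfaces cvl_0_0 and cvl_0_1 through sysfs. A self-contained sketch of the same lookup, assuming only the standard sysfs layout rather than SPDK's helper:

    # List net interfaces of E810-class NICs (vendor 0x8086, device 0x159b).
    for pci in /sys/bus/pci/devices/*; do
        [ "$(cat "$pci/vendor" 2>/dev/null)" = "0x8086" ] || continue
        [ "$(cat "$pci/device" 2>/dev/null)" = "0x159b" ] || continue
        echo "$pci -> $(ls "$pci/net" 2>/dev/null)"    # e.g. ... -> cvl_0_0
    done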
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:05.572 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:05.572 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:05.572 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:05.572 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:05.572 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:05.572 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:05.572 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:05.572 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:05.572 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:05.572 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:05.572 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:05.572 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:05.572 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:05.572 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:05.572 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:05.572 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:05.572 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:05.572 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:05.572 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:05.572 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:05.572 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:05.572 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.353 ms 00:14:05.572 00:14:05.572 --- 10.0.0.2 ping statistics --- 00:14:05.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.572 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:14:05.572 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:05.572 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:05.572 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:14:05.572 00:14:05.572 --- 10.0.0.1 ping statistics --- 00:14:05.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.572 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:14:05.572 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:05.572 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:14:05.572 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:05.572 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:05.572 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:05.572 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:05.572 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:05.572 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:05.572 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:05.831 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:05.831 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:05.831 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:05.831 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:05.831 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=39601 00:14:05.831 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:05.831 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 39601 00:14:05.831 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 39601 ']' 00:14:05.831 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:05.831 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:05.831 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:05.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:05.831 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:05.831 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:05.831 [2024-12-11 09:52:15.235649] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
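Both directions ping cleanly, so the wiring nvmf_tcp_init performed above is in place. Collected from the traced commands into one runnable sequence (interface and namespace names as in this run; the harness also tags its iptables rule with an SPDK_NVMF comment so teardown can strip it later):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target NIC moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator

The target application that starts next runs entirely inside cvl_0_0_ns_spdk, which is why NVMF_APP is prefixed with the ip netns exec wrapper.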
00:14:05.831 [2024-12-11 09:52:15.235694] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:05.831 [2024-12-11 09:52:15.319005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:05.831 [2024-12-11 09:52:15.362736] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:05.831 [2024-12-11 09:52:15.362770] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:05.831 [2024-12-11 09:52:15.362777] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:05.831 [2024-12-11 09:52:15.362782] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:05.831 [2024-12-11 09:52:15.362788] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:05.831 [2024-12-11 09:52:15.364231] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:14:05.831 [2024-12-11 09:52:15.364307] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:14:05.831 [2024-12-11 09:52:15.364413] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.831 [2024-12-11 09:52:15.364413] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:14:06.089 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:06.089 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:14:06.089 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:06.089 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:06.089 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:06.089 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:06.089 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:06.089 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.089 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:06.089 [2024-12-11 09:52:15.498357] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:06.089 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.089 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:06.089 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.089 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:06.089 Malloc0 00:14:06.089 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.089 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:06.089 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
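The rpc_cmd invocations here and in the entries that follow assemble the target state. Written out as one plain sequence (rpc.py abbreviates the full scripts/rpc.py path shown in the trace; sizes, names and flags exactly as in this run):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0             # 64 MB malloc bdev, 512-byte blocks
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

With two namespaces behind one subsystem plus the discovery listener, the subsequent nvme discover reports exactly two log entries.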
00:14:06.089 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:06.089 Malloc1 00:14:06.089 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.089 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:06.089 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.089 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:06.089 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.089 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:06.089 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.089 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:06.089 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.089 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:06.089 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.089 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:06.089 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.089 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:06.089 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.089 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:06.089 [2024-12-11 09:52:15.589599] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:06.089 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.089 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:06.089 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.090 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:06.090 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.090 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:14:06.348 00:14:06.348 Discovery Log Number of Records 2, Generation counter 2 00:14:06.348 =====Discovery Log Entry 0====== 00:14:06.348 trtype: tcp 00:14:06.348 adrfam: ipv4 00:14:06.348 subtype: current discovery subsystem 00:14:06.348 treq: not required 00:14:06.348 portid: 0 00:14:06.348 trsvcid: 4420 00:14:06.348 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:14:06.348 traddr: 10.0.0.2 00:14:06.348 eflags: explicit discovery connections, duplicate discovery information 00:14:06.348 sectype: none 00:14:06.348 =====Discovery Log Entry 1====== 00:14:06.348 trtype: tcp 00:14:06.348 adrfam: ipv4 00:14:06.348 subtype: nvme subsystem 00:14:06.348 treq: not required 00:14:06.348 portid: 0 00:14:06.348 trsvcid: 4420 00:14:06.348 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:06.348 traddr: 10.0.0.2 00:14:06.348 eflags: none 00:14:06.348 sectype: none 00:14:06.348 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:06.348 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:06.348 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:06.348 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:06.348 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:06.348 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:06.348 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:06.348 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:06.348 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:06.348 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme1n1 == /dev/nvme* ]] 00:14:06.348 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme1n1 00:14:06.348 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:06.348 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme1n2 == /dev/nvme* ]] 00:14:06.348 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme1n2 00:14:06.348 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:06.348 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=2 00:14:06.348 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:07.722 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:07.722 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:14:07.722 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:07.722 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:14:07.722 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:14:07.722 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:14:09.623 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:09.623 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l 
-o NAME,SERIAL 00:14:09.623 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:09.623 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:14:09.623 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:09.623 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:14:09.623 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:09.623 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:09.623 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:09.623 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:09.623 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:09.623 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:09.623 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:09.623 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:09.623 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:09.623 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:09.624 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:09.624 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:09.624 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:09.624 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:09.624 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme1n1 == /dev/nvme* ]] 00:14:09.624 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme1n1 00:14:09.624 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:09.624 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme1n2 == /dev/nvme* ]] 00:14:09.624 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme1n2 00:14:09.624 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:09.624 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:14:09.624 /dev/nvme0n2 00:14:09.624 /dev/nvme1n1 00:14:09.624 /dev/nvme1n2 ]] 00:14:09.624 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:09.624 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:09.624 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:09.624 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:09.624 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:09.624 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == 
/dev/nvme* ]] 00:14:09.624 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:09.624 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:09.624 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:09.624 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:09.624 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:09.624 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:09.624 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:09.624 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:09.624 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:09.624 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme1n1 == /dev/nvme* ]] 00:14:09.624 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme1n1 00:14:09.624 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:09.624 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme1n2 == /dev/nvme* ]] 00:14:09.624 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme1n2 00:14:09.624 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:09.624 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=4 00:14:09.624 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:09.624 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:09.624 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:09.624 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:14:09.624 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:09.624 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:09.624 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:09.624 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:09.883 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:14:09.883 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:09.883 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:09.883 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.883 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:09.883 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.883 09:52:19 
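Host side, the whole exercise reduces to standard nvme-cli plus a serial-number count to confirm that both namespaces surfaced (all values as in this run):

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
    HOSTID=801347e8-3fd0-e911-906e-0017a4403562
    nvme discover -t tcp -a 10.0.0.2 -s 4420 --hostnqn=$HOSTNQN --hostid=$HOSTID
    nvme connect  -t tcp -a 10.0.0.2 -s 4420 --hostnqn=$HOSTNQN --hostid=$HOSTID \
        -n nqn.2016-06.io.spdk:cnode1
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # expect 2
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1

The count of 2 is what waitforserial polled for above, and the "disconnected 1 controller(s)" message is the cue for waitforserial_disconnect before the subsystem is deleted.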
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:09.883 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:09.883 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:09.883 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:14:09.883 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:09.883 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:14:09.883 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:09.883 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:09.883 rmmod nvme_tcp 00:14:09.883 rmmod nvme_fabrics 00:14:09.883 rmmod nvme_keyring 00:14:09.883 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:09.883 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:14:09.883 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:14:09.883 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 39601 ']' 00:14:09.883 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 39601 00:14:09.883 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 39601 ']' 00:14:09.883 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 39601 00:14:09.883 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:14:09.883 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:09.883 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 39601 00:14:09.883 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:09.883 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:09.883 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 39601' 00:14:09.883 killing process with pid 39601 00:14:09.883 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 39601 00:14:09.883 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 39601 00:14:10.180 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:10.180 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:10.180 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:10.180 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:14:10.180 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:14:10.180 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:10.180 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:14:10.180 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:10.180 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:10.180 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:10.180 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:10.180 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:12.168 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:12.168 00:14:12.168 real 0m13.436s 00:14:12.168 user 0m18.582s 00:14:12.168 sys 0m5.727s 00:14:12.168 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:12.168 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:12.168 ************************************ 00:14:12.168 END TEST nvmf_nvme_cli 00:14:12.168 ************************************ 00:14:12.168 09:52:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:12.168 09:52:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:12.168 09:52:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:12.168 09:52:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:12.168 09:52:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:12.168 ************************************ 00:14:12.168 START TEST nvmf_vfio_user 00:14:12.168 ************************************ 00:14:12.168 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:12.428 * Looking for test storage... 
00:14:12.428 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:12.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.428 --rc genhtml_branch_coverage=1 00:14:12.428 --rc genhtml_function_coverage=1 00:14:12.428 --rc genhtml_legend=1 00:14:12.428 --rc geninfo_all_blocks=1 00:14:12.428 --rc geninfo_unexecuted_blocks=1 00:14:12.428 00:14:12.428 ' 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:12.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.428 --rc genhtml_branch_coverage=1 00:14:12.428 --rc genhtml_function_coverage=1 00:14:12.428 --rc genhtml_legend=1 00:14:12.428 --rc geninfo_all_blocks=1 00:14:12.428 --rc geninfo_unexecuted_blocks=1 00:14:12.428 00:14:12.428 ' 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:12.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.428 --rc genhtml_branch_coverage=1 00:14:12.428 --rc genhtml_function_coverage=1 00:14:12.428 --rc genhtml_legend=1 00:14:12.428 --rc geninfo_all_blocks=1 00:14:12.428 --rc geninfo_unexecuted_blocks=1 00:14:12.428 00:14:12.428 ' 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:12.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.428 --rc genhtml_branch_coverage=1 00:14:12.428 --rc genhtml_function_coverage=1 00:14:12.428 --rc genhtml_legend=1 00:14:12.428 --rc geninfo_all_blocks=1 00:14:12.428 --rc geninfo_unexecuted_blocks=1 00:14:12.428 00:14:12.428 ' 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:12.428 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:12.429 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:12.429 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:12.429 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:12.429 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:12.429 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:12.429 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:12.429 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:12.429 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:12.429 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:12.429 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
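As with the earlier test, sourcing /etc/opt/spdk-pkgdep/paths/export.sh prepends the protoc, go and golangci directories once more, which is why the traced PATH carries the same triplet many times over; the duplicates are harmless but pure dead weight. A sketch of an idempotent prepend (an illustration, not the pkgdep script itself):

    path_prepend() {                 # add $1 to PATH only if absent
        case ":$PATH:" in
            *":$1:"*) ;;             # already present, leave PATH unchanged
            *) PATH="$1:$PATH" ;;
        esac
    }
    path_prepend /opt/golangci/1.54.2/bin
    path_prepend /opt/go/1.21.1/bin
    path_prepend /opt/protoc/21.7/bin
    export PATH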
00:14:12.429 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:12.429 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:12.429 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:12.429 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:12.429 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:12.429 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:12.429 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:12.429 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:12.429 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=40776 00:14:12.429 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 40776' 00:14:12.429 Process pid: 40776 00:14:12.429 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:12.429 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 40776 00:14:12.429 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:12.429 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 40776 ']' 00:14:12.429 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.429 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:12.429 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:12.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:12.429 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:12.429 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:12.429 [2024-12-11 09:52:21.951402] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:14:12.429 [2024-12-11 09:52:21.951451] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:12.688 [2024-12-11 09:52:22.033438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:12.688 [2024-12-11 09:52:22.075648] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:12.688 [2024-12-11 09:52:22.075680] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:12.688 [2024-12-11 09:52:22.075687] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:12.688 [2024-12-11 09:52:22.075693] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:12.688 [2024-12-11 09:52:22.075698] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:12.688 [2024-12-11 09:52:22.077090] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:14:12.688 [2024-12-11 09:52:22.077186] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:14:12.688 [2024-12-11 09:52:22.077313] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.688 [2024-12-11 09:52:22.077313] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:14:12.688 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:12.688 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:12.688 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:13.623 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:13.882 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:13.882 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:13.882 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:13.882 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:13.882 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:14.140 Malloc1 00:14:14.140 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:14.399 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:14.657 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:14.915 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:14.915 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:14.915 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:14.915 Malloc2 00:14:14.915 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
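The vfio-user variant swaps IP listeners for filesystem paths: each controller gets a directory under /var/run/vfio-user, and the transport creates the cntrl socket inside it that the initiator later opens. The sequence just traced for device 1, and repeated for device 2 in the surrounding entries, as one block (rpc.py again abbreviates the full scripts/rpc.py path):

    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    rpc.py nvmf_create_transport -t VFIOUSER
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 \
        -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

The -s 0 service id appears to be a placeholder here; the traddr directory is what identifies the endpoint.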
00:14:15.174 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:15.433 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:15.693 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:15.693 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:15.693 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:15.693 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:15.693 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:15.693 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:15.693 [2024-12-11 09:52:25.075601] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:14:15.693 [2024-12-11 09:52:25.075634] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid41405 ] 00:14:15.693 [2024-12-11 09:52:25.113039] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:15.693 [2024-12-11 09:52:25.125604] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:15.693 [2024-12-11 09:52:25.125628] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f9a52a5f000 00:14:15.693 [2024-12-11 09:52:25.126598] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:15.693 [2024-12-11 09:52:25.127598] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:15.693 [2024-12-11 09:52:25.128603] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:15.693 [2024-12-11 09:52:25.129610] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:15.693 [2024-12-11 09:52:25.130614] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:15.693 [2024-12-11 09:52:25.131617] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:15.693 [2024-12-11 09:52:25.132621] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, 
Cap offset 0 00:14:15.693 [2024-12-11 09:52:25.133625] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:15.693 [2024-12-11 09:52:25.134634] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:15.693 [2024-12-11 09:52:25.134644] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f9a52a54000 00:14:15.693 [2024-12-11 09:52:25.135559] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:15.693 [2024-12-11 09:52:25.145009] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:15.693 [2024-12-11 09:52:25.145035] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:14:15.693 [2024-12-11 09:52:25.149746] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:15.693 [2024-12-11 09:52:25.149779] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:15.693 [2024-12-11 09:52:25.149853] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:14:15.693 [2024-12-11 09:52:25.149868] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:14:15.693 [2024-12-11 09:52:25.149873] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:14:15.693 [2024-12-11 09:52:25.150743] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:15.693 [2024-12-11 09:52:25.150752] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:14:15.693 [2024-12-11 09:52:25.150758] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:14:15.693 [2024-12-11 09:52:25.151745] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:15.693 [2024-12-11 09:52:25.151752] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:14:15.693 [2024-12-11 09:52:25.151759] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:15.693 [2024-12-11 09:52:25.152751] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:15.693 [2024-12-11 09:52:25.152758] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:15.693 [2024-12-11 09:52:25.153754] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
00:14:15.693 [2024-12-11 09:52:25.153762] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:15.693 [2024-12-11 09:52:25.153766] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:15.693 [2024-12-11 09:52:25.153772] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:15.693 [2024-12-11 09:52:25.153879] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:14:15.693 [2024-12-11 09:52:25.153884] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:15.693 [2024-12-11 09:52:25.153888] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:15.693 [2024-12-11 09:52:25.154769] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:15.693 [2024-12-11 09:52:25.155773] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:15.693 [2024-12-11 09:52:25.156781] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:15.693 [2024-12-11 09:52:25.157776] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:15.693 [2024-12-11 09:52:25.157839] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:15.693 [2024-12-11 09:52:25.158794] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:15.693 [2024-12-11 09:52:25.158801] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:15.693 [2024-12-11 09:52:25.158806] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:15.693 [2024-12-11 09:52:25.158822] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:14:15.693 [2024-12-11 09:52:25.158829] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:15.693 [2024-12-11 09:52:25.158840] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:15.693 [2024-12-11 09:52:25.158844] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:15.693 [2024-12-11 09:52:25.158847] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:15.693 [2024-12-11 09:52:25.158859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:14:15.693 [2024-12-11 09:52:25.158900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:15.693 [2024-12-11 09:52:25.158909] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:14:15.693 [2024-12-11 09:52:25.158913] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:14:15.693 [2024-12-11 09:52:25.158917] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:14:15.693 [2024-12-11 09:52:25.158921] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:15.693 [2024-12-11 09:52:25.158925] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:14:15.693 [2024-12-11 09:52:25.158929] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:14:15.693 [2024-12-11 09:52:25.158933] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:14:15.693 [2024-12-11 09:52:25.158941] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:15.693 [2024-12-11 09:52:25.158951] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:15.693 [2024-12-11 09:52:25.158968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:15.693 [2024-12-11 09:52:25.158977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:15.693 [2024-12-11 09:52:25.158985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:15.693 [2024-12-11 09:52:25.158992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:15.693 [2024-12-11 09:52:25.159000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:15.693 [2024-12-11 09:52:25.159005] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:15.693 [2024-12-11 09:52:25.159012] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:15.693 [2024-12-11 09:52:25.159021] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:15.693 [2024-12-11 09:52:25.159028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:15.693 [2024-12-11 09:52:25.159033] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:14:15.693 
[2024-12-11 09:52:25.159038] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:15.693 [2024-12-11 09:52:25.159044] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:14:15.693 [2024-12-11 09:52:25.159049] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:14:15.693 [2024-12-11 09:52:25.159056] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:15.693 [2024-12-11 09:52:25.159071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:15.693 [2024-12-11 09:52:25.159118] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:14:15.693 [2024-12-11 09:52:25.159126] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:15.693 [2024-12-11 09:52:25.159133] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:15.693 [2024-12-11 09:52:25.159137] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:15.693 [2024-12-11 09:52:25.159140] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:15.693 [2024-12-11 09:52:25.159146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:15.693 [2024-12-11 09:52:25.159159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:15.693 [2024-12-11 09:52:25.159166] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:14:15.693 [2024-12-11 09:52:25.159176] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:14:15.693 [2024-12-11 09:52:25.159183] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:15.693 [2024-12-11 09:52:25.159189] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:15.693 [2024-12-11 09:52:25.159193] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:15.693 [2024-12-11 09:52:25.159196] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:15.693 [2024-12-11 09:52:25.159201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:15.693 [2024-12-11 09:52:25.159229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:15.693 [2024-12-11 09:52:25.159240] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:14:15.693 [2024-12-11 09:52:25.159247] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:15.693 [2024-12-11 09:52:25.159253] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:15.693 [2024-12-11 09:52:25.159256] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:15.693 [2024-12-11 09:52:25.159259] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:15.693 [2024-12-11 09:52:25.159265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:15.693 [2024-12-11 09:52:25.159276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:15.693 [2024-12-11 09:52:25.159283] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:15.693 [2024-12-11 09:52:25.159288] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:15.693 [2024-12-11 09:52:25.159295] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:14:15.693 [2024-12-11 09:52:25.159300] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:14:15.693 [2024-12-11 09:52:25.159304] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:15.693 [2024-12-11 09:52:25.159308] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:14:15.693 [2024-12-11 09:52:25.159313] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:15.693 [2024-12-11 09:52:25.159316] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:14:15.693 [2024-12-11 09:52:25.159321] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:14:15.693 [2024-12-11 09:52:25.159337] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:15.693 [2024-12-11 09:52:25.159347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:15.693 [2024-12-11 09:52:25.159358] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:15.693 [2024-12-11 09:52:25.159368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:15.693 [2024-12-11 09:52:25.159378] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:15.693 [2024-12-11 09:52:25.159386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:15.693 [2024-12-11 09:52:25.159395] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:15.693 [2024-12-11 09:52:25.159406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:15.693 [2024-12-11 09:52:25.159418] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:15.693 [2024-12-11 09:52:25.159422] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:15.693 [2024-12-11 09:52:25.159425] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:15.693 [2024-12-11 09:52:25.159428] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:15.693 [2024-12-11 09:52:25.159431] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:15.693 [2024-12-11 09:52:25.159437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:15.694 [2024-12-11 09:52:25.159444] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:15.694 [2024-12-11 09:52:25.159447] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:15.694 [2024-12-11 09:52:25.159450] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:15.694 [2024-12-11 09:52:25.159456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:15.694 [2024-12-11 09:52:25.159462] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:15.694 [2024-12-11 09:52:25.159465] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:15.694 [2024-12-11 09:52:25.159468] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:15.694 [2024-12-11 09:52:25.159473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:15.694 [2024-12-11 09:52:25.159480] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:15.694 [2024-12-11 09:52:25.159483] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:15.694 [2024-12-11 09:52:25.159486] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:15.694 [2024-12-11 09:52:25.159491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:15.694 [2024-12-11 09:52:25.159498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:15.694 [2024-12-11 09:52:25.159509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:14:15.694 [2024-12-11 09:52:25.159518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:15.694 [2024-12-11 09:52:25.159524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:15.694 ===================================================== 00:14:15.694 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:15.694 ===================================================== 00:14:15.694 Controller Capabilities/Features 00:14:15.694 ================================ 00:14:15.694 Vendor ID: 4e58 00:14:15.694 Subsystem Vendor ID: 4e58 00:14:15.694 Serial Number: SPDK1 00:14:15.694 Model Number: SPDK bdev Controller 00:14:15.694 Firmware Version: 25.01 00:14:15.694 Recommended Arb Burst: 6 00:14:15.694 IEEE OUI Identifier: 8d 6b 50 00:14:15.694 Multi-path I/O 00:14:15.694 May have multiple subsystem ports: Yes 00:14:15.694 May have multiple controllers: Yes 00:14:15.694 Associated with SR-IOV VF: No 00:14:15.694 Max Data Transfer Size: 131072 00:14:15.694 Max Number of Namespaces: 32 00:14:15.694 Max Number of I/O Queues: 127 00:14:15.694 NVMe Specification Version (VS): 1.3 00:14:15.694 NVMe Specification Version (Identify): 1.3 00:14:15.694 Maximum Queue Entries: 256 00:14:15.694 Contiguous Queues Required: Yes 00:14:15.694 Arbitration Mechanisms Supported 00:14:15.694 Weighted Round Robin: Not Supported 00:14:15.694 Vendor Specific: Not Supported 00:14:15.694 Reset Timeout: 15000 ms 00:14:15.694 Doorbell Stride: 4 bytes 00:14:15.694 NVM Subsystem Reset: Not Supported 00:14:15.694 Command Sets Supported 00:14:15.694 NVM Command Set: Supported 00:14:15.694 Boot Partition: Not Supported 00:14:15.694 Memory Page Size Minimum: 4096 bytes 00:14:15.694 Memory Page Size Maximum: 4096 bytes 00:14:15.694 Persistent Memory Region: Not Supported 00:14:15.694 Optional Asynchronous Events Supported 00:14:15.694 Namespace Attribute Notices: Supported 00:14:15.694 Firmware Activation Notices: Not Supported 00:14:15.694 ANA Change Notices: Not Supported 00:14:15.694 PLE Aggregate Log Change Notices: Not Supported 00:14:15.694 LBA Status Info Alert Notices: Not Supported 00:14:15.694 EGE Aggregate Log Change Notices: Not Supported 00:14:15.694 Normal NVM Subsystem Shutdown event: Not Supported 00:14:15.694 Zone Descriptor Change Notices: Not Supported 00:14:15.694 Discovery Log Change Notices: Not Supported 00:14:15.694 Controller Attributes 00:14:15.694 128-bit Host Identifier: Supported 00:14:15.694 Non-Operational Permissive Mode: Not Supported 00:14:15.694 NVM Sets: Not Supported 00:14:15.694 Read Recovery Levels: Not Supported 00:14:15.694 Endurance Groups: Not Supported 00:14:15.694 Predictable Latency Mode: Not Supported 00:14:15.694 Traffic Based Keep ALive: Not Supported 00:14:15.694 Namespace Granularity: Not Supported 00:14:15.694 SQ Associations: Not Supported 00:14:15.694 UUID List: Not Supported 00:14:15.694 Multi-Domain Subsystem: Not Supported 00:14:15.694 Fixed Capacity Management: Not Supported 00:14:15.694 Variable Capacity Management: Not Supported 00:14:15.694 Delete Endurance Group: Not Supported 00:14:15.694 Delete NVM Set: Not Supported 00:14:15.694 Extended LBA Formats Supported: Not Supported 00:14:15.694 Flexible Data Placement Supported: Not Supported 00:14:15.694 00:14:15.694 Controller Memory Buffer Support 00:14:15.694 ================================ 00:14:15.694 
Supported: No 00:14:15.694 00:14:15.694 Persistent Memory Region Support 00:14:15.694 ================================ 00:14:15.694 Supported: No 00:14:15.694 00:14:15.694 Admin Command Set Attributes 00:14:15.694 ============================ 00:14:15.694 Security Send/Receive: Not Supported 00:14:15.694 Format NVM: Not Supported 00:14:15.694 Firmware Activate/Download: Not Supported 00:14:15.694 Namespace Management: Not Supported 00:14:15.694 Device Self-Test: Not Supported 00:14:15.694 Directives: Not Supported 00:14:15.694 NVMe-MI: Not Supported 00:14:15.694 Virtualization Management: Not Supported 00:14:15.694 Doorbell Buffer Config: Not Supported 00:14:15.694 Get LBA Status Capability: Not Supported 00:14:15.694 Command & Feature Lockdown Capability: Not Supported 00:14:15.694 Abort Command Limit: 4 00:14:15.694 Async Event Request Limit: 4 00:14:15.694 Number of Firmware Slots: N/A 00:14:15.694 Firmware Slot 1 Read-Only: N/A 00:14:15.694 Firmware Activation Without Reset: N/A 00:14:15.694 Multiple Update Detection Support: N/A 00:14:15.694 Firmware Update Granularity: No Information Provided 00:14:15.694 Per-Namespace SMART Log: No 00:14:15.694 Asymmetric Namespace Access Log Page: Not Supported 00:14:15.694 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:15.694 Command Effects Log Page: Supported 00:14:15.694 Get Log Page Extended Data: Supported 00:14:15.694 Telemetry Log Pages: Not Supported 00:14:15.694 Persistent Event Log Pages: Not Supported 00:14:15.694 Supported Log Pages Log Page: May Support 00:14:15.694 Commands Supported & Effects Log Page: Not Supported 00:14:15.694 Feature Identifiers & Effects Log Page:May Support 00:14:15.694 NVMe-MI Commands & Effects Log Page: May Support 00:14:15.694 Data Area 4 for Telemetry Log: Not Supported 00:14:15.694 Error Log Page Entries Supported: 128 00:14:15.694 Keep Alive: Supported 00:14:15.694 Keep Alive Granularity: 10000 ms 00:14:15.694 00:14:15.694 NVM Command Set Attributes 00:14:15.694 ========================== 00:14:15.694 Submission Queue Entry Size 00:14:15.694 Max: 64 00:14:15.694 Min: 64 00:14:15.694 Completion Queue Entry Size 00:14:15.694 Max: 16 00:14:15.694 Min: 16 00:14:15.694 Number of Namespaces: 32 00:14:15.694 Compare Command: Supported 00:14:15.694 Write Uncorrectable Command: Not Supported 00:14:15.694 Dataset Management Command: Supported 00:14:15.694 Write Zeroes Command: Supported 00:14:15.694 Set Features Save Field: Not Supported 00:14:15.694 Reservations: Not Supported 00:14:15.694 Timestamp: Not Supported 00:14:15.694 Copy: Supported 00:14:15.694 Volatile Write Cache: Present 00:14:15.694 Atomic Write Unit (Normal): 1 00:14:15.694 Atomic Write Unit (PFail): 1 00:14:15.694 Atomic Compare & Write Unit: 1 00:14:15.694 Fused Compare & Write: Supported 00:14:15.694 Scatter-Gather List 00:14:15.694 SGL Command Set: Supported (Dword aligned) 00:14:15.694 SGL Keyed: Not Supported 00:14:15.694 SGL Bit Bucket Descriptor: Not Supported 00:14:15.694 SGL Metadata Pointer: Not Supported 00:14:15.694 Oversized SGL: Not Supported 00:14:15.694 SGL Metadata Address: Not Supported 00:14:15.694 SGL Offset: Not Supported 00:14:15.694 Transport SGL Data Block: Not Supported 00:14:15.694 Replay Protected Memory Block: Not Supported 00:14:15.694 00:14:15.694 Firmware Slot Information 00:14:15.694 ========================= 00:14:15.694 Active slot: 1 00:14:15.694 Slot 1 Firmware Revision: 25.01 00:14:15.694 00:14:15.694 00:14:15.694 Commands Supported and Effects 00:14:15.694 ============================== 00:14:15.694 Admin 
Commands 00:14:15.694 -------------- 00:14:15.694 Get Log Page (02h): Supported 00:14:15.694 Identify (06h): Supported 00:14:15.694 Abort (08h): Supported 00:14:15.694 Set Features (09h): Supported 00:14:15.694 Get Features (0Ah): Supported 00:14:15.694 Asynchronous Event Request (0Ch): Supported 00:14:15.694 Keep Alive (18h): Supported 00:14:15.694 I/O Commands 00:14:15.694 ------------ 00:14:15.694 Flush (00h): Supported LBA-Change 00:14:15.694 Write (01h): Supported LBA-Change 00:14:15.694 Read (02h): Supported 00:14:15.694 Compare (05h): Supported 00:14:15.694 Write Zeroes (08h): Supported LBA-Change 00:14:15.694 Dataset Management (09h): Supported LBA-Change 00:14:15.694 Copy (19h): Supported LBA-Change 00:14:15.694 00:14:15.694 Error Log 00:14:15.694 ========= 00:14:15.694 00:14:15.694 Arbitration 00:14:15.694 =========== 00:14:15.694 Arbitration Burst: 1 00:14:15.694 00:14:15.694 Power Management 00:14:15.694 ================ 00:14:15.694 Number of Power States: 1 00:14:15.694 Current Power State: Power State #0 00:14:15.694 Power State #0: 00:14:15.694 Max Power: 0.00 W 00:14:15.694 Non-Operational State: Operational 00:14:15.694 Entry Latency: Not Reported 00:14:15.694 Exit Latency: Not Reported 00:14:15.694 Relative Read Throughput: 0 00:14:15.694 Relative Read Latency: 0 00:14:15.694 Relative Write Throughput: 0 00:14:15.694 Relative Write Latency: 0 00:14:15.694 Idle Power: Not Reported 00:14:15.694 Active Power: Not Reported 00:14:15.694 Non-Operational Permissive Mode: Not Supported 00:14:15.694 00:14:15.694 Health Information 00:14:15.694 ================== 00:14:15.694 Critical Warnings: 00:14:15.694 Available Spare Space: OK 00:14:15.694 Temperature: OK 00:14:15.694 Device Reliability: OK 00:14:15.694 Read Only: No 00:14:15.694 Volatile Memory Backup: OK 00:14:15.694 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:15.694 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:15.694 Available Spare: 0% 00:14:15.694 Available Spare Threshold: 0% 00:14:15.695 Life Percentage Used: 0% 00:14:15.695 Data Units Read: 0 00:14:15.695 Data Units Written: 0 00:14:15.695 Host Read Commands: 0 00:14:15.695 Host Write Commands: 0 00:14:15.695 Controller Busy Time: 0 minutes 00:14:15.695 Power Cycles: 0 00:14:15.695 Power On Hours: 0 hours 00:14:15.695 Unsafe Shutdowns: 0 00:14:15.695 Unrecoverable Media Errors: 0 00:14:15.695 Lifetime Error Log Entries: 0 00:14:15.695 Warning Temperature Time: 0 minutes 00:14:15.695 Critical Temperature Time: 0 minutes 00:14:15.695 00:14:15.695 Number of Queues 00:14:15.695 ================ 00:14:15.695 Number of I/O Submission Queues: 127 00:14:15.695 Number of I/O Completion Queues: 127 00:14:15.695 00:14:15.695 Active Namespaces 00:14:15.695 ================= 00:14:15.695 Namespace ID:1 00:14:15.695 Error Recovery Timeout: Unlimited 00:14:15.695 Command Set Identifier: NVM (00h) 00:14:15.695 Deallocate: Supported 00:14:15.695 Deallocated/Unwritten Error: Not Supported 00:14:15.695 Deallocated Read Value: Unknown 00:14:15.695 Deallocate in Write Zeroes: Not Supported 00:14:15.695 Deallocated Guard Field: 0xFFFF 00:14:15.695 Flush: Supported 00:14:15.695 Reservation: Supported 00:14:15.695 Namespace Sharing Capabilities: Multiple Controllers 00:14:15.695 Size (in LBAs): 131072 (0GiB) 00:14:15.695 Capacity (in LBAs): 131072 (0GiB) 00:14:15.695 Utilization (in LBAs): 131072 (0GiB) 00:14:15.695 NGUID: B9313B44F1AF4D46816AE2F92B34DC71 00:14:15.695 UUID: b9313b44-f1af-4d46-816a-e2f92b34dc71 00:14:15.695 Thin Provisioning: Not Supported 00:14:15.695 Per-NS Atomic Units: Yes 00:14:15.695 Atomic Boundary Size (Normal): 0 00:14:15.695 Atomic Boundary Size (PFail): 0 00:14:15.695 Atomic Boundary Offset: 0 00:14:15.695 Maximum Single Source Range Length: 65535 00:14:15.695 Maximum Copy Length: 65535 00:14:15.695 Maximum Source Range Count: 1 00:14:15.695 NGUID/EUI64 Never Reused: No 00:14:15.695 Namespace Write Protected: No 00:14:15.695 Number of LBA Formats: 1 00:14:15.695 Current LBA Format: LBA Format #00 00:14:15.695 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:15.695 00:14:15.695
[2024-12-11 09:52:25.159602] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:15.694 [2024-12-11 09:52:25.159611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:15.694 [2024-12-11 09:52:25.159636] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:14:15.694 [2024-12-11 09:52:25.159644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:15.694 [2024-12-11 09:52:25.159650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:15.694 [2024-12-11 09:52:25.159655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:15.694 [2024-12-11 09:52:25.159661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:15.694 [2024-12-11 09:52:25.163028] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:15.694 [2024-12-11 09:52:25.163038] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:15.694 [2024-12-11 09:52:25.163824] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:15.694 [2024-12-11 09:52:25.163871] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:14:15.694 [2024-12-11 09:52:25.163877] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:14:15.694 [2024-12-11 09:52:25.164829] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:15.694 [2024-12-11 09:52:25.164838] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:14:15.694 [2024-12-11 09:52:25.164887] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:15.695 [2024-12-11 09:52:25.165850] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:15.695
09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
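The identify dump above and the perf runs that follow all reach the target through the same vfio-user transport ID string: trtype:VFIOUSER, a traddr naming the listener's socket directory, and the subsystem NQN. A minimal sketch of the target-side RPC sequence that produces such an endpoint is below; the nvmf_subsystem_add_ns and nvmf_subsystem_add_listener calls mirror the ones captured in this run, while the nvmf_create_transport and nvmf_create_subsystem calls and the Malloc1 sizing are assumed standard SPDK rpc.py usage rather than commands taken from this log:

  # Sketch only -- assumes a running nvmf_tgt and SPDK's scripts/rpc.py on PATH.
  rpc.py nvmf_create_transport -t VFIOUSER
  rpc.py bdev_malloc_create 64 512 --name Malloc1        # backing bdev; size/block size illustrative
  rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1        # socket directory doubles as traddr
  rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

Any vfio-user-capable initiator (spdk_nvme_identify, spdk_nvme_perf, and the reconnect, arbitration, hello_world, and overhead examples used later in this run) then attaches with -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'.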
00:14:15.952 [2024-12-11 09:52:25.391043] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:21.217 Initializing NVMe Controllers 00:14:21.217 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:21.217 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:21.217 Initialization complete. Launching workers. 00:14:21.217 ======================================================== 00:14:21.217 Latency(us) 00:14:21.217 Device Information : IOPS MiB/s Average min max 00:14:21.217 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39942.15 156.02 3204.47 961.87 9598.72 00:14:21.217 ======================================================== 00:14:21.217 Total : 39942.15 156.02 3204.47 961.87 9598.72 00:14:21.217 00:14:21.217 [2024-12-11 09:52:30.408531] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:21.217 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:21.217 [2024-12-11 09:52:30.641621] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:26.483 Initializing NVMe Controllers 00:14:26.483 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:26.483 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:26.483 Initialization complete. Launching workers. 
00:14:26.483 ======================================================== 00:14:26.483 Latency(us) 00:14:26.483 Device Information : IOPS MiB/s Average min max 00:14:26.483 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7984.40 7585.27 10975.76 00:14:26.483 ======================================================== 00:14:26.483 Total : 16051.20 62.70 7984.40 7585.27 10975.76 00:14:26.483 00:14:26.484 [2024-12-11 09:52:35.678409] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:26.484 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:26.484 [2024-12-11 09:52:35.891381] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:31.752 [2024-12-11 09:52:40.966500] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:31.752 Initializing NVMe Controllers 00:14:31.752 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:31.752 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:31.752 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:31.752 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:31.752 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:31.752 Initialization complete. Launching workers. 00:14:31.752 Starting thread on core 2 00:14:31.752 Starting thread on core 3 00:14:31.752 Starting thread on core 1 00:14:31.752 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:31.752 [2024-12-11 09:52:41.264397] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:35.036 [2024-12-11 09:52:44.348424] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:35.036 Initializing NVMe Controllers 00:14:35.036 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:35.036 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:35.036 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:35.036 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:35.036 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:35.036 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:35.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:35.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:35.036 Initialization complete. Launching workers. 
00:14:35.036 Starting thread on core 1 with urgent priority queue 00:14:35.036 Starting thread on core 2 with urgent priority queue 00:14:35.036 Starting thread on core 3 with urgent priority queue 00:14:35.036 Starting thread on core 0 with urgent priority queue 00:14:35.036 SPDK bdev Controller (SPDK1 ) core 0: 535.00 IO/s 186.92 secs/100000 ios 00:14:35.036 SPDK bdev Controller (SPDK1 ) core 1: 685.00 IO/s 145.99 secs/100000 ios 00:14:35.036 SPDK bdev Controller (SPDK1 ) core 2: 500.67 IO/s 199.73 secs/100000 ios 00:14:35.036 SPDK bdev Controller (SPDK1 ) core 3: 453.67 IO/s 220.43 secs/100000 ios 00:14:35.036 ======================================================== 00:14:35.036 00:14:35.036 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:35.294 [2024-12-11 09:52:44.650688] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:35.294 Initializing NVMe Controllers 00:14:35.294 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:35.294 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:35.294 Namespace ID: 1 size: 0GB 00:14:35.294 Initialization complete. 00:14:35.294 INFO: using host memory buffer for IO 00:14:35.294 Hello world! 00:14:35.294 [2024-12-11 09:52:44.684915] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:35.294 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:35.552 [2024-12-11 09:52:44.975597] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:36.487 Initializing NVMe Controllers 00:14:36.487 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:36.487 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:36.487 Initialization complete. Launching workers. 
00:14:36.487 submit (in ns) avg, min, max = 7050.0, 3111.4, 4001067.6 00:14:36.487 complete (in ns) avg, min, max = 20274.6, 1726.7, 4025587.6 00:14:36.487 00:14:36.487 Submit histogram 00:14:36.487 ================ 00:14:36.487 Range in us Cumulative Count 00:14:36.487 3.109 - 3.124: 0.0677% ( 11) 00:14:36.487 3.124 - 3.139: 0.1970% ( 21) 00:14:36.487 3.139 - 3.154: 0.5110% ( 51) 00:14:36.487 3.154 - 3.170: 0.7942% ( 46) 00:14:36.487 3.170 - 3.185: 1.2313% ( 71) 00:14:36.487 3.185 - 3.200: 2.3210% ( 177) 00:14:36.487 3.200 - 3.215: 5.7009% ( 549) 00:14:36.487 3.215 - 3.230: 11.1063% ( 878) 00:14:36.487 3.230 - 3.246: 16.2593% ( 837) 00:14:36.487 3.246 - 3.261: 22.4897% ( 1012) 00:14:36.487 3.261 - 3.276: 29.3111% ( 1108) 00:14:36.487 3.276 - 3.291: 35.4676% ( 1000) 00:14:36.487 3.291 - 3.307: 41.2670% ( 942) 00:14:36.487 3.307 - 3.322: 46.5123% ( 852) 00:14:36.487 3.322 - 3.337: 52.0163% ( 894) 00:14:36.487 3.337 - 3.352: 57.1323% ( 831) 00:14:36.487 3.352 - 3.368: 64.5139% ( 1199) 00:14:36.488 3.368 - 3.383: 70.7259% ( 1009) 00:14:36.488 3.383 - 3.398: 76.1374% ( 879) 00:14:36.488 3.398 - 3.413: 81.0257% ( 794) 00:14:36.488 3.413 - 3.429: 83.7592% ( 444) 00:14:36.488 3.429 - 3.444: 86.0001% ( 364) 00:14:36.488 3.444 - 3.459: 87.0221% ( 166) 00:14:36.488 3.459 - 3.474: 87.5823% ( 91) 00:14:36.488 3.474 - 3.490: 88.0626% ( 78) 00:14:36.488 3.490 - 3.505: 88.5058% ( 72) 00:14:36.488 3.505 - 3.520: 89.0845% ( 94) 00:14:36.488 3.520 - 3.535: 89.9157% ( 135) 00:14:36.488 3.535 - 3.550: 90.7960% ( 143) 00:14:36.488 3.550 - 3.566: 91.5902% ( 129) 00:14:36.488 3.566 - 3.581: 92.5199% ( 151) 00:14:36.488 3.581 - 3.596: 93.4987% ( 159) 00:14:36.488 3.596 - 3.611: 94.4838% ( 160) 00:14:36.488 3.611 - 3.627: 95.4750% ( 161) 00:14:36.488 3.627 - 3.642: 96.2630% ( 128) 00:14:36.488 3.642 - 3.657: 97.1003% ( 136) 00:14:36.488 3.657 - 3.672: 97.7283% ( 102) 00:14:36.488 3.672 - 3.688: 98.2269% ( 81) 00:14:36.488 3.688 - 3.703: 98.5902% ( 59) 00:14:36.488 3.703 - 3.718: 98.8857% ( 48) 00:14:36.488 3.718 - 3.733: 99.1012% ( 35) 00:14:36.488 3.733 - 3.749: 99.2366% ( 22) 00:14:36.488 3.749 - 3.764: 99.3659% ( 21) 00:14:36.488 3.764 - 3.779: 99.4398% ( 12) 00:14:36.488 3.779 - 3.794: 99.4829% ( 7) 00:14:36.488 3.794 - 3.810: 99.5013% ( 3) 00:14:36.488 3.810 - 3.825: 99.5136% ( 2) 00:14:36.488 3.825 - 3.840: 99.5198% ( 1) 00:14:36.488 3.840 - 3.855: 99.5259% ( 1) 00:14:36.488 3.855 - 3.870: 99.5321% ( 1) 00:14:36.488 3.870 - 3.886: 99.5383% ( 1) 00:14:36.488 3.886 - 3.901: 99.5567% ( 3) 00:14:36.488 3.901 - 3.931: 99.5752% ( 3) 00:14:36.488 3.931 - 3.962: 99.5814% ( 1) 00:14:36.488 3.992 - 4.023: 99.5937% ( 2) 00:14:36.488 4.023 - 4.053: 99.5998% ( 1) 00:14:36.488 4.084 - 4.114: 99.6060% ( 1) 00:14:36.488 4.114 - 4.145: 99.6121% ( 1) 00:14:36.488 4.145 - 4.175: 99.6183% ( 1) 00:14:36.488 4.998 - 5.029: 99.6245% ( 1) 00:14:36.488 5.181 - 5.211: 99.6306% ( 1) 00:14:36.488 5.242 - 5.272: 99.6368% ( 1) 00:14:36.488 5.272 - 5.303: 99.6429% ( 1) 00:14:36.488 5.425 - 5.455: 99.6491% ( 1) 00:14:36.488 5.516 - 5.547: 99.6552% ( 1) 00:14:36.488 5.577 - 5.608: 99.6675% ( 2) 00:14:36.488 5.699 - 5.730: 99.6737% ( 1) 00:14:36.488 5.730 - 5.760: 99.6799% ( 1) 00:14:36.488 5.790 - 5.821: 99.6860% ( 1) 00:14:36.488 5.943 - 5.973: 99.6922% ( 1) 00:14:36.488 6.034 - 6.065: 99.7045% ( 2) 00:14:36.488 6.248 - 6.278: 99.7106% ( 1) 00:14:36.488 6.278 - 6.309: 99.7291% ( 3) 00:14:36.488 6.339 - 6.370: 99.7353% ( 1) 00:14:36.488 6.430 - 6.461: 99.7414% ( 1) 00:14:36.488 6.461 - 6.491: 99.7537% ( 2) 00:14:36.488 6.491 - 6.522: 
99.7599% ( 1) 00:14:36.488 6.522 - 6.552: 99.7661% ( 1) 00:14:36.488 6.552 - 6.583: 99.7722% ( 1) 00:14:36.488 6.674 - 6.705: 99.7845% ( 2) 00:14:36.488 6.735 - 6.766: 99.7907% ( 1) 00:14:36.488 6.766 - 6.796: 99.7968% ( 1) 00:14:36.488 6.827 - 6.857: 99.8030% ( 1) 00:14:36.488 6.949 - 6.979: 99.8215% ( 3) 00:14:36.488 6.979 - 7.010: 99.8276% ( 1) 00:14:36.488 7.040 - 7.070: 99.8338% ( 1) 00:14:36.488 7.070 - 7.101: 99.8399% ( 1) 00:14:36.488 7.223 - 7.253: 99.8461% ( 1) 00:14:36.488 7.345 - 7.375: 99.8522% ( 1) 00:14:36.488 7.467 - 7.497: 99.8584% ( 1) 00:14:36.488 7.771 - 7.802: 99.8646% ( 1) 00:14:36.488 7.924 - 7.985: 99.8707% ( 1) 00:14:36.488 8.107 - 8.168: 99.8769% ( 1) 00:14:36.488 13.592 - 13.653: 99.8830% ( 1) 00:14:36.488 13.775 - 13.836: 99.8892% ( 1) 00:14:36.488 13.836 - 13.897: 99.8953% ( 1) 00:14:36.488 19.017 - 19.139: 99.9015% ( 1) 00:14:36.488 19.383 - 19.505: 99.9077% ( 1) 00:14:36.488 3994.575 - 4025.783: 100.0000% ( 15) 00:14:36.488 00:14:36.488 Complete histogram 00:14:36.488 ================== 00:14:36.488 Range in us Cumulative Count 00:14:36.488 1.722 - 1.730: 0.0308% ( 5) 00:14:36.488 1.730 - 1.737: 0.2339% ( 33) 00:14:36.488 1.737 - 1.745: 1.0097% ( 126) 00:14:36.488 1.745 - 1.752: 1.8469% ( 136) 00:14:36.488 1.752 - 1.760: 2.2902% ( 72) 00:14:36.488 1.760 - 1.768: 2.5857% ( 48) 00:14:36.488 1.768 - 1.775: 2.9305% ( 56) 00:14:36.488 1.775 - 1.783: 5.1715% ( 364) 00:14:36.488 1.783 - 1.790: 13.3473% ( 1328) 00:14:36.488 1.790 - 1.798: 27.6796% ( 2328) 00:14:36.488 1.798 - 1.806: 39.7587% ( 1962) 00:14:36.488 1.806 - 1.813: 46.1306% ( 1035) 00:14:36.488 1.813 - 1.821: 50.2370% ( 667) 00:14:36.488 1.821 - 1.829: 56.3751% ( 997) 00:14:36.488 1.829 - 1.836: 68.1524% ( 1913) 00:14:36.488 1.836 - 1.844: 81.2658% ( 2130) 00:14:36.488 1.844 - 1.851: 89.3677% ( 1316) 00:14:36.488 1.851 - 1.859: 93.7081% ( 705) 00:14:36.488 1.859 - 1.867: 96.4292% ( 442) 00:14:36.488 1.867 - 1.874: 98.1161% ( 274) 00:14:36.488 1.874 - 1.882: 98.7133% ( 97) 00:14:36.488 1.882 - 1.890: 98.9165% ( 33) 00:14:36.488 1.890 - 1.897: 99.0150% ( 16) 00:14:36.488 1.897 - 1.905: 99.0519% ( 6) 00:14:36.488 1.905 - 1.912: 99.0888% ( 6) 00:14:36.488 1.920 - 1.928: 99.1012% ( 2) 00:14:36.488 1.928 - 1.935: 99.1073% ( 1) 00:14:36.488 1.943 - 1.950: 99.1135% ( 1) 00:14:36.488 1.950 - 1.966: 99.1258% ( 2) 00:14:36.488 1.966 - 1.981: 99.1442% ( 3) 00:14:36.488 1.981 - 1.996: 99.1566% ( 2) 00:14:36.488 1.996 - 2.011: 99.1627% ( 1) 00:14:36.488 2.011 - 2.027: 99.1873% ( 4) 00:14:36.488 2.042 - 2.057: 99.2120% ( 4) 00:14:36.488 2.057 - 2.072: 99.2243% ( 2) 00:14:36.488 2.072 - 2.088: 99.2304% ( 1) 00:14:36.488 2.088 - 2.103: 99.2366% ( 1) 00:14:36.488 2.103 - 2.118: 99.2428% ( 1) 00:14:36.488 2.118 - 2.133: 99.2551% ( 2) 00:14:36.488 2.133 - 2.149: 99.2674% ( 2) 00:14:36.488 2.149 - 2.164: 99.2797% ( 2) 00:14:36.488 2.164 - 2.179: 99.2858% ( 1) 00:14:36.488 2.194 - 2.210: 99.2920% ( 1) 00:14:36.488 2.210 - 2.225: 99.3105% ( 3) 00:14:36.488 2.270 - 2.286: 99.3228% ( 2) 00:14:36.488 2.316 - 2.331: 99.3351% ( 2) 00:14:36.488 2.331 - 2.347: 99.3474% ( 2) 00:14:36.488 2.377 - 2.392: 99.3536% ( 1) 00:14:36.488 3.398 - 3.413: 99.3597% ( 1) 00:14:36.488 3.581 - 3.596: 99.3659% ( 1) 00:14:36.488 3.657 - 3.672: 99.3720% ( 1) 00:14:36.488 3.962 - 3.992: 99.3782% ( 1) 00:14:36.488 3.992 - 4.023: 99.3844% ( 1) 00:14:36.488 4.084 - 4.114: 99.3967% ( 2) 00:14:36.488 4.206 - 4.236: 99.4028% ( 1) 00:14:36.488 4.389 - 4.419: 99.4090% ( 1) 00:14:36.488 4.419 - 4.450: 99.4151% ( 1) 00:14:36.488 4.510 - 4.541: 99.4213% ( 1) 
00:14:36.488 4.541 - 4.571: 99.4336% ( 2) 00:14:36.488 4.632 - 4.663: 99.4398% ( 1) 00:14:36.488 5.242 - 5.272: 99.4459% ( 1) 00:14:36.488 5.272 - 5.303: 99.4521% ( 1) 00:14:36.488 5.455 - 5.486: 99.4582% ( 1) 00:14:36.488 5.669 - 5.699: 99.4644% ( 1) 00:14:36.488 5.882 - 5.912: 99.4705% ( 1) 00:14:36.488 6.156 - 6.187: 99.4767% ( 1) 00:14:36.488 [2024-12-11 09:52:45.997670] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:36.488 6.217 - 6.248: 99.4829% ( 1) 00:14:36.488 7.741 - 7.771: 99.4890% ( 1) 00:14:36.488 10.789 - 10.850: 99.4952% ( 1) 00:14:36.488 12.130 - 12.190: 99.5013% ( 1) 00:14:36.488 12.190 - 12.251: 99.5198% ( 3) 00:14:36.488 14.811 - 14.872: 99.5259% ( 1) 00:14:36.488 17.432 - 17.554: 99.5383% ( 2) 00:14:36.488 3994.575 - 4025.783: 100.0000% ( 75) 00:14:36.488 00:14:36.488
09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:14:36.488 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:36.488 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:14:36.488 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:14:36.488 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:36.747 [ 00:14:36.747 { 00:14:36.747 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:36.747 "subtype": "Discovery", 00:14:36.747 "listen_addresses": [], 00:14:36.747 "allow_any_host": true, 00:14:36.747 "hosts": [] 00:14:36.747 }, 00:14:36.747 { 00:14:36.747 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:36.747 "subtype": "NVMe", 00:14:36.747 "listen_addresses": [ 00:14:36.747 { 00:14:36.747 "trtype": "VFIOUSER", 00:14:36.747 "adrfam": "IPv4", 00:14:36.747 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:36.747 "trsvcid": "0" 00:14:36.747 } 00:14:36.747 ], 00:14:36.747 "allow_any_host": true, 00:14:36.747 "hosts": [], 00:14:36.747 "serial_number": "SPDK1", 00:14:36.747 "model_number": "SPDK bdev Controller", 00:14:36.747 "max_namespaces": 32, 00:14:36.747 "min_cntlid": 1, 00:14:36.747 "max_cntlid": 65519, 00:14:36.747 "namespaces": [ 00:14:36.747 { 00:14:36.747 "nsid": 1, 00:14:36.747 "bdev_name": "Malloc1", 00:14:36.747 "name": "Malloc1", 00:14:36.747 "nguid": "B9313B44F1AF4D46816AE2F92B34DC71", 00:14:36.747 "uuid": "b9313b44-f1af-4d46-816a-e2f92b34dc71" 00:14:36.747 } 00:14:36.747 ] 00:14:36.747 }, 00:14:36.747 { 00:14:36.747 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:36.747 "subtype": "NVMe", 00:14:36.747 "listen_addresses": [ 00:14:36.747 { 00:14:36.747 "trtype": "VFIOUSER", 00:14:36.747 "adrfam": "IPv4", 00:14:36.747 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:36.747 "trsvcid": "0" 00:14:36.747 } 00:14:36.747 ], 00:14:36.747 "allow_any_host": true, 00:14:36.747 "hosts": [], 00:14:36.747 "serial_number": "SPDK2", 00:14:36.747 "model_number": "SPDK bdev Controller", 00:14:36.747 "max_namespaces": 32, 00:14:36.747 "min_cntlid": 1, 00:14:36.747 "max_cntlid": 65519, 00:14:36.747 "namespaces": [ 00:14:36.747 { 00:14:36.747 "nsid": 1, 00:14:36.747 "bdev_name": "Malloc2", 00:14:36.747 "name": "Malloc2", 00:14:36.747 "nguid": "357EB7147FA347D5822493E266F1C3F1",
00:14:36.747 "uuid": "357eb714-7fa3-47d5-8224-93e266f1c3f1" 00:14:36.747 } 00:14:36.747 ] 00:14:36.747 } 00:14:36.747 ] 00:14:36.747 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:36.747 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=44863 00:14:36.747 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:36.747 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:14:36.747 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:14:36.747 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:36.747 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:36.747 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:14:36.747 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:36.747 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:37.006 [2024-12-11 09:52:46.401817] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:37.006 Malloc3 00:14:37.006 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:37.265 [2024-12-11 09:52:46.629484] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:37.265 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:37.265 Asynchronous Event Request test 00:14:37.265 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:37.265 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:37.265 Registering asynchronous event callbacks... 00:14:37.265 Starting namespace attribute notice tests for all controllers... 00:14:37.265 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:37.265 aer_cb - Changed Namespace 00:14:37.265 Cleaning up... 
00:14:37.265 [ 00:14:37.265 { 00:14:37.265 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:37.265 "subtype": "Discovery", 00:14:37.265 "listen_addresses": [], 00:14:37.265 "allow_any_host": true, 00:14:37.265 "hosts": [] 00:14:37.265 }, 00:14:37.265 { 00:14:37.265 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:37.265 "subtype": "NVMe", 00:14:37.265 "listen_addresses": [ 00:14:37.265 { 00:14:37.265 "trtype": "VFIOUSER", 00:14:37.265 "adrfam": "IPv4", 00:14:37.265 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:37.265 "trsvcid": "0" 00:14:37.265 } 00:14:37.265 ], 00:14:37.265 "allow_any_host": true, 00:14:37.265 "hosts": [], 00:14:37.265 "serial_number": "SPDK1", 00:14:37.265 "model_number": "SPDK bdev Controller", 00:14:37.265 "max_namespaces": 32, 00:14:37.265 "min_cntlid": 1, 00:14:37.265 "max_cntlid": 65519, 00:14:37.265 "namespaces": [ 00:14:37.265 { 00:14:37.265 "nsid": 1, 00:14:37.265 "bdev_name": "Malloc1", 00:14:37.265 "name": "Malloc1", 00:14:37.265 "nguid": "B9313B44F1AF4D46816AE2F92B34DC71", 00:14:37.265 "uuid": "b9313b44-f1af-4d46-816a-e2f92b34dc71" 00:14:37.265 }, 00:14:37.265 { 00:14:37.265 "nsid": 2, 00:14:37.265 "bdev_name": "Malloc3", 00:14:37.265 "name": "Malloc3", 00:14:37.265 "nguid": "F054221D37AE42D9937DDDFED8ECA8C1", 00:14:37.265 "uuid": "f054221d-37ae-42d9-937d-ddfed8eca8c1" 00:14:37.265 } 00:14:37.265 ] 00:14:37.265 }, 00:14:37.265 { 00:14:37.265 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:37.265 "subtype": "NVMe", 00:14:37.265 "listen_addresses": [ 00:14:37.265 { 00:14:37.265 "trtype": "VFIOUSER", 00:14:37.265 "adrfam": "IPv4", 00:14:37.265 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:37.265 "trsvcid": "0" 00:14:37.265 } 00:14:37.265 ], 00:14:37.265 "allow_any_host": true, 00:14:37.265 "hosts": [], 00:14:37.265 "serial_number": "SPDK2", 00:14:37.265 "model_number": "SPDK bdev Controller", 00:14:37.265 "max_namespaces": 32, 00:14:37.265 "min_cntlid": 1, 00:14:37.265 "max_cntlid": 65519, 00:14:37.265 "namespaces": [ 00:14:37.265 { 00:14:37.265 "nsid": 1, 00:14:37.265 "bdev_name": "Malloc2", 00:14:37.265 "name": "Malloc2", 00:14:37.265 "nguid": "357EB7147FA347D5822493E266F1C3F1", 00:14:37.265 "uuid": "357eb714-7fa3-47d5-8224-93e266f1c3f1" 00:14:37.265 } 00:14:37.265 ] 00:14:37.265 } 00:14:37.265 ] 00:14:37.525 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 44863 00:14:37.525 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:37.525 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:37.525 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:37.525 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:37.525 [2024-12-11 09:52:46.873411] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
00:14:37.525 [2024-12-11 09:52:46.873445] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid44879 ] 00:14:37.525 [2024-12-11 09:52:46.914571] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:37.525 [2024-12-11 09:52:46.916806] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:37.525 [2024-12-11 09:52:46.916829] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f6a1bb57000 00:14:37.525 [2024-12-11 09:52:46.917803] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:37.525 [2024-12-11 09:52:46.918816] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:37.525 [2024-12-11 09:52:46.919824] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:37.525 [2024-12-11 09:52:46.921221] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:37.525 [2024-12-11 09:52:46.921841] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:37.525 [2024-12-11 09:52:46.922850] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:37.525 [2024-12-11 09:52:46.923854] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:37.525 [2024-12-11 09:52:46.924860] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:37.525 [2024-12-11 09:52:46.925867] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:37.525 [2024-12-11 09:52:46.925877] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f6a1bb4c000 00:14:37.525 [2024-12-11 09:52:46.926856] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:37.525 [2024-12-11 09:52:46.939948] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:37.525 [2024-12-11 09:52:46.939973] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:14:37.525 [2024-12-11 09:52:46.945059] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:37.525 [2024-12-11 09:52:46.945093] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:37.525 [2024-12-11 09:52:46.945163] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:14:37.525 
[2024-12-11 09:52:46.945176] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:14:37.525 [2024-12-11 09:52:46.945181] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:14:37.525 [2024-12-11 09:52:46.946055] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:37.525 [2024-12-11 09:52:46.946064] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:14:37.525 [2024-12-11 09:52:46.946071] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:14:37.525 [2024-12-11 09:52:46.947060] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:37.525 [2024-12-11 09:52:46.947069] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:14:37.525 [2024-12-11 09:52:46.947075] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:37.525 [2024-12-11 09:52:46.948065] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:37.525 [2024-12-11 09:52:46.948074] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:37.525 [2024-12-11 09:52:46.949075] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:37.525 [2024-12-11 09:52:46.949083] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:37.525 [2024-12-11 09:52:46.949088] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:37.525 [2024-12-11 09:52:46.949094] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:37.525 [2024-12-11 09:52:46.949201] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:14:37.525 [2024-12-11 09:52:46.949205] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:37.525 [2024-12-11 09:52:46.949210] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:37.525 [2024-12-11 09:52:46.950077] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:37.525 [2024-12-11 09:52:46.951087] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:37.525 [2024-12-11 09:52:46.952102] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: 
*DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:37.525 [2024-12-11 09:52:46.953109] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:37.525 [2024-12-11 09:52:46.953145] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:37.525 [2024-12-11 09:52:46.954119] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:37.525 [2024-12-11 09:52:46.954128] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:37.525 [2024-12-11 09:52:46.954132] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:37.525 [2024-12-11 09:52:46.954149] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:14:37.525 [2024-12-11 09:52:46.954159] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:37.525 [2024-12-11 09:52:46.954169] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:37.525 [2024-12-11 09:52:46.954173] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:37.525 [2024-12-11 09:52:46.954176] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:37.525 [2024-12-11 09:52:46.954187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:37.525 [2024-12-11 09:52:46.962223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:37.525 [2024-12-11 09:52:46.962234] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:14:37.525 [2024-12-11 09:52:46.962239] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:14:37.525 [2024-12-11 09:52:46.962242] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:14:37.525 [2024-12-11 09:52:46.962246] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:37.525 [2024-12-11 09:52:46.962251] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:14:37.525 [2024-12-11 09:52:46.962255] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:14:37.526 [2024-12-11 09:52:46.962259] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:14:37.526 [2024-12-11 09:52:46.962268] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:37.526 [2024-12-11 
09:52:46.962278] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:37.526 [2024-12-11 09:52:46.970223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:37.526 [2024-12-11 09:52:46.970236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:37.526 [2024-12-11 09:52:46.970244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:37.526 [2024-12-11 09:52:46.970251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:37.526 [2024-12-11 09:52:46.970258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:37.526 [2024-12-11 09:52:46.970263] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:37.526 [2024-12-11 09:52:46.970273] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:37.526 [2024-12-11 09:52:46.970281] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:37.526 [2024-12-11 09:52:46.978222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:37.526 [2024-12-11 09:52:46.978229] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:14:37.526 [2024-12-11 09:52:46.978233] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:37.526 [2024-12-11 09:52:46.978239] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:14:37.526 [2024-12-11 09:52:46.978244] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:14:37.526 [2024-12-11 09:52:46.978252] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:37.526 [2024-12-11 09:52:46.986221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:37.526 [2024-12-11 09:52:46.986272] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:14:37.526 [2024-12-11 09:52:46.986281] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:37.526 [2024-12-11 09:52:46.986288] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:37.526 [2024-12-11 09:52:46.986292] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002f9000 00:14:37.526 [2024-12-11 09:52:46.986295] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:37.526 [2024-12-11 09:52:46.986301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:37.526 [2024-12-11 09:52:46.994223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:37.526 [2024-12-11 09:52:46.994232] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:14:37.526 [2024-12-11 09:52:46.994240] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:14:37.526 [2024-12-11 09:52:46.994247] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:37.526 [2024-12-11 09:52:46.994253] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:37.526 [2024-12-11 09:52:46.994259] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:37.526 [2024-12-11 09:52:46.994262] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:37.526 [2024-12-11 09:52:46.994268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:37.526 [2024-12-11 09:52:47.002223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:37.526 [2024-12-11 09:52:47.002237] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:37.526 [2024-12-11 09:52:47.002244] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:37.526 [2024-12-11 09:52:47.002251] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:37.526 [2024-12-11 09:52:47.002255] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:37.526 [2024-12-11 09:52:47.002258] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:37.526 [2024-12-11 09:52:47.002264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:37.526 [2024-12-11 09:52:47.010222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:37.526 [2024-12-11 09:52:47.010231] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:37.526 [2024-12-11 09:52:47.010237] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:37.526 [2024-12-11 09:52:47.010244] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set 
supported features (timeout 30000 ms) 00:14:37.526 [2024-12-11 09:52:47.010249] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:14:37.526 [2024-12-11 09:52:47.010254] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:37.526 [2024-12-11 09:52:47.010258] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:14:37.526 [2024-12-11 09:52:47.010262] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:37.526 [2024-12-11 09:52:47.010266] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:14:37.526 [2024-12-11 09:52:47.010271] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:14:37.526 [2024-12-11 09:52:47.010285] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:37.526 [2024-12-11 09:52:47.018225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:37.526 [2024-12-11 09:52:47.018239] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:37.526 [2024-12-11 09:52:47.026221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:37.526 [2024-12-11 09:52:47.026234] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:37.526 [2024-12-11 09:52:47.034223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:37.526 [2024-12-11 09:52:47.034240] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:37.526 [2024-12-11 09:52:47.042222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:37.526 [2024-12-11 09:52:47.042239] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:37.526 [2024-12-11 09:52:47.042243] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:37.526 [2024-12-11 09:52:47.042246] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:37.526 [2024-12-11 09:52:47.042250] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:37.526 [2024-12-11 09:52:47.042253] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:37.526 [2024-12-11 09:52:47.042259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:37.526 [2024-12-11 09:52:47.042267] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:37.526 
[2024-12-11 09:52:47.042273] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:37.526 [2024-12-11 09:52:47.042278] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:37.526 [2024-12-11 09:52:47.042285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:37.526 [2024-12-11 09:52:47.042294] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:37.526 [2024-12-11 09:52:47.042300] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:37.526 [2024-12-11 09:52:47.042303] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:37.526 [2024-12-11 09:52:47.042309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:37.526 [2024-12-11 09:52:47.042319] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:37.526 [2024-12-11 09:52:47.042325] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:37.526 [2024-12-11 09:52:47.042329] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:37.526 [2024-12-11 09:52:47.042336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:37.526 [2024-12-11 09:52:47.050224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:37.526 [2024-12-11 09:52:47.050239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:37.526 [2024-12-11 09:52:47.050248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:37.526 [2024-12-11 09:52:47.050254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:37.526 ===================================================== 00:14:37.526 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:37.526 ===================================================== 00:14:37.526 Controller Capabilities/Features 00:14:37.526 ================================ 00:14:37.526 Vendor ID: 4e58 00:14:37.526 Subsystem Vendor ID: 4e58 00:14:37.526 Serial Number: SPDK2 00:14:37.527 Model Number: SPDK bdev Controller 00:14:37.527 Firmware Version: 25.01 00:14:37.527 Recommended Arb Burst: 6 00:14:37.527 IEEE OUI Identifier: 8d 6b 50 00:14:37.527 Multi-path I/O 00:14:37.527 May have multiple subsystem ports: Yes 00:14:37.527 May have multiple controllers: Yes 00:14:37.527 Associated with SR-IOV VF: No 00:14:37.527 Max Data Transfer Size: 131072 00:14:37.527 Max Number of Namespaces: 32 00:14:37.527 Max Number of I/O Queues: 127 00:14:37.527 NVMe Specification Version (VS): 1.3 00:14:37.527 NVMe Specification Version (Identify): 1.3 00:14:37.527 Maximum Queue Entries: 256 00:14:37.527 Contiguous Queues Required: Yes 00:14:37.527 Arbitration Mechanisms Supported 00:14:37.527 Weighted Round Robin: Not Supported 00:14:37.527 Vendor Specific: Not 
Supported 00:14:37.527 Reset Timeout: 15000 ms 00:14:37.527 Doorbell Stride: 4 bytes 00:14:37.527 NVM Subsystem Reset: Not Supported 00:14:37.527 Command Sets Supported 00:14:37.527 NVM Command Set: Supported 00:14:37.527 Boot Partition: Not Supported 00:14:37.527 Memory Page Size Minimum: 4096 bytes 00:14:37.527 Memory Page Size Maximum: 4096 bytes 00:14:37.527 Persistent Memory Region: Not Supported 00:14:37.527 Optional Asynchronous Events Supported 00:14:37.527 Namespace Attribute Notices: Supported 00:14:37.527 Firmware Activation Notices: Not Supported 00:14:37.527 ANA Change Notices: Not Supported 00:14:37.527 PLE Aggregate Log Change Notices: Not Supported 00:14:37.527 LBA Status Info Alert Notices: Not Supported 00:14:37.527 EGE Aggregate Log Change Notices: Not Supported 00:14:37.527 Normal NVM Subsystem Shutdown event: Not Supported 00:14:37.527 Zone Descriptor Change Notices: Not Supported 00:14:37.527 Discovery Log Change Notices: Not Supported 00:14:37.527 Controller Attributes 00:14:37.527 128-bit Host Identifier: Supported 00:14:37.527 Non-Operational Permissive Mode: Not Supported 00:14:37.527 NVM Sets: Not Supported 00:14:37.527 Read Recovery Levels: Not Supported 00:14:37.527 Endurance Groups: Not Supported 00:14:37.527 Predictable Latency Mode: Not Supported 00:14:37.527 Traffic Based Keep ALive: Not Supported 00:14:37.527 Namespace Granularity: Not Supported 00:14:37.527 SQ Associations: Not Supported 00:14:37.527 UUID List: Not Supported 00:14:37.527 Multi-Domain Subsystem: Not Supported 00:14:37.527 Fixed Capacity Management: Not Supported 00:14:37.527 Variable Capacity Management: Not Supported 00:14:37.527 Delete Endurance Group: Not Supported 00:14:37.527 Delete NVM Set: Not Supported 00:14:37.527 Extended LBA Formats Supported: Not Supported 00:14:37.527 Flexible Data Placement Supported: Not Supported 00:14:37.527 00:14:37.527 Controller Memory Buffer Support 00:14:37.527 ================================ 00:14:37.527 Supported: No 00:14:37.527 00:14:37.527 Persistent Memory Region Support 00:14:37.527 ================================ 00:14:37.527 Supported: No 00:14:37.527 00:14:37.527 Admin Command Set Attributes 00:14:37.527 ============================ 00:14:37.527 Security Send/Receive: Not Supported 00:14:37.527 Format NVM: Not Supported 00:14:37.527 Firmware Activate/Download: Not Supported 00:14:37.527 Namespace Management: Not Supported 00:14:37.527 Device Self-Test: Not Supported 00:14:37.527 Directives: Not Supported 00:14:37.527 NVMe-MI: Not Supported 00:14:37.527 Virtualization Management: Not Supported 00:14:37.527 Doorbell Buffer Config: Not Supported 00:14:37.527 Get LBA Status Capability: Not Supported 00:14:37.527 Command & Feature Lockdown Capability: Not Supported 00:14:37.527 Abort Command Limit: 4 00:14:37.527 Async Event Request Limit: 4 00:14:37.527 Number of Firmware Slots: N/A 00:14:37.527 Firmware Slot 1 Read-Only: N/A 00:14:37.527 Firmware Activation Without Reset: N/A 00:14:37.527 Multiple Update Detection Support: N/A 00:14:37.527 Firmware Update Granularity: No Information Provided 00:14:37.527 Per-Namespace SMART Log: No 00:14:37.527 Asymmetric Namespace Access Log Page: Not Supported 00:14:37.527 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:37.527 Command Effects Log Page: Supported 00:14:37.527 Get Log Page Extended Data: Supported 00:14:37.527 Telemetry Log Pages: Not Supported 00:14:37.527 Persistent Event Log Pages: Not Supported 00:14:37.527 Supported Log Pages Log Page: May Support 00:14:37.527 Commands Supported & 
Effects Log Page: Not Supported 00:14:37.527 Feature Identifiers & Effects Log Page:May Support 00:14:37.527 NVMe-MI Commands & Effects Log Page: May Support 00:14:37.527 Data Area 4 for Telemetry Log: Not Supported 00:14:37.527 Error Log Page Entries Supported: 128 00:14:37.527 Keep Alive: Supported 00:14:37.527 Keep Alive Granularity: 10000 ms 00:14:37.527 00:14:37.527 NVM Command Set Attributes 00:14:37.527 ========================== 00:14:37.527 Submission Queue Entry Size 00:14:37.527 Max: 64 00:14:37.527 Min: 64 00:14:37.527 Completion Queue Entry Size 00:14:37.527 Max: 16 00:14:37.527 Min: 16 00:14:37.527 Number of Namespaces: 32 00:14:37.527 Compare Command: Supported 00:14:37.527 Write Uncorrectable Command: Not Supported 00:14:37.527 Dataset Management Command: Supported 00:14:37.527 Write Zeroes Command: Supported 00:14:37.527 Set Features Save Field: Not Supported 00:14:37.527 Reservations: Not Supported 00:14:37.527 Timestamp: Not Supported 00:14:37.527 Copy: Supported 00:14:37.527 Volatile Write Cache: Present 00:14:37.527 Atomic Write Unit (Normal): 1 00:14:37.527 Atomic Write Unit (PFail): 1 00:14:37.527 Atomic Compare & Write Unit: 1 00:14:37.527 Fused Compare & Write: Supported 00:14:37.527 Scatter-Gather List 00:14:37.527 SGL Command Set: Supported (Dword aligned) 00:14:37.527 SGL Keyed: Not Supported 00:14:37.527 SGL Bit Bucket Descriptor: Not Supported 00:14:37.527 SGL Metadata Pointer: Not Supported 00:14:37.527 Oversized SGL: Not Supported 00:14:37.527 SGL Metadata Address: Not Supported 00:14:37.527 SGL Offset: Not Supported 00:14:37.527 Transport SGL Data Block: Not Supported 00:14:37.527 Replay Protected Memory Block: Not Supported 00:14:37.527 00:14:37.527 Firmware Slot Information 00:14:37.527 ========================= 00:14:37.527 Active slot: 1 00:14:37.527 Slot 1 Firmware Revision: 25.01 00:14:37.527 00:14:37.527 00:14:37.527 Commands Supported and Effects 00:14:37.527 ============================== 00:14:37.527 Admin Commands 00:14:37.527 -------------- 00:14:37.527 Get Log Page (02h): Supported 00:14:37.527 Identify (06h): Supported 00:14:37.527 Abort (08h): Supported 00:14:37.527 Set Features (09h): Supported 00:14:37.527 Get Features (0Ah): Supported 00:14:37.527 Asynchronous Event Request (0Ch): Supported 00:14:37.527 Keep Alive (18h): Supported 00:14:37.527 I/O Commands 00:14:37.527 ------------ 00:14:37.527 Flush (00h): Supported LBA-Change 00:14:37.527 Write (01h): Supported LBA-Change 00:14:37.527 Read (02h): Supported 00:14:37.527 Compare (05h): Supported 00:14:37.527 Write Zeroes (08h): Supported LBA-Change 00:14:37.527 Dataset Management (09h): Supported LBA-Change 00:14:37.527 Copy (19h): Supported LBA-Change 00:14:37.527 00:14:37.527 Error Log 00:14:37.527 ========= 00:14:37.527 00:14:37.527 Arbitration 00:14:37.527 =========== 00:14:37.527 Arbitration Burst: 1 00:14:37.527 00:14:37.527 Power Management 00:14:37.527 ================ 00:14:37.527 Number of Power States: 1 00:14:37.527 Current Power State: Power State #0 00:14:37.527 Power State #0: 00:14:37.527 Max Power: 0.00 W 00:14:37.527 Non-Operational State: Operational 00:14:37.527 Entry Latency: Not Reported 00:14:37.527 Exit Latency: Not Reported 00:14:37.527 Relative Read Throughput: 0 00:14:37.527 Relative Read Latency: 0 00:14:37.527 Relative Write Throughput: 0 00:14:37.527 Relative Write Latency: 0 00:14:37.527 Idle Power: Not Reported 00:14:37.527 Active Power: Not Reported 00:14:37.527 Non-Operational Permissive Mode: Not Supported 00:14:37.527 00:14:37.527 Health Information 
00:14:37.527 ================== 00:14:37.527 Critical Warnings: 00:14:37.527 Available Spare Space: OK 00:14:37.527 Temperature: OK 00:14:37.527 Device Reliability: OK 00:14:37.527 Read Only: No 00:14:37.527 Volatile Memory Backup: OK 00:14:37.527 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:37.527 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:37.527 Available Spare: 0% 00:14:37.527 [2024-12-11 09:52:47.050339] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:37.527 [2024-12-11 09:52:47.058223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:37.527 [2024-12-11 09:52:47.058254] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:14:37.527 [2024-12-11 09:52:47.058263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.527 [2024-12-11 09:52:47.058271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.527 [2024-12-11 09:52:47.058276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.528 [2024-12-11 09:52:47.058282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.528 [2024-12-11 09:52:47.058336] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:37.528 [2024-12-11 09:52:47.058347] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:14:37.528 [2024-12-11 09:52:47.059337] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:37.528 [2024-12-11 09:52:47.059379] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:14:37.528 [2024-12-11 09:52:47.059388] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:14:37.528 [2024-12-11 09:52:47.060350] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:14:37.528 [2024-12-11 09:52:47.060361] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:14:37.528 [2024-12-11 09:52:47.060409] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:14:37.528 [2024-12-11 09:52:47.061364] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:37.528 Available Spare Threshold: 0% 00:14:37.528 Life Percentage Used: 0% 00:14:37.528 Data Units Read: 0 00:14:37.528 Data Units Written: 0 00:14:37.528 Host Read Commands: 0 00:14:37.528 Host Write Commands: 0 00:14:37.528 Controller Busy Time: 0 minutes 00:14:37.528 Power Cycles: 0 00:14:37.528 Power On Hours: 0 hours 00:14:37.528 Unsafe Shutdowns: 0 00:14:37.528 Unrecoverable Media Errors: 0 00:14:37.528 Lifetime Error Log Entries: 0 00:14:37.528 Warning Temperature
Time: 0 minutes 00:14:37.528 Critical Temperature Time: 0 minutes 00:14:37.528 00:14:37.528 Number of Queues 00:14:37.528 ================ 00:14:37.528 Number of I/O Submission Queues: 127 00:14:37.528 Number of I/O Completion Queues: 127 00:14:37.528 00:14:37.528 Active Namespaces 00:14:37.528 ================= 00:14:37.528 Namespace ID:1 00:14:37.528 Error Recovery Timeout: Unlimited 00:14:37.528 Command Set Identifier: NVM (00h) 00:14:37.528 Deallocate: Supported 00:14:37.528 Deallocated/Unwritten Error: Not Supported 00:14:37.528 Deallocated Read Value: Unknown 00:14:37.528 Deallocate in Write Zeroes: Not Supported 00:14:37.528 Deallocated Guard Field: 0xFFFF 00:14:37.528 Flush: Supported 00:14:37.528 Reservation: Supported 00:14:37.528 Namespace Sharing Capabilities: Multiple Controllers 00:14:37.528 Size (in LBAs): 131072 (0GiB) 00:14:37.528 Capacity (in LBAs): 131072 (0GiB) 00:14:37.528 Utilization (in LBAs): 131072 (0GiB) 00:14:37.528 NGUID: 357EB7147FA347D5822493E266F1C3F1 00:14:37.528 UUID: 357eb714-7fa3-47d5-8224-93e266f1c3f1 00:14:37.528 Thin Provisioning: Not Supported 00:14:37.528 Per-NS Atomic Units: Yes 00:14:37.528 Atomic Boundary Size (Normal): 0 00:14:37.528 Atomic Boundary Size (PFail): 0 00:14:37.528 Atomic Boundary Offset: 0 00:14:37.528 Maximum Single Source Range Length: 65535 00:14:37.528 Maximum Copy Length: 65535 00:14:37.528 Maximum Source Range Count: 1 00:14:37.528 NGUID/EUI64 Never Reused: No 00:14:37.528 Namespace Write Protected: No 00:14:37.528 Number of LBA Formats: 1 00:14:37.528 Current LBA Format: LBA Format #00 00:14:37.528 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:37.528 00:14:37.528 09:52:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:37.786 [2024-12-11 09:52:47.290473] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:43.053 Initializing NVMe Controllers 00:14:43.053 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:43.053 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:43.053 Initialization complete. Launching workers. 
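The read and write passes in this part of the run are the same spdk_nvme_perf invocation with only the workload flag changed; the read-pass latency summary is printed immediately below. A minimal sketch of the pair, with $rootdir again a stand-in for the checkout path used in this run:

# queue depth 128, 4 KiB I/Os, 5 seconds, on the core selected by mask 0x2
$rootdir/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
# identical settings, write workload
$rootdir/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2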
00:14:43.053 ======================================================== 00:14:43.053 Latency(us) 00:14:43.053 Device Information : IOPS MiB/s Average min max 00:14:43.053 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39976.94 156.16 3201.68 967.87 8581.64 00:14:43.053 ======================================================== 00:14:43.053 Total : 39976.94 156.16 3201.68 967.87 8581.64 00:14:43.053 00:14:43.053 [2024-12-11 09:52:52.395494] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:43.053 09:52:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:43.311 [2024-12-11 09:52:52.630181] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:48.579 Initializing NVMe Controllers 00:14:48.579 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:48.579 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:48.579 Initialization complete. Launching workers. 00:14:48.579 ======================================================== 00:14:48.579 Latency(us) 00:14:48.579 Device Information : IOPS MiB/s Average min max 00:14:48.579 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39970.60 156.14 3202.82 995.33 7207.92 00:14:48.579 ======================================================== 00:14:48.579 Total : 39970.60 156.14 3202.82 995.33 7207.92 00:14:48.579 00:14:48.579 [2024-12-11 09:52:57.654860] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:48.579 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:48.579 [2024-12-11 09:52:57.869130] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:53.849 [2024-12-11 09:53:03.003318] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:53.849 Initializing NVMe Controllers 00:14:53.849 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:53.849 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:53.849 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:14:53.849 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:14:53.849 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:14:53.849 Initialization complete. Launching workers. 
00:14:53.849 Starting thread on core 2 00:14:53.849 Starting thread on core 3 00:14:53.849 Starting thread on core 1 00:14:53.849 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:14:53.849 [2024-12-11 09:53:03.298629] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:57.136 [2024-12-11 09:53:06.360448] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:57.136 Initializing NVMe Controllers 00:14:57.136 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:57.136 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:57.136 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:14:57.136 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:14:57.136 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:14:57.136 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:14:57.136 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:57.136 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:57.136 Initialization complete. Launching workers. 00:14:57.136 Starting thread on core 1 with urgent priority queue 00:14:57.136 Starting thread on core 2 with urgent priority queue 00:14:57.136 Starting thread on core 3 with urgent priority queue 00:14:57.136 Starting thread on core 0 with urgent priority queue 00:14:57.136 SPDK bdev Controller (SPDK2 ) core 0: 8361.33 IO/s 11.96 secs/100000 ios 00:14:57.136 SPDK bdev Controller (SPDK2 ) core 1: 8046.67 IO/s 12.43 secs/100000 ios 00:14:57.136 SPDK bdev Controller (SPDK2 ) core 2: 7853.67 IO/s 12.73 secs/100000 ios 00:14:57.136 SPDK bdev Controller (SPDK2 ) core 3: 10632.67 IO/s 9.40 secs/100000 ios 00:14:57.136 ======================================================== 00:14:57.136 00:14:57.136 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:57.136 [2024-12-11 09:53:06.654672] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:57.136 Initializing NVMe Controllers 00:14:57.136 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:57.136 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:57.136 Namespace ID: 1 size: 0GB 00:14:57.136 Initialization complete. 00:14:57.136 INFO: using host memory buffer for IO 00:14:57.136 Hello world! 
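Before the hello_world teardown notice below, a note on the arbitration pass further up: the example echoes the full configuration it expands from the short command line. Rerun standalone it would look like the following sketch ($rootdir once more standing for the checkout path; the expanded option line is copied from the run's own output rather than derived):

$rootdir/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g
# reported by the tool itself as equivalent to:
#   arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1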
00:14:57.136 [2024-12-11 09:53:06.664755] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:57.136 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:57.394 [2024-12-11 09:53:06.959613] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:58.770 Initializing NVMe Controllers 00:14:58.770 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:58.770 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:58.770 Initialization complete. Launching workers. 00:14:58.770 submit (in ns) avg, min, max = 7018.7, 3131.4, 3999238.1 00:14:58.770 complete (in ns) avg, min, max = 22035.2, 1744.8, 3998600.0 00:14:58.770 00:14:58.770 Submit histogram 00:14:58.770 ================ 00:14:58.770 Range in us Cumulative Count 00:14:58.770 3.124 - 3.139: 0.0122% ( 2) 00:14:58.770 3.154 - 3.170: 0.0304% ( 3) 00:14:58.770 3.170 - 3.185: 0.0487% ( 3) 00:14:58.770 3.185 - 3.200: 0.3104% ( 43) 00:14:58.770 3.200 - 3.215: 2.0144% ( 280) 00:14:58.770 3.215 - 3.230: 6.7247% ( 774) 00:14:58.770 3.230 - 3.246: 11.8914% ( 849) 00:14:58.770 3.246 - 3.261: 17.9345% ( 993) 00:14:58.770 3.261 - 3.276: 24.9026% ( 1145) 00:14:58.770 3.276 - 3.291: 31.1831% ( 1032) 00:14:58.770 3.291 - 3.307: 36.9219% ( 943) 00:14:58.770 3.307 - 3.322: 42.4111% ( 902) 00:14:58.770 3.322 - 3.337: 47.6874% ( 867) 00:14:58.770 3.337 - 3.352: 52.8481% ( 848) 00:14:58.770 3.352 - 3.368: 57.8323% ( 819) 00:14:58.770 3.368 - 3.383: 66.4679% ( 1419) 00:14:58.770 3.383 - 3.398: 72.6631% ( 1018) 00:14:58.770 3.398 - 3.413: 77.7325% ( 833) 00:14:58.770 3.413 - 3.429: 82.0898% ( 716) 00:14:58.770 3.429 - 3.444: 84.7980% ( 445) 00:14:58.770 3.444 - 3.459: 86.5628% ( 290) 00:14:58.770 3.459 - 3.474: 87.3539% ( 130) 00:14:58.770 3.474 - 3.490: 87.7556% ( 66) 00:14:58.770 3.490 - 3.505: 88.1451% ( 64) 00:14:58.770 3.505 - 3.520: 88.7415% ( 98) 00:14:58.770 3.520 - 3.535: 89.3744% ( 104) 00:14:58.770 3.535 - 3.550: 90.2142% ( 138) 00:14:58.770 3.550 - 3.566: 91.2184% ( 165) 00:14:58.770 3.566 - 3.581: 92.1738% ( 157) 00:14:58.770 3.581 - 3.596: 92.9649% ( 130) 00:14:58.770 3.596 - 3.611: 93.7804% ( 134) 00:14:58.770 3.611 - 3.627: 94.6811% ( 148) 00:14:58.770 3.627 - 3.642: 95.6305% ( 156) 00:14:58.770 3.642 - 3.657: 96.4399% ( 133) 00:14:58.770 3.657 - 3.672: 97.2006% ( 125) 00:14:58.770 3.672 - 3.688: 97.8092% ( 100) 00:14:58.770 3.688 - 3.703: 98.1986% ( 64) 00:14:58.770 3.703 - 3.718: 98.5516% ( 58) 00:14:58.770 3.718 - 3.733: 98.9350% ( 63) 00:14:58.770 3.733 - 3.749: 99.1054% ( 28) 00:14:58.770 3.749 - 3.764: 99.2880% ( 30) 00:14:58.770 3.764 - 3.779: 99.3853% ( 16) 00:14:58.770 3.779 - 3.794: 99.4219% ( 6) 00:14:58.770 3.794 - 3.810: 99.4766% ( 9) 00:14:58.770 3.810 - 3.825: 99.5071% ( 5) 00:14:58.770 3.825 - 3.840: 99.5192% ( 2) 00:14:58.770 3.840 - 3.855: 99.5253% ( 1) 00:14:58.770 3.855 - 3.870: 99.5314% ( 1) 00:14:58.770 3.901 - 3.931: 99.5375% ( 1) 00:14:58.770 3.931 - 3.962: 99.5436% ( 1) 00:14:58.770 3.962 - 3.992: 99.5497% ( 1) 00:14:58.770 4.084 - 4.114: 99.5618% ( 2) 00:14:58.770 4.267 - 4.297: 99.5679% ( 1) 00:14:58.770 4.846 - 4.876: 99.5740% ( 1) 00:14:58.770 4.876 - 4.907: 99.5801% ( 1) 00:14:58.770 4.907 - 4.937: 99.5862% ( 1) 00:14:58.770 4.937 - 4.968: 99.5923% ( 1) 00:14:58.770 
4.968 - 4.998: 99.5983% ( 1) 00:14:58.770 5.059 - 5.090: 99.6044% ( 1) 00:14:58.770 5.090 - 5.120: 99.6105% ( 1) 00:14:58.770 5.211 - 5.242: 99.6166% ( 1) 00:14:58.770 5.272 - 5.303: 99.6349% ( 3) 00:14:58.770 5.303 - 5.333: 99.6409% ( 1) 00:14:58.770 5.333 - 5.364: 99.6470% ( 1) 00:14:58.770 5.364 - 5.394: 99.6531% ( 1) 00:14:58.770 5.394 - 5.425: 99.6592% ( 1) 00:14:58.770 5.486 - 5.516: 99.6714% ( 2) 00:14:58.770 5.516 - 5.547: 99.6835% ( 2) 00:14:58.770 5.577 - 5.608: 99.6896% ( 1) 00:14:58.770 5.608 - 5.638: 99.6957% ( 1) 00:14:58.770 5.638 - 5.669: 99.7018% ( 1) 00:14:58.770 5.669 - 5.699: 99.7079% ( 1) 00:14:58.770 5.730 - 5.760: 99.7140% ( 1) 00:14:58.770 5.760 - 5.790: 99.7201% ( 1) 00:14:58.770 5.790 - 5.821: 99.7261% ( 1) 00:14:58.770 5.821 - 5.851: 99.7383% ( 2) 00:14:58.770 5.882 - 5.912: 99.7505% ( 2) 00:14:58.770 5.912 - 5.943: 99.7566% ( 1) 00:14:58.770 5.973 - 6.004: 99.7627% ( 1) 00:14:58.770 6.065 - 6.095: 99.7687% ( 1) 00:14:58.770 6.278 - 6.309: 99.7748% ( 1) 00:14:58.770 6.400 - 6.430: 99.7870% ( 2) 00:14:58.770 6.461 - 6.491: 99.7931% ( 1) 00:14:58.770 6.491 - 6.522: 99.7992% ( 1) 00:14:58.770 6.522 - 6.552: 99.8053% ( 1) 00:14:58.770 6.552 - 6.583: 99.8113% ( 1) 00:14:58.770 6.674 - 6.705: 99.8174% ( 1) 00:14:58.770 6.735 - 6.766: 99.8235% ( 1) 00:14:58.770 6.796 - 6.827: 99.8296% ( 1) 00:14:58.770 6.918 - 6.949: 99.8357% ( 1) 00:14:58.770 7.040 - 7.070: 99.8418% ( 1) 00:14:58.770 7.131 - 7.162: 99.8479% ( 1) 00:14:58.770 7.223 - 7.253: 99.8600% ( 2) 00:14:58.770 7.314 - 7.345: 99.8661% ( 1) 00:14:58.770 7.710 - 7.741: 99.8722% ( 1) 00:14:58.770 8.533 - 8.594: 99.8783% ( 1) 00:14:58.770 8.655 - 8.716: 99.8844% ( 1) 00:14:58.770 13.653 - 13.714: 99.8905% ( 1) 00:14:58.770 19.139 - 19.261: 99.9026% ( 2) 00:14:58.770 19.505 - 19.627: 99.9087% ( 1) 00:14:58.770 3994.575 - 4025.783: 100.0000% ( 15) 00:14:58.770 00:14:58.770 Complete histogram 00:14:58.770 ================== 00:14:58.770 Range in us Cumulative Count 00:14:58.770 1.745 - 1.752: 0.0122% ( 2) 00:14:58.770 1.752 - 1.760: 0.0365% ( 4) 00:14:58.770 1.760 - 1.768: 0.0487% ( 2) 00:14:58.771 1.775 - 1.783: 0.1278% ( 13) 00:14:58.771 1.783 - 1.790: 1.1137% ( 162) 00:14:58.771 1.790 - 1.798: 4.1626% ( 501) 00:14:58.771 1.798 - 1.806: 8.3739% ( 692) 00:14:58.771 1.806 - 1.813: 11.1064% ( 449) 00:14:58.771 1.813 - 1.821: 12.4026% ( 213) 00:14:58.771 1.821 - 1.829: 14.2283% ( 300) 00:14:58.771 1.829 - 1.836: 22.3710% ( 1338) 00:14:58.771 1.836 - 1.844: 44.1760% ( 3583) 00:14:58.771 1.844 - 1.851: 69.2734% ( 4124) 00:14:58.771 1.851 - 1.859: 82.7349% ( 2212) 00:14:58.771 1.859 - 1.867: 89.1796% ( 1059) 00:14:58.771 1.867 - 1.874: 92.5268% ( 550) 00:14:58.771 1.874 - 1.882: 94.7907% ( 372) 00:14:58.771 1.882 - 1.890: 95.8739% ( 178) 00:14:58.771 1.890 - 1.897: 96.3425% ( 77) 00:14:58.771 1.897 - 1.905: 96.7076% ( 60) 00:14:58.771 1.905 - 1.912: 97.1945% ( 80) 00:14:58.771 1.912 - 1.920: 97.7240% ( 87) 00:14:58.771 1.920 - 1.928: 98.1317% ( 67) 00:14:58.771 1.928 - 1.935: 98.5273% ( 65) 00:14:58.771 1.935 - 1.943: 98.7342% ( 34) 00:14:58.771 1.943 - 1.950: 98.8376% ( 17) 00:14:58.771 1.950 - 1.966: 99.0385% ( 33) 00:14:58.771 1.966 - 1.981: 99.1237% ( 14) 00:14:58.771 1.981 - 1.996: 99.1297% ( 1) 00:14:58.771 1.996 - 2.011: 99.1663% ( 6) 00:14:58.771 2.042 - 2.057: 99.2089% ( 7) 00:14:58.771 2.057 - 2.072: 99.2149% ( 1) 00:14:58.771 2.072 - 2.088: 99.2271% ( 2) 00:14:58.771 2.088 - 2.103: 99.2332% ( 1) 00:14:58.771 2.103 - 2.118: 99.2454% ( 2) 00:14:58.771 2.118 - 2.133: 99.2575% ( 2) 00:14:58.771 2.133 - 2.149: 99.2636% 
( 1) 00:14:58.771 2.149 - 2.164: 99.2758% ( 2) 00:14:58.771 2.179 - 2.194: 99.2819% ( 1) 00:14:58.771 2.210 - 2.225: 99.2880% ( 1) 00:14:58.771 2.225 - 2.240: 99.3062% ( 3) 00:14:58.771 2.270 - 2.286: 99.3123% ( 1) 00:14:58.771 2.347 - 2.362: 99.3184% ( 1) 00:14:58.771 3.307 - 3.322: 99.3245% ( 1) 00:14:58.771 3.459 - 3.474: 99.3306% ( 1) 00:14:58.771 3.490 - 3.505: 99.3367% ( 1) 00:14:58.771 3.566 - 3.581: 99.3427% ( 1) 00:14:58.771 3.810 - 3.825: 99.3488% ( 1) 00:14:58.771 3.825 - 3.840: 99.3549% ( 1) 00:14:58.771 3.931 - 3.962: 99.3610% ( 1) 00:14:58.771 3.992 - 4.023: 99.3671% ( 1) 00:14:58.771 4.053 - 4.084: 99.3732% ( 1) 00:14:58.771 4.084 - 4.114: 99.3793% ( 1) 00:14:58.771 4.114 - 4.145: 99.3853% ( 1) 00:14:58.771 4.145 - 4.175: 99.3914% ( 1) 00:14:58.771 4.236 - 4.267: 99.3975% ( 1) 00:14:58.771 4.297 - 4.328: 99.4036% ( 1) 00:14:58.771 4.358 - 4.389: 99.4097% ( 1) 00:14:58.771 4.510 - 4.541: 99.4158% ( 1) 00:14:58.771 4.724 - 4.754: 99.4219% ( 1) 00:14:58.771 4.754 - 4.785: 99.4279% ( 1) 00:14:58.771 4.937 - 4.968: 99.4340% ( 1) 00:14:58.771 5.120 - 5.150: 99.4401% ( 1) 00:14:58.771 [2024-12-11 09:53:08.053233] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:58.771 5.333 - 5.364: 99.4462% ( 1) 00:14:58.771 5.608 - 5.638: 99.4523% ( 1) 00:14:58.771 5.760 - 5.790: 99.4584% ( 1) 00:14:58.771 6.309 - 6.339: 99.4645% ( 1) 00:14:58.771 6.644 - 6.674: 99.4705% ( 1) 00:14:58.771 8.290 - 8.350: 99.4766% ( 1) 00:14:58.771 13.897 - 13.958: 99.4827% ( 1) 00:14:58.771 39.010 - 39.253: 99.4888% ( 1) 00:14:58.771 150.187 - 151.162: 99.4949% ( 1) 00:14:58.771 3978.971 - 3994.575: 99.5071% ( 2) 00:14:58.771 3994.575 - 4025.783: 100.0000% ( 81) 00:14:58.771 00:14:58.771 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:14:58.771 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:58.771 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:14:58.771 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:14:58.771 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:58.771 [ 00:14:58.771 { 00:14:58.771 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:58.771 "subtype": "Discovery", 00:14:58.771 "listen_addresses": [], 00:14:58.771 "allow_any_host": true, 00:14:58.771 "hosts": [] 00:14:58.771 }, 00:14:58.771 { 00:14:58.771 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:58.771 "subtype": "NVMe", 00:14:58.771 "listen_addresses": [ 00:14:58.771 { 00:14:58.771 "trtype": "VFIOUSER", 00:14:58.771 "adrfam": "IPv4", 00:14:58.771 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:58.771 "trsvcid": "0" 00:14:58.771 } 00:14:58.771 ], 00:14:58.771 "allow_any_host": true, 00:14:58.771 "hosts": [], 00:14:58.771 "serial_number": "SPDK1", 00:14:58.771 "model_number": "SPDK bdev Controller", 00:14:58.771 "max_namespaces": 32, 00:14:58.771 "min_cntlid": 1, 00:14:58.771 "max_cntlid": 65519, 00:14:58.771 "namespaces": [ 00:14:58.771 { 00:14:58.771 "nsid": 1, 00:14:58.771 "bdev_name": "Malloc1", 00:14:58.771 "name": "Malloc1", 00:14:58.771 "nguid": "B9313B44F1AF4D46816AE2F92B34DC71",
00:14:58.771 "uuid": "b9313b44-f1af-4d46-816a-e2f92b34dc71" 00:14:58.771 }, 00:14:58.771 { 00:14:58.771 "nsid": 2, 00:14:58.771 "bdev_name": "Malloc3", 00:14:58.771 "name": "Malloc3", 00:14:58.771 "nguid": "F054221D37AE42D9937DDDFED8ECA8C1", 00:14:58.771 "uuid": "f054221d-37ae-42d9-937d-ddfed8eca8c1" 00:14:58.771 } 00:14:58.771 ] 00:14:58.771 }, 00:14:58.771 { 00:14:58.771 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:58.771 "subtype": "NVMe", 00:14:58.771 "listen_addresses": [ 00:14:58.771 { 00:14:58.771 "trtype": "VFIOUSER", 00:14:58.771 "adrfam": "IPv4", 00:14:58.771 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:58.771 "trsvcid": "0" 00:14:58.771 } 00:14:58.771 ], 00:14:58.771 "allow_any_host": true, 00:14:58.771 "hosts": [], 00:14:58.771 "serial_number": "SPDK2", 00:14:58.771 "model_number": "SPDK bdev Controller", 00:14:58.771 "max_namespaces": 32, 00:14:58.771 "min_cntlid": 1, 00:14:58.771 "max_cntlid": 65519, 00:14:58.771 "namespaces": [ 00:14:58.771 { 00:14:58.771 "nsid": 1, 00:14:58.771 "bdev_name": "Malloc2", 00:14:58.771 "name": "Malloc2", 00:14:58.771 "nguid": "357EB7147FA347D5822493E266F1C3F1", 00:14:58.771 "uuid": "357eb714-7fa3-47d5-8224-93e266f1c3f1" 00:14:58.771 } 00:14:58.771 ] 00:14:58.771 } 00:14:58.771 ] 00:14:58.771 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:58.771 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=48494 00:14:58.771 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:58.771 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:14:58.771 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:14:58.771 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:58.771 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:58.771 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:14:58.771 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:58.771 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:14:59.030 [2024-12-11 09:53:08.472663] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:59.030 Malloc4 00:14:59.030 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:14:59.288 [2024-12-11 09:53:08.701372] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:59.288 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:59.288 Asynchronous Event Request test 00:14:59.289 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:59.289 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:59.289 Registering asynchronous event callbacks... 00:14:59.289 Starting namespace attribute notice tests for all controllers... 00:14:59.289 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:59.289 aer_cb - Changed Namespace 00:14:59.289 Cleaning up... 00:14:59.547 [ 00:14:59.547 { 00:14:59.547 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:59.547 "subtype": "Discovery", 00:14:59.547 "listen_addresses": [], 00:14:59.547 "allow_any_host": true, 00:14:59.547 "hosts": [] 00:14:59.547 }, 00:14:59.547 { 00:14:59.547 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:59.547 "subtype": "NVMe", 00:14:59.547 "listen_addresses": [ 00:14:59.547 { 00:14:59.547 "trtype": "VFIOUSER", 00:14:59.547 "adrfam": "IPv4", 00:14:59.547 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:59.547 "trsvcid": "0" 00:14:59.547 } 00:14:59.547 ], 00:14:59.547 "allow_any_host": true, 00:14:59.547 "hosts": [], 00:14:59.547 "serial_number": "SPDK1", 00:14:59.547 "model_number": "SPDK bdev Controller", 00:14:59.547 "max_namespaces": 32, 00:14:59.547 "min_cntlid": 1, 00:14:59.547 "max_cntlid": 65519, 00:14:59.547 "namespaces": [ 00:14:59.547 { 00:14:59.547 "nsid": 1, 00:14:59.547 "bdev_name": "Malloc1", 00:14:59.547 "name": "Malloc1", 00:14:59.547 "nguid": "B9313B44F1AF4D46816AE2F92B34DC71", 00:14:59.547 "uuid": "b9313b44-f1af-4d46-816a-e2f92b34dc71" 00:14:59.547 }, 00:14:59.547 { 00:14:59.547 "nsid": 2, 00:14:59.547 "bdev_name": "Malloc3", 00:14:59.547 "name": "Malloc3", 00:14:59.547 "nguid": "F054221D37AE42D9937DDDFED8ECA8C1", 00:14:59.547 "uuid": "f054221d-37ae-42d9-937d-ddfed8eca8c1" 00:14:59.547 } 00:14:59.547 ] 00:14:59.547 }, 00:14:59.547 { 00:14:59.547 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:59.547 "subtype": "NVMe", 00:14:59.547 "listen_addresses": [ 00:14:59.547 { 00:14:59.547 "trtype": "VFIOUSER", 00:14:59.547 "adrfam": "IPv4", 00:14:59.547 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:59.547 "trsvcid": "0" 00:14:59.547 } 00:14:59.547 ], 00:14:59.547 "allow_any_host": true, 00:14:59.547 "hosts": [], 00:14:59.547 "serial_number": "SPDK2", 00:14:59.547 "model_number": "SPDK bdev 
Controller", 00:14:59.547 "max_namespaces": 32, 00:14:59.547 "min_cntlid": 1, 00:14:59.547 "max_cntlid": 65519, 00:14:59.547 "namespaces": [ 00:14:59.547 { 00:14:59.547 "nsid": 1, 00:14:59.547 "bdev_name": "Malloc2", 00:14:59.547 "name": "Malloc2", 00:14:59.547 "nguid": "357EB7147FA347D5822493E266F1C3F1", 00:14:59.547 "uuid": "357eb714-7fa3-47d5-8224-93e266f1c3f1" 00:14:59.547 }, 00:14:59.547 { 00:14:59.547 "nsid": 2, 00:14:59.547 "bdev_name": "Malloc4", 00:14:59.547 "name": "Malloc4", 00:14:59.547 "nguid": "D76DC48D28EE40C381952B0DEF88708C", 00:14:59.547 "uuid": "d76dc48d-28ee-40c3-8195-2b0def88708c" 00:14:59.547 } 00:14:59.547 ] 00:14:59.547 } 00:14:59.547 ] 00:14:59.547 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 48494 00:14:59.547 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:14:59.547 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 40776 00:14:59.547 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 40776 ']' 00:14:59.547 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 40776 00:14:59.547 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:14:59.547 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:59.547 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 40776 00:14:59.547 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:59.547 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:59.547 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 40776' 00:14:59.547 killing process with pid 40776 00:14:59.548 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 40776 00:14:59.548 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 40776 00:14:59.806 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:59.807 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:59.807 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:14:59.807 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:14:59.807 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:14:59.807 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=48519 00:14:59.807 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 48519' 00:14:59.807 Process pid: 48519 00:14:59.807 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:14:59.807 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess 
$nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:59.807 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 48519 00:14:59.807 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 48519 ']' 00:14:59.807 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:59.807 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:59.807 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:59.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:59.807 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:59.807 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:59.807 [2024-12-11 09:53:09.262334] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:14:59.807 [2024-12-11 09:53:09.263172] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:14:59.807 [2024-12-11 09:53:09.263209] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:59.807 [2024-12-11 09:53:09.341369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:59.807 [2024-12-11 09:53:09.378164] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:59.807 [2024-12-11 09:53:09.378201] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:59.807 [2024-12-11 09:53:09.378209] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:59.807 [2024-12-11 09:53:09.378222] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:59.807 [2024-12-11 09:53:09.378228] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:59.807 [2024-12-11 09:53:09.379752] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:14:59.807 [2024-12-11 09:53:09.379866] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:14:59.807 [2024-12-11 09:53:09.379973] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.807 [2024-12-11 09:53:09.379974] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:15:00.066 [2024-12-11 09:53:09.449052] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:00.066 [2024-12-11 09:53:09.449703] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:00.066 [2024-12-11 09:53:09.450084] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:15:00.066 [2024-12-11 09:53:09.450461] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
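The restart traced above brings the same VFIO-user fixture up in interrupt mode: nvmf_tgt is launched with --interrupt-mode across cores 0-3, and each poll-group thread is switched to interrupt mode before any transport exists. Condensed into a standalone sketch (a sketch only, assuming an SPDK checkout as the working directory and rpc.py on its default /var/tmp/spdk.sock socket; the sleep stands in for the script's waitforlisten):

  # interrupt-mode target, then the VFIOUSER transport created with the
  # '-M -I' transport_args string the script passes for this phase
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
  nvmfpid=$!
  sleep 1
  scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
  mkdir -p /var/run/vfio-user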
00:15:00.066 [2024-12-11 09:53:09.450509] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:15:00.066 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:00.066 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:15:00.066 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:01.004 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:01.261 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:01.261 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:01.261 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:01.261 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:01.261 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:01.520 Malloc1 00:15:01.520 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:01.779 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:01.779 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:02.037 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:02.037 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:02.037 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:02.295 Malloc2 00:15:02.295 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:02.553 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:02.553 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:02.812 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:02.812 09:53:12 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 48519 00:15:02.812 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 48519 ']' 00:15:02.812 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 48519 00:15:02.812 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:15:02.812 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:02.812 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 48519 00:15:02.812 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:02.812 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:02.812 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 48519' 00:15:02.812 killing process with pid 48519 00:15:02.812 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 48519 00:15:02.812 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 48519 00:15:03.071 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:03.071 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:03.071 00:15:03.071 real 0m50.841s 00:15:03.071 user 3m16.689s 00:15:03.071 sys 0m3.291s 00:15:03.071 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:03.071 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:03.071 ************************************ 00:15:03.071 END TEST nvmf_vfio_user 00:15:03.071 ************************************ 00:15:03.071 09:53:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:03.071 09:53:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:03.071 09:53:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:03.071 09:53:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:03.071 ************************************ 00:15:03.071 START TEST nvmf_vfio_user_nvme_compliance 00:15:03.071 ************************************ 00:15:03.071 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:03.331 * Looking for test storage... 
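Both teardowns above go through the killprocess helper from autotest_common.sh. Reconstructed from the xtrace alone, its visible shape is roughly the simplified sketch below (not the verbatim helper; the sudo special case in particular is abbreviated):

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1
      kill -0 "$pid" 2>/dev/null || return 1      # bail out if the pid is already gone
      if [ "$(uname)" = Linux ]; then
          local process_name
          process_name=$(ps --no-headers -o comm= "$pid")
          [ "$process_name" = sudo ] && return 1  # guard seen in the trace, simplified here
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                 # reap our own child, collect its status
  }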
00:15:03.331 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:03.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:03.331 --rc genhtml_branch_coverage=1 00:15:03.331 --rc genhtml_function_coverage=1 00:15:03.331 --rc genhtml_legend=1 00:15:03.331 --rc geninfo_all_blocks=1 00:15:03.331 --rc geninfo_unexecuted_blocks=1 00:15:03.331 00:15:03.331 ' 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:03.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:03.331 --rc genhtml_branch_coverage=1 00:15:03.331 --rc genhtml_function_coverage=1 00:15:03.331 --rc genhtml_legend=1 00:15:03.331 --rc geninfo_all_blocks=1 00:15:03.331 --rc geninfo_unexecuted_blocks=1 00:15:03.331 00:15:03.331 ' 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:03.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:03.331 --rc genhtml_branch_coverage=1 00:15:03.331 --rc genhtml_function_coverage=1 00:15:03.331 --rc genhtml_legend=1 00:15:03.331 --rc geninfo_all_blocks=1 00:15:03.331 --rc geninfo_unexecuted_blocks=1 00:15:03.331 00:15:03.331 ' 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:03.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:03.331 --rc genhtml_branch_coverage=1 00:15:03.331 --rc genhtml_function_coverage=1 00:15:03.331 --rc genhtml_legend=1 00:15:03.331 --rc geninfo_all_blocks=1 00:15:03.331 --rc 
geninfo_unexecuted_blocks=1 00:15:03.331 00:15:03.331 ' 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:03.331 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:03.332 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:03.332 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.332 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.332 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.332 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:03.332 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.332 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:15:03.332 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:03.332 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:03.332 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:03.332 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:03.332 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:15:03.332 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:03.332 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:03.332 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:03.332 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:03.332 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:03.332 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:03.332 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:03.332 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:03.332 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:03.332 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:03.332 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=49268 00:15:03.332 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 49268' 00:15:03.332 Process pid: 49268 00:15:03.332 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:03.332 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:03.332 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 49268 00:15:03.332 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 49268 ']' 00:15:03.332 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:03.332 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:03.332 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:03.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:03.332 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:03.332 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:03.332 [2024-12-11 09:53:12.856344] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
00:15:03.332 [2024-12-11 09:53:12.856395] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:03.591 [2024-12-11 09:53:12.937945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:03.591 [2024-12-11 09:53:12.977918] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:03.591 [2024-12-11 09:53:12.977955] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:03.591 [2024-12-11 09:53:12.977961] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:03.591 [2024-12-11 09:53:12.977967] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:03.591 [2024-12-11 09:53:12.977972] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:03.591 [2024-12-11 09:53:12.979345] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:03.591 [2024-12-11 09:53:12.979454] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.591 [2024-12-11 09:53:12.979456] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:15:03.591 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:03.591 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:15:03.591 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:04.526 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:04.526 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:04.526 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:04.526 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.526 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:04.526 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.526 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:04.526 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:04.526 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.526 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:04.785 malloc0 00:15:04.785 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.785 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:04.785 09:53:14 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.785 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:04.785 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.785 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:04.785 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.785 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:04.785 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.785 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:04.785 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.785 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:04.785 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.785 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:04.785 00:15:04.785 00:15:04.785 CUnit - A unit testing framework for C - Version 2.1-3 00:15:04.785 http://cunit.sourceforge.net/ 00:15:04.785 00:15:04.785 00:15:04.785 Suite: nvme_compliance 00:15:04.785 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-11 09:53:14.325704] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:04.785 [2024-12-11 09:53:14.327052] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:04.785 [2024-12-11 09:53:14.327067] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:04.785 [2024-12-11 09:53:14.327073] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:04.785 [2024-12-11 09:53:14.328726] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:04.785 passed 00:15:05.043 Test: admin_identify_ctrlr_verify_fused ...[2024-12-11 09:53:14.407274] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:05.043 [2024-12-11 09:53:14.410303] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:05.043 passed 00:15:05.043 Test: admin_identify_ns ...[2024-12-11 09:53:14.494229] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:05.043 [2024-12-11 09:53:14.555227] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:05.043 [2024-12-11 09:53:14.563227] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:05.043 [2024-12-11 09:53:14.584308] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:15:05.043 passed 00:15:05.302 Test: admin_get_features_mandatory_features ...[2024-12-11 09:53:14.660155] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:05.302 [2024-12-11 09:53:14.663181] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:05.302 passed 00:15:05.302 Test: admin_get_features_optional_features ...[2024-12-11 09:53:14.738669] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:05.302 [2024-12-11 09:53:14.741689] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:05.302 passed 00:15:05.302 Test: admin_set_features_number_of_queues ...[2024-12-11 09:53:14.821941] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:05.560 [2024-12-11 09:53:14.922315] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:05.560 passed 00:15:05.560 Test: admin_get_log_page_mandatory_logs ...[2024-12-11 09:53:14.997973] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:05.560 [2024-12-11 09:53:15.000998] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:05.560 passed 00:15:05.560 Test: admin_get_log_page_with_lpo ...[2024-12-11 09:53:15.076428] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:05.819 [2024-12-11 09:53:15.148233] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:05.819 [2024-12-11 09:53:15.161303] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:05.819 passed 00:15:05.819 Test: fabric_property_get ...[2024-12-11 09:53:15.232851] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:05.819 [2024-12-11 09:53:15.234080] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:05.819 [2024-12-11 09:53:15.237872] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:05.819 passed 00:15:05.819 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-11 09:53:15.313381] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:05.819 [2024-12-11 09:53:15.314617] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:05.819 [2024-12-11 09:53:15.316400] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:05.819 passed 00:15:05.819 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-11 09:53:15.392403] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:06.077 [2024-12-11 09:53:15.480233] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:06.077 [2024-12-11 09:53:15.496223] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:06.077 [2024-12-11 09:53:15.501312] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:06.077 passed 00:15:06.077 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-11 09:53:15.575126] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:06.077 [2024-12-11 09:53:15.576371] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:06.077 [2024-12-11 09:53:15.580151] vfio_user.c:2835:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:15:06.077 passed 00:15:06.336 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-11 09:53:15.658573] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:06.336 [2024-12-11 09:53:15.732236] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:06.336 [2024-12-11 09:53:15.756226] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:06.336 [2024-12-11 09:53:15.761303] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:06.336 passed 00:15:06.336 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-11 09:53:15.836898] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:06.336 [2024-12-11 09:53:15.838131] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:06.336 [2024-12-11 09:53:15.838155] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:06.336 [2024-12-11 09:53:15.839917] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:06.336 passed 00:15:06.595 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-11 09:53:15.920534] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:06.595 [2024-12-11 09:53:16.016228] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:06.595 [2024-12-11 09:53:16.024222] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:06.595 [2024-12-11 09:53:16.032228] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:06.595 [2024-12-11 09:53:16.040223] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:06.595 [2024-12-11 09:53:16.069313] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:06.595 passed 00:15:06.595 Test: admin_create_io_sq_verify_pc ...[2024-12-11 09:53:16.144841] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:06.595 [2024-12-11 09:53:16.161231] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:06.853 [2024-12-11 09:53:16.179018] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:06.853 passed 00:15:06.853 Test: admin_create_io_qp_max_qps ...[2024-12-11 09:53:16.258564] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:07.789 [2024-12-11 09:53:17.361232] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:15:08.356 [2024-12-11 09:53:17.739141] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:08.356 passed 00:15:08.356 Test: admin_create_io_sq_shared_cq ...[2024-12-11 09:53:17.816501] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:08.615 [2024-12-11 09:53:17.952225] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:08.615 [2024-12-11 09:53:17.989290] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:08.615 passed 00:15:08.615 00:15:08.615 Run Summary: Type Total Ran Passed Failed Inactive 00:15:08.615 suites 1 1 n/a 0 0 00:15:08.615 tests 18 18 18 0 0 00:15:08.615 asserts 
360 360 360 0 n/a 00:15:08.615 00:15:08.615 Elapsed time = 1.503 seconds 00:15:08.615 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 49268 00:15:08.615 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 49268 ']' 00:15:08.615 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 49268 00:15:08.615 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:15:08.615 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:08.615 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 49268 00:15:08.615 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:08.615 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:08.615 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 49268' 00:15:08.615 killing process with pid 49268 00:15:08.615 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 49268 00:15:08.615 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 49268 00:15:08.874 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:08.874 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:08.874 00:15:08.874 real 0m5.657s 00:15:08.874 user 0m15.794s 00:15:08.874 sys 0m0.516s 00:15:08.874 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:08.874 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:08.874 ************************************ 00:15:08.874 END TEST nvmf_vfio_user_nvme_compliance 00:15:08.874 ************************************ 00:15:08.874 09:53:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:08.874 09:53:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:08.874 09:53:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:08.874 09:53:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:08.874 ************************************ 00:15:08.874 START TEST nvmf_vfio_user_fuzz 00:15:08.874 ************************************ 00:15:08.874 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:08.874 * Looking for test storage... 
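The 18/18 CUnit results above come from the standalone nvme_compliance binary. Given a fixture like the one sketched earlier, it can be re-run by hand against the same endpoint with the same transport-ID string (path relative to an SPDK build tree):

  test/nvme/compliance/nvme_compliance -g \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'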
00:15:08.874 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:08.874 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:08.874 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:15:08.874 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:09.188 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:09.188 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:09.188 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:09.188 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:09.188 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:09.188 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:09.188 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:09.188 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:09.188 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:15:09.188 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:09.188 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:09.188 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:09.188 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:09.188 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:15:09.188 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:09.188 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:09.188 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:09.188 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:09.188 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:09.188 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:09.188 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:09.188 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:09.188 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:09.188 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:09.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.189 --rc genhtml_branch_coverage=1 00:15:09.189 --rc genhtml_function_coverage=1 00:15:09.189 --rc genhtml_legend=1 00:15:09.189 --rc geninfo_all_blocks=1 00:15:09.189 --rc geninfo_unexecuted_blocks=1 00:15:09.189 00:15:09.189 ' 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:09.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.189 --rc genhtml_branch_coverage=1 00:15:09.189 --rc genhtml_function_coverage=1 00:15:09.189 --rc genhtml_legend=1 00:15:09.189 --rc geninfo_all_blocks=1 00:15:09.189 --rc geninfo_unexecuted_blocks=1 00:15:09.189 00:15:09.189 ' 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:09.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.189 --rc genhtml_branch_coverage=1 00:15:09.189 --rc genhtml_function_coverage=1 00:15:09.189 --rc genhtml_legend=1 00:15:09.189 --rc geninfo_all_blocks=1 00:15:09.189 --rc geninfo_unexecuted_blocks=1 00:15:09.189 00:15:09.189 ' 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:09.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.189 --rc genhtml_branch_coverage=1 00:15:09.189 --rc genhtml_function_coverage=1 00:15:09.189 --rc genhtml_legend=1 00:15:09.189 --rc geninfo_all_blocks=1 00:15:09.189 --rc geninfo_unexecuted_blocks=1 00:15:09.189 00:15:09.189 ' 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:15:09.189 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=50245 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 50245' 00:15:09.189 Process pid: 50245 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 50245 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 50245 ']' 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:09.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
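The `[: : integer expression expected` message above comes from common.sh line 33 expanding to `'[' '' -eq 1 ']'`: test's -eq demands integers on both sides, the variable being checked is empty, so `[` reports the operand error and the condition simply falls through as false. A minimal sketch of the usual guard (the variable name is illustrative, not the one common.sh uses):

# '[' '' -eq 1 ']' fails because -eq needs an integer on both sides.
flag=""                           # empty, as in the failing expansion above
if [ "${flag:-0}" -eq 1 ]; then   # ${flag:-0} substitutes 0 when unset/empty
    echo "flag enabled"
fi
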
00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:09.189 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:10.207 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:10.207 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:15:10.207 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:11.143 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:11.143 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.144 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:11.144 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.144 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:11.144 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:11.144 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.144 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:11.144 malloc0 00:15:11.144 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.144 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:11.144 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.144 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:11.144 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.144 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:11.144 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.144 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:11.144 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.144 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:11.144 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.144 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:11.144 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.144 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
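The rpc_cmd calls just above assemble the entire vfio-user target in five steps: a VFIOUSER transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2021-09.io.spdk:cnode0, its namespace, and a listener whose "address" is the /var/run/vfio-user directory. The same bring-up as one would run it by hand against a live nvmf_tgt, using scripts/rpc.py from an SPDK checkout (the rpc.py path is an assumption; every argument is taken from the log):

rpc=./scripts/rpc.py                          # SPDK RPC client
$rpc nvmf_create_transport -t VFIOUSER        # register the transport
mkdir -p /var/run/vfio-user                   # vfio-user listens on a directory
$rpc bdev_malloc_create 64 512 -b malloc0     # 64 MiB bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
$rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
$rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
    -t VFIOUSER -a /var/run/vfio-user -s 0

The resulting trid string ('trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user') is exactly what nvme_fuzz is handed below, with a 30-second budget (-t 30) and fixed seed 123456 (-S).
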
00:15:11.144 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:15:43.230 Fuzzing completed. Shutting down the fuzz application 00:15:43.230 00:15:43.230 Dumping successful admin opcodes: 00:15:43.230 9, 10, 00:15:43.230 Dumping successful io opcodes: 00:15:43.230 0, 00:15:43.230 NS: 0x20000081ef00 I/O qp, Total commands completed: 1149413, total successful commands: 4525, random_seed: 436812160 00:15:43.230 NS: 0x20000081ef00 admin qp, Total commands completed: 284544, total successful commands: 67, random_seed: 1247812992 00:15:43.230 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:15:43.230 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.230 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:43.230 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.230 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 50245 00:15:43.230 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 50245 ']' 00:15:43.230 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 50245 00:15:43.230 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:15:43.230 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:43.230 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 50245 00:15:43.230 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:43.230 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:43.230 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 50245' 00:15:43.230 killing process with pid 50245 00:15:43.230 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 50245 00:15:43.230 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 50245 00:15:43.230 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:15:43.230 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:15:43.230 00:15:43.230 real 0m32.867s 00:15:43.230 user 0m34.547s 00:15:43.230 sys 0m27.400s 00:15:43.230 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:43.230 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:43.230 ************************************ 00:15:43.230 END 
TEST nvmf_vfio_user_fuzz 00:15:43.230 ************************************ 00:15:43.230 09:53:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:43.230 09:53:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:43.230 09:53:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:43.230 09:53:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:43.230 ************************************ 00:15:43.230 START TEST nvmf_auth_target 00:15:43.230 ************************************ 00:15:43.230 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:43.230 * Looking for test storage... 00:15:43.230 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:43.230 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:43.230 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:43.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.231 --rc genhtml_branch_coverage=1 00:15:43.231 --rc genhtml_function_coverage=1 00:15:43.231 --rc genhtml_legend=1 00:15:43.231 --rc geninfo_all_blocks=1 00:15:43.231 --rc geninfo_unexecuted_blocks=1 00:15:43.231 00:15:43.231 ' 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:43.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.231 --rc genhtml_branch_coverage=1 00:15:43.231 --rc genhtml_function_coverage=1 00:15:43.231 --rc genhtml_legend=1 00:15:43.231 --rc geninfo_all_blocks=1 00:15:43.231 --rc geninfo_unexecuted_blocks=1 00:15:43.231 00:15:43.231 ' 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:43.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.231 --rc genhtml_branch_coverage=1 00:15:43.231 --rc genhtml_function_coverage=1 00:15:43.231 --rc genhtml_legend=1 00:15:43.231 --rc geninfo_all_blocks=1 00:15:43.231 --rc geninfo_unexecuted_blocks=1 00:15:43.231 00:15:43.231 ' 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:43.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.231 --rc genhtml_branch_coverage=1 00:15:43.231 --rc genhtml_function_coverage=1 00:15:43.231 --rc genhtml_legend=1 00:15:43.231 --rc geninfo_all_blocks=1 00:15:43.231 --rc geninfo_unexecuted_blocks=1 00:15:43.231 00:15:43.231 ' 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:43.231 09:53:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:43.231 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:43.231 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:43.232 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:43.232 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:43.232 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:15:43.232 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:43.232 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:43.232 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:43.232 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:43.232 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:43.232 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:43.232 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:43.232 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:43.232 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:43.232 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:43.232 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:15:43.232 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.505 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:48.505 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:15:48.505 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:48.505 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:48.505 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:48.505 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:48.505 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:48.505 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:15:48.505 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:48.505 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:15:48.505 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:15:48.505 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:15:48.505 
09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:15:48.505 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:15:48.505 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:15:48.505 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:48.505 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:48.505 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:48.505 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:48.505 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:48.505 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:48.505 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:48.505 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:48.505 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:48.505 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:48.505 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:48.505 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:48.505 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:48.505 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:48.505 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:48.505 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:48.505 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:48.505 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:48.505 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:48.505 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:48.505 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:48.506 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:48.506 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:48.506 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:48.506 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:48.506 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:48.506 09:53:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:48.506 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:48.506 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:48.506 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:48.506 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:48.506 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:48.506 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:48.506 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:48.506 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:48.506 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:48.506 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:48.506 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:48.506 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:48.506 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:48.506 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:48.506 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:48.506 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:48.506 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:48.506 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:48.506 Found net devices under 0000:af:00.0: cvl_0_0 00:15:48.506 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:48.506 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:48.506 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:48.506 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:48.506 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:48.506 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:48.506 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:48.506 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:48.506 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:48.506 Found net devices under 0000:af:00.1: cvl_0_1 00:15:48.506 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:15:48.506 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:48.506 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:15:48.506 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:48.506 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:48.506 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:48.506 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:48.506 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:48.506 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:48.506 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:48.506 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:48.506 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:48.506 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:48.506 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:48.506 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:48.506 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:48.506 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:48.506 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:48.506 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:48.506 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:48.506 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:48.765 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:48.765 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:48.765 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:48.765 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:48.765 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:48.765 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:48.765 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:48.765 09:53:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:48.765 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:48.765 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.375 ms 00:15:48.765 00:15:48.765 --- 10.0.0.2 ping statistics --- 00:15:48.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.765 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:15:48.765 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:48.765 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:48.765 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:15:48.765 00:15:48.765 --- 10.0.0.1 ping statistics --- 00:15:48.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.765 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:15:48.765 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:48.765 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:15:48.765 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:48.765 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:48.765 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:48.765 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:48.765 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:48.765 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:48.765 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:48.765 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:15:48.765 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:48.765 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:48.766 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.766 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=58995 00:15:48.766 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 58995 00:15:48.766 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:48.766 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 58995 ']' 00:15:48.766 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.766 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:48.766 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
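nvmf_tcp_init above turns the two detected e810 ports into a point-to-point lab: cvl_0_0 becomes the target NIC inside namespace cvl_0_0_ns_spdk at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens TCP 4420 on the initiator interface, and the two one-packet pings prove both directions before nvmf_tgt is launched under `ip netns exec`. A condensed sketch of the same topology, with the interface and namespace names from this run:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target NIC into the ns
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side (root ns)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                         # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target -> initiator
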
00:15:48.766 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:48.766 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.024 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:49.024 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:49.024 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:49.024 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:49.024 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.024 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:49.024 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=59199 00:15:49.024 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:15:49.024 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:49.024 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:15:49.024 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:49.024 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:49.024 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:49.024 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:15:49.024 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:49.024 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:49.024 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4e07cfc5c03a6b1efdf2de722137da30d9dc557591480cc1 00:15:49.024 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:15:49.024 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.RzP 00:15:49.024 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 4e07cfc5c03a6b1efdf2de722137da30d9dc557591480cc1 0 00:15:49.024 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 4e07cfc5c03a6b1efdf2de722137da30d9dc557591480cc1 0 00:15:49.024 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:49.024 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:49.024 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4e07cfc5c03a6b1efdf2de722137da30d9dc557591480cc1 00:15:49.024 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:15:49.024 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
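gen_dhchap_key above draws len/2 random bytes with xxd and hands the hex to a short inline python that wraps it into the DH-HMAC-CHAP secret written to /tmp/spdk.key-*. The python itself is not echoed into this log, so the following is a reconstruction assuming the standard nvme-cli representation, DHHC-1:<dd>:<base64(key || crc32(key))>:, where <dd> is 00/01/02/03 for null/SHA-256/SHA-384/SHA-512 (matching the digest indices in the array above):

key_hex=$(xxd -p -c0 -l 24 /dev/urandom)   # 48 hex chars -> 24-byte key
digest=1                                   # 0=null 1=sha256 2=sha384 3=sha512
python3 - "$key_hex" "$digest" <<'EOF'
import base64, binascii, struct, sys
key = bytes.fromhex(sys.argv[1])
# The DHHC-1 payload is the raw key followed by its CRC-32, little-endian.
blob = key + struct.pack("<I", binascii.crc32(key))
print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(blob).decode()))
EOF
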
00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.RzP 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.RzP 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.RzP 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=eb643c45e05a718a428203fd0cb5b954b39f0f22cd7bdb1a04d0e3bb647dd660 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.AUN 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key eb643c45e05a718a428203fd0cb5b954b39f0f22cd7bdb1a04d0e3bb647dd660 3 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 eb643c45e05a718a428203fd0cb5b954b39f0f22cd7bdb1a04d0e3bb647dd660 3 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=eb643c45e05a718a428203fd0cb5b954b39f0f22cd7bdb1a04d0e3bb647dd660 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.AUN 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.AUN 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.AUN 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e58815861a5be2a1f44aa35a7ee58bd2 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.NRn 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e58815861a5be2a1f44aa35a7ee58bd2 1 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e58815861a5be2a1f44aa35a7ee58bd2 1 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e58815861a5be2a1f44aa35a7ee58bd2 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.NRn 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.NRn 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.NRn 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=390c718ef9895ab57b579992073208e32fc0735d53774277 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.4lM 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 390c718ef9895ab57b579992073208e32fc0735d53774277 2 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 390c718ef9895ab57b579992073208e32fc0735d53774277 2 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:49.284 09:53:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=390c718ef9895ab57b579992073208e32fc0735d53774277 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.4lM 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.4lM 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.4lM 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c3cb5273cad90c7318c8b80021f9f9c2d63ec8a6f34e5cf8 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.OIp 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c3cb5273cad90c7318c8b80021f9f9c2d63ec8a6f34e5cf8 2 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c3cb5273cad90c7318c8b80021f9f9c2d63ec8a6f34e5cf8 2 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:49.284 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:49.285 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c3cb5273cad90c7318c8b80021f9f9c2d63ec8a6f34e5cf8 00:15:49.285 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:49.285 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:49.285 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.OIp 00:15:49.544 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.OIp 00:15:49.544 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.OIp 00:15:49.544 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:15:49.544 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
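Each keys[i]/ckeys[i] pair being filled in here is a host secret plus an optional controller secret for bidirectional DH-HMAC-CHAP; ckeys[3] is deliberately left empty further down to cover the unidirectional case. A sketch of how such a pair is typically attached to a host entry, using current SPDK RPC names; this step is not part of the excerpt above, and the key labels are illustrative:

rpc=./scripts/rpc.py
$rpc keyring_file_add_key key0 /tmp/spdk.key-null.RzP        # host secret
$rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.AUN     # controller secret
$rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
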
00:15:49.544 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:49.544 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:49.544 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:15:49.544 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:49.544 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:49.544 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=14825d318ef32d1b7ff15914f9686262 00:15:49.544 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:49.544 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.f3V 00:15:49.544 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 14825d318ef32d1b7ff15914f9686262 1 00:15:49.544 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 14825d318ef32d1b7ff15914f9686262 1 00:15:49.544 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:49.544 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:49.544 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=14825d318ef32d1b7ff15914f9686262 00:15:49.544 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:15:49.544 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:49.544 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.f3V 00:15:49.544 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.f3V 00:15:49.544 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.f3V 00:15:49.544 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:15:49.544 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:49.544 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:49.544 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:49.544 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:49.544 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:49.544 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:49.544 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ef7b363546c1cffc50127187960813d9503e1d5deb47b057383311c180078800 00:15:49.544 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:49.544 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.mMe 00:15:49.544 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key ef7b363546c1cffc50127187960813d9503e1d5deb47b057383311c180078800 3 00:15:49.544 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ef7b363546c1cffc50127187960813d9503e1d5deb47b057383311c180078800 3 00:15:49.544 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:49.544 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:49.544 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ef7b363546c1cffc50127187960813d9503e1d5deb47b057383311c180078800 00:15:49.544 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:15:49.544 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:49.544 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.mMe 00:15:49.544 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.mMe 00:15:49.544 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.mMe 00:15:49.544 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:15:49.544 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 58995 00:15:49.544 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 58995 ']' 00:15:49.544 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.544 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:49.544 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:49.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:49.544 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:49.544 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.803 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:49.803 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:49.803 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 59199 /var/tmp/host.sock 00:15:49.803 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 59199 ']' 00:15:49.803 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:15:49.803 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:49.803 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:49.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
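From this point two RPC endpoints are live: the nvmf target app answering on the default socket, and a second application playing the NVMe host, which the hostrpc helper reaches on its own socket. Stripped of the retry scaffolding, the waits above reduce to (pids are the ones from this run):

waitforlisten 58995                        # target app, default /var/tmp/spdk.sock
waitforlisten 59199 /var/tmp/host.sock     # host-side app, dedicated RPC socket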
00:15:49.803 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:49.803 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.062 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:50.062 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:50.062 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:15:50.062 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.062 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.062 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.062 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:50.062 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.RzP 00:15:50.062 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.062 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.062 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.062 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.RzP 00:15:50.062 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.RzP 00:15:50.062 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.AUN ]] 00:15:50.062 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.AUN 00:15:50.062 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.062 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.062 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.062 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.AUN 00:15:50.062 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.AUN 00:15:50.321 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:50.321 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.NRn 00:15:50.321 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.321 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.321 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.321 09:53:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.NRn 00:15:50.321 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.NRn 00:15:50.579 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.4lM ]] 00:15:50.579 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4lM 00:15:50.579 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.579 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.579 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.579 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4lM 00:15:50.579 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4lM 00:15:50.837 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:50.837 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.OIp 00:15:50.837 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.837 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.837 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.837 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.OIp 00:15:50.837 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.OIp 00:15:51.096 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.f3V ]] 00:15:51.096 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.f3V 00:15:51.096 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.096 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.096 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.096 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.f3V 00:15:51.096 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.f3V 00:15:51.096 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:51.096 09:54:00 
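Each key file is registered twice under the same name, once per keyring, because the DH-HMAC-CHAP options later refer to keys only by name. With the tracing removed, the pair of calls for key1 is just (rpc.py relative to the SPDK checkout):

scripts/rpc.py keyring_file_add_key key1 /tmp/spdk.key-sha256.NRn                         # target keyring
scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.NRn   # host keyring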
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.mMe 00:15:51.096 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.096 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.096 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.096 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.mMe 00:15:51.096 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.mMe 00:15:51.355 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:15:51.355 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:51.355 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:51.355 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:51.355 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:51.355 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:51.614 09:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:15:51.614 09:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:51.614 09:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:51.614 09:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:51.614 09:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:51.614 09:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.614 09:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.614 09:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.614 09:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.614 09:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.614 09:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.614 09:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.614 
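The @118/@119/@120/@121 markers above come from a three-deep sweep in target/auth.sh; reconstructed from the trace (array names as they appear in the markers), each pass reconfigures the host side and then runs one authenticated connect:

for digest in "${digests[@]}"; do
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
      hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      connect_authenticate "$digest" "$dhgroup" "$keyid"
    done
  done
done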
09:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.873 00:15:51.873 09:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:51.873 09:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:51.873 09:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.131 09:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.131 09:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.131 09:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.131 09:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.131 09:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.131 09:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:52.131 { 00:15:52.131 "cntlid": 1, 00:15:52.131 "qid": 0, 00:15:52.131 "state": "enabled", 00:15:52.131 "thread": "nvmf_tgt_poll_group_000", 00:15:52.131 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:15:52.131 "listen_address": { 00:15:52.131 "trtype": "TCP", 00:15:52.131 "adrfam": "IPv4", 00:15:52.131 "traddr": "10.0.0.2", 00:15:52.131 "trsvcid": "4420" 00:15:52.131 }, 00:15:52.131 "peer_address": { 00:15:52.131 "trtype": "TCP", 00:15:52.131 "adrfam": "IPv4", 00:15:52.131 "traddr": "10.0.0.1", 00:15:52.131 "trsvcid": "56158" 00:15:52.131 }, 00:15:52.131 "auth": { 00:15:52.131 "state": "completed", 00:15:52.131 "digest": "sha256", 00:15:52.131 "dhgroup": "null" 00:15:52.131 } 00:15:52.131 } 00:15:52.131 ]' 00:15:52.131 09:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:52.131 09:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:52.131 09:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:52.131 09:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:52.131 09:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:52.131 09:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.131 09:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.131 09:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.390 09:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:NGUwN2NmYzVjMDNhNmIxZWZkZjJkZTcyMjEzN2RhMzBkOWRjNTU3NTkxNDgwY2Mx6xbRdQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI2NDNjNDVlMDVhNzE4YTQyODIwM2ZkMGNiNWI5NTRiMzlmMGYyMmNkN2JkYjFhMDRkMGUzYmI2NDdkZDY2MPV4KLU=: 00:15:52.390 09:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGUwN2NmYzVjMDNhNmIxZWZkZjJkZTcyMjEzN2RhMzBkOWRjNTU3NTkxNDgwY2Mx6xbRdQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI2NDNjNDVlMDVhNzE4YTQyODIwM2ZkMGNiNWI5NTRiMzlmMGYyMmNkN2JkYjFhMDRkMGUzYmI2NDdkZDY2MPV4KLU=: 00:15:52.958 09:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.958 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.958 09:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:52.958 09:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.958 09:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.958 09:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.958 09:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:52.958 09:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:52.958 09:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:53.216 09:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:15:53.217 09:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:53.217 09:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:53.217 09:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:53.217 09:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:53.217 09:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.217 09:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.217 09:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.217 09:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.217 09:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.217 09:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.217 09:54:02 
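The secrets passed to nvme connect above are the same DHHC-1 strings that were written into the key files; nothing about the format is SPDK-specific. If nvme-cli is recent enough, a compatible secret can be produced without the shell helpers (flag spellings as in nvme-cli's gen-dhchap-key subcommand; verify against the installed version):

nvme gen-dhchap-key --hmac=1 --key-length=32    # SHA-256 transform, prints a DHHC-1:01:...: secret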
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.217 09:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.475 00:15:53.475 09:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:53.475 09:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:53.475 09:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.734 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.734 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.734 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.734 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.734 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.734 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:53.734 { 00:15:53.734 "cntlid": 3, 00:15:53.734 "qid": 0, 00:15:53.734 "state": "enabled", 00:15:53.734 "thread": "nvmf_tgt_poll_group_000", 00:15:53.734 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:15:53.734 "listen_address": { 00:15:53.734 "trtype": "TCP", 00:15:53.734 "adrfam": "IPv4", 00:15:53.734 "traddr": "10.0.0.2", 00:15:53.734 "trsvcid": "4420" 00:15:53.734 }, 00:15:53.734 "peer_address": { 00:15:53.734 "trtype": "TCP", 00:15:53.734 "adrfam": "IPv4", 00:15:53.734 "traddr": "10.0.0.1", 00:15:53.734 "trsvcid": "56184" 00:15:53.734 }, 00:15:53.734 "auth": { 00:15:53.734 "state": "completed", 00:15:53.734 "digest": "sha256", 00:15:53.734 "dhgroup": "null" 00:15:53.734 } 00:15:53.734 } 00:15:53.734 ]' 00:15:53.734 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:53.734 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:53.734 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:53.734 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:53.734 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:53.734 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.734 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.734 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.993 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTU4ODE1ODYxYTViZTJhMWY0NGFhMzVhN2VlNThiZDJviFy3: --dhchap-ctrl-secret DHHC-1:02:MzkwYzcxOGVmOTg5NWFiNTdiNTc5OTkyMDczMjA4ZTMyZmMwNzM1ZDUzNzc0Mjc3cMIesQ==: 00:15:53.993 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZTU4ODE1ODYxYTViZTJhMWY0NGFhMzVhN2VlNThiZDJviFy3: --dhchap-ctrl-secret DHHC-1:02:MzkwYzcxOGVmOTg5NWFiNTdiNTc5OTkyMDczMjA4ZTMyZmMwNzM1ZDUzNzc0Mjc3cMIesQ==: 00:15:54.560 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.560 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:54.560 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.560 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.560 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.560 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:54.560 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:54.560 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:54.819 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:15:54.819 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:54.819 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:54.819 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:54.819 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:54.819 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.819 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:54.819 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.819 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.819 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.819 09:54:04 
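Round-tripping the DHHC-1:01: secret used by nvme_connect above recovers exactly the sha256 key generated at the top of this block, which is a quick way to sanity-check a key file (same layout assumption as before: base64 payload with a 4-byte trailer):

python3 - <<'EOF'
import base64
secret = "DHHC-1:01:ZTU4ODE1ODYxYTViZTJhMWY0NGFhMzVhN2VlNThiZDJviFy3:"
raw = base64.b64decode(secret.split(":")[2])
print(raw[:-4].decode())    # -> e58815861a5be2a1f44aa35a7ee58bd2
EOF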
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:54.819 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:54.819 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:55.077 00:15:55.077 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:55.077 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:55.077 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.336 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.336 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.336 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.336 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.336 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.336 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:55.336 { 00:15:55.336 "cntlid": 5, 00:15:55.336 "qid": 0, 00:15:55.336 "state": "enabled", 00:15:55.336 "thread": "nvmf_tgt_poll_group_000", 00:15:55.336 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:15:55.336 "listen_address": { 00:15:55.336 "trtype": "TCP", 00:15:55.336 "adrfam": "IPv4", 00:15:55.336 "traddr": "10.0.0.2", 00:15:55.336 "trsvcid": "4420" 00:15:55.336 }, 00:15:55.336 "peer_address": { 00:15:55.336 "trtype": "TCP", 00:15:55.336 "adrfam": "IPv4", 00:15:55.336 "traddr": "10.0.0.1", 00:15:55.336 "trsvcid": "44956" 00:15:55.336 }, 00:15:55.336 "auth": { 00:15:55.336 "state": "completed", 00:15:55.336 "digest": "sha256", 00:15:55.336 "dhgroup": "null" 00:15:55.336 } 00:15:55.336 } 00:15:55.336 ]' 00:15:55.336 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:55.336 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:55.336 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:55.336 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:55.336 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:55.336 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.336 09:54:04 
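The three jq probes that follow each qpair dump can be collapsed into one assertion; run against the target's RPC socket (rpc.py path as used in this workspace), jq -e makes the exit status reflect the match:

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -e '.[0].auth | .digest == "sha256" and .dhgroup == "null" and .state == "completed"'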
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.336 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.595 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzNjYjUyNzNjYWQ5MGM3MzE4YzhiODAwMjFmOWY5YzJkNjNlYzhhNmYzNGU1Y2Y4xgl1vA==: --dhchap-ctrl-secret DHHC-1:01:MTQ4MjVkMzE4ZWYzMmQxYjdmZjE1OTE0Zjk2ODYyNjKEEPkO: 00:15:55.595 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzNjYjUyNzNjYWQ5MGM3MzE4YzhiODAwMjFmOWY5YzJkNjNlYzhhNmYzNGU1Y2Y4xgl1vA==: --dhchap-ctrl-secret DHHC-1:01:MTQ4MjVkMzE4ZWYzMmQxYjdmZjE1OTE0Zjk2ODYyNjKEEPkO: 00:15:56.160 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.160 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.160 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:56.160 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.160 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.160 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.160 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:56.161 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:56.161 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:56.420 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:15:56.420 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:56.420 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:56.420 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:56.420 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:56.420 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.420 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:15:56.420 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.420 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:56.420 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.420 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:56.420 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:56.420 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:56.678 00:15:56.678 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:56.678 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.678 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:56.937 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.937 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.937 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.937 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.937 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.937 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:56.937 { 00:15:56.937 "cntlid": 7, 00:15:56.937 "qid": 0, 00:15:56.937 "state": "enabled", 00:15:56.937 "thread": "nvmf_tgt_poll_group_000", 00:15:56.937 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:15:56.937 "listen_address": { 00:15:56.937 "trtype": "TCP", 00:15:56.937 "adrfam": "IPv4", 00:15:56.937 "traddr": "10.0.0.2", 00:15:56.937 "trsvcid": "4420" 00:15:56.937 }, 00:15:56.937 "peer_address": { 00:15:56.937 "trtype": "TCP", 00:15:56.937 "adrfam": "IPv4", 00:15:56.937 "traddr": "10.0.0.1", 00:15:56.937 "trsvcid": "44988" 00:15:56.937 }, 00:15:56.937 "auth": { 00:15:56.937 "state": "completed", 00:15:56.937 "digest": "sha256", 00:15:56.937 "dhgroup": "null" 00:15:56.937 } 00:15:56.937 } 00:15:56.937 ]' 00:15:56.937 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:56.937 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:56.937 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:56.937 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:56.937 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:56.937 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.937 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.937 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.196 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWY3YjM2MzU0NmMxY2ZmYzUwMTI3MTg3OTYwODEzZDk1MDNlMWQ1ZGViNDdiMDU3MzgzMzExYzE4MDA3ODgwMEnG/WY=: 00:15:57.196 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZWY3YjM2MzU0NmMxY2ZmYzUwMTI3MTg3OTYwODEzZDk1MDNlMWQ1ZGViNDdiMDU3MzgzMzExYzE4MDA3ODgwMEnG/WY=: 00:15:57.763 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.763 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.763 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:57.763 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.763 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.763 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.763 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:57.763 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:57.763 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:57.763 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:58.022 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:15:58.022 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:58.022 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:58.022 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:58.022 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:58.022 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.022 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.022 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.022 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.022 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.022 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.022 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.022 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.281 00:15:58.281 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:58.281 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:58.281 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.539 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.539 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.539 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.539 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.539 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.539 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:58.539 { 00:15:58.539 "cntlid": 9, 00:15:58.539 "qid": 0, 00:15:58.539 "state": "enabled", 00:15:58.539 "thread": "nvmf_tgt_poll_group_000", 00:15:58.539 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:15:58.539 "listen_address": { 00:15:58.539 "trtype": "TCP", 00:15:58.539 "adrfam": "IPv4", 00:15:58.539 "traddr": "10.0.0.2", 00:15:58.539 "trsvcid": "4420" 00:15:58.539 }, 00:15:58.539 "peer_address": { 00:15:58.539 "trtype": "TCP", 00:15:58.539 "adrfam": "IPv4", 00:15:58.539 "traddr": "10.0.0.1", 00:15:58.539 "trsvcid": "45014" 00:15:58.539 }, 00:15:58.539 "auth": { 00:15:58.539 "state": "completed", 00:15:58.539 "digest": "sha256", 00:15:58.539 "dhgroup": "ffdhe2048" 00:15:58.539 } 00:15:58.539 } 00:15:58.539 ]' 00:15:58.539 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:58.539 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:58.539 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:58.539 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:15:58.539 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:58.539 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.539 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.539 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.798 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGUwN2NmYzVjMDNhNmIxZWZkZjJkZTcyMjEzN2RhMzBkOWRjNTU3NTkxNDgwY2Mx6xbRdQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI2NDNjNDVlMDVhNzE4YTQyODIwM2ZkMGNiNWI5NTRiMzlmMGYyMmNkN2JkYjFhMDRkMGUzYmI2NDdkZDY2MPV4KLU=: 00:15:58.798 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGUwN2NmYzVjMDNhNmIxZWZkZjJkZTcyMjEzN2RhMzBkOWRjNTU3NTkxNDgwY2Mx6xbRdQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI2NDNjNDVlMDVhNzE4YTQyODIwM2ZkMGNiNWI5NTRiMzlmMGYyMmNkN2JkYjFhMDRkMGUzYmI2NDdkZDY2MPV4KLU=: 00:15:59.365 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.365 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.365 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:59.365 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.365 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.365 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.365 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:59.365 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:59.365 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:59.624 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:15:59.624 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:59.624 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:59.624 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:59.624 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:59.624 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:59.624 09:54:08 
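The ckey=() line above is what makes the controller key optional: bash's ${var:+word} expansion produces nothing when ckeys[$3] is empty, which is why the key3 passes earlier called nvmf_subsystem_add_host with --dhchap-key key3 and no --dhchap-ctrlr-key. The idiom in isolation:

ckeys[3]=""
ckey=(${ckeys[3]:+--dhchap-ctrlr-key "ckey3"})
echo ${#ckey[@]}    # 0 -> no controller-key arguments added
ckeys[1]=/tmp/spdk.key-sha384.4lM
ckey=(${ckeys[1]:+--dhchap-ctrlr-key "ckey1"})
echo ${#ckey[@]}    # 2 -> --dhchap-ctrlr-key ckey1 is appended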
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.624 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.624 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.624 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.624 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.624 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.624 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.883 00:15:59.884 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:59.884 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.884 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:00.143 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.143 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.143 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.143 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.143 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.143 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:00.143 { 00:16:00.143 "cntlid": 11, 00:16:00.143 "qid": 0, 00:16:00.143 "state": "enabled", 00:16:00.143 "thread": "nvmf_tgt_poll_group_000", 00:16:00.143 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:00.143 "listen_address": { 00:16:00.143 "trtype": "TCP", 00:16:00.143 "adrfam": "IPv4", 00:16:00.143 "traddr": "10.0.0.2", 00:16:00.143 "trsvcid": "4420" 00:16:00.143 }, 00:16:00.143 "peer_address": { 00:16:00.143 "trtype": "TCP", 00:16:00.143 "adrfam": "IPv4", 00:16:00.143 "traddr": "10.0.0.1", 00:16:00.143 "trsvcid": "45054" 00:16:00.143 }, 00:16:00.143 "auth": { 00:16:00.143 "state": "completed", 00:16:00.143 "digest": "sha256", 00:16:00.143 "dhgroup": "ffdhe2048" 00:16:00.143 } 00:16:00.143 } 00:16:00.143 ]' 00:16:00.143 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:00.143 09:54:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:00.143 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:00.143 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:00.143 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:00.143 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.143 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.143 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.401 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTU4ODE1ODYxYTViZTJhMWY0NGFhMzVhN2VlNThiZDJviFy3: --dhchap-ctrl-secret DHHC-1:02:MzkwYzcxOGVmOTg5NWFiNTdiNTc5OTkyMDczMjA4ZTMyZmMwNzM1ZDUzNzc0Mjc3cMIesQ==: 00:16:00.401 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZTU4ODE1ODYxYTViZTJhMWY0NGFhMzVhN2VlNThiZDJviFy3: --dhchap-ctrl-secret DHHC-1:02:MzkwYzcxOGVmOTg5NWFiNTdiNTc5OTkyMDczMjA4ZTMyZmMwNzM1ZDUzNzc0Mjc3cMIesQ==: 00:16:00.968 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.968 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.968 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:00.968 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.968 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.968 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.968 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:00.968 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:00.968 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:01.226 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:16:01.226 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:01.226 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:01.226 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:01.226 09:54:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:01.226 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.226 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.226 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.226 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.226 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.226 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.226 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.226 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.485 00:16:01.485 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:01.485 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:01.485 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.485 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.485 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.485 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.485 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.485 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.485 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:01.485 { 00:16:01.485 "cntlid": 13, 00:16:01.485 "qid": 0, 00:16:01.485 "state": "enabled", 00:16:01.485 "thread": "nvmf_tgt_poll_group_000", 00:16:01.485 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:01.485 "listen_address": { 00:16:01.485 "trtype": "TCP", 00:16:01.485 "adrfam": "IPv4", 00:16:01.485 "traddr": "10.0.0.2", 00:16:01.485 "trsvcid": "4420" 00:16:01.485 }, 00:16:01.485 "peer_address": { 00:16:01.485 "trtype": "TCP", 00:16:01.485 "adrfam": "IPv4", 00:16:01.485 "traddr": "10.0.0.1", 00:16:01.485 "trsvcid": "45098" 00:16:01.485 }, 00:16:01.485 "auth": { 00:16:01.485 "state": "completed", 00:16:01.485 "digest": 
"sha256", 00:16:01.485 "dhgroup": "ffdhe2048" 00:16:01.485 } 00:16:01.485 } 00:16:01.485 ]' 00:16:01.485 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:01.743 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:01.743 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:01.743 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:01.743 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:01.743 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.744 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.744 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.002 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzNjYjUyNzNjYWQ5MGM3MzE4YzhiODAwMjFmOWY5YzJkNjNlYzhhNmYzNGU1Y2Y4xgl1vA==: --dhchap-ctrl-secret DHHC-1:01:MTQ4MjVkMzE4ZWYzMmQxYjdmZjE1OTE0Zjk2ODYyNjKEEPkO: 00:16:02.002 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzNjYjUyNzNjYWQ5MGM3MzE4YzhiODAwMjFmOWY5YzJkNjNlYzhhNmYzNGU1Y2Y4xgl1vA==: --dhchap-ctrl-secret DHHC-1:01:MTQ4MjVkMzE4ZWYzMmQxYjdmZjE1OTE0Zjk2ODYyNjKEEPkO: 00:16:02.570 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.570 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.570 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:02.570 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.570 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.570 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.570 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:02.570 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:02.570 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:02.829 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:16:02.829 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:02.829 09:54:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:02.829 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:02.829 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:02.829 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.829 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:16:02.829 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.829 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.829 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.829 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:02.829 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:02.829 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:02.829 00:16:03.088 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:03.088 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:03.088 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.088 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.088 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.088 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.088 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.088 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.088 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:03.088 { 00:16:03.088 "cntlid": 15, 00:16:03.088 "qid": 0, 00:16:03.088 "state": "enabled", 00:16:03.088 "thread": "nvmf_tgt_poll_group_000", 00:16:03.088 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:03.088 "listen_address": { 00:16:03.088 "trtype": "TCP", 00:16:03.088 "adrfam": "IPv4", 00:16:03.088 "traddr": "10.0.0.2", 00:16:03.088 "trsvcid": "4420" 00:16:03.088 }, 00:16:03.088 "peer_address": { 00:16:03.088 "trtype": "TCP", 00:16:03.088 "adrfam": "IPv4", 00:16:03.088 "traddr": "10.0.0.1", 00:16:03.088 
"trsvcid": "45134" 00:16:03.088 }, 00:16:03.088 "auth": { 00:16:03.088 "state": "completed", 00:16:03.088 "digest": "sha256", 00:16:03.088 "dhgroup": "ffdhe2048" 00:16:03.088 } 00:16:03.088 } 00:16:03.088 ]' 00:16:03.088 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:03.088 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:03.088 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:03.346 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:03.346 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:03.346 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.346 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.346 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.605 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWY3YjM2MzU0NmMxY2ZmYzUwMTI3MTg3OTYwODEzZDk1MDNlMWQ1ZGViNDdiMDU3MzgzMzExYzE4MDA3ODgwMEnG/WY=: 00:16:03.605 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZWY3YjM2MzU0NmMxY2ZmYzUwMTI3MTg3OTYwODEzZDk1MDNlMWQ1ZGViNDdiMDU3MzgzMzExYzE4MDA3ODgwMEnG/WY=: 00:16:04.172 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.172 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.172 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:04.172 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.172 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.172 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.172 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:04.172 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:04.172 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:04.172 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:04.172 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:16:04.172 09:54:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:04.172 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:04.172 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:04.172 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:04.172 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.172 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.172 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.172 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.172 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.172 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.172 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.430 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.430 00:16:04.689 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:04.689 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:04.689 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.689 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.689 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.689 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.689 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.689 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.689 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:04.689 { 00:16:04.689 "cntlid": 17, 00:16:04.689 "qid": 0, 00:16:04.689 "state": "enabled", 00:16:04.689 "thread": "nvmf_tgt_poll_group_000", 00:16:04.689 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:04.689 "listen_address": { 00:16:04.689 "trtype": "TCP", 00:16:04.689 "adrfam": "IPv4", 
00:16:04.689 "traddr": "10.0.0.2", 00:16:04.689 "trsvcid": "4420" 00:16:04.689 }, 00:16:04.689 "peer_address": { 00:16:04.689 "trtype": "TCP", 00:16:04.689 "adrfam": "IPv4", 00:16:04.689 "traddr": "10.0.0.1", 00:16:04.689 "trsvcid": "45156" 00:16:04.689 }, 00:16:04.689 "auth": { 00:16:04.689 "state": "completed", 00:16:04.689 "digest": "sha256", 00:16:04.689 "dhgroup": "ffdhe3072" 00:16:04.689 } 00:16:04.689 } 00:16:04.689 ]' 00:16:04.689 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:04.948 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:04.948 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:04.948 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:04.948 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:04.948 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.948 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.948 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.207 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGUwN2NmYzVjMDNhNmIxZWZkZjJkZTcyMjEzN2RhMzBkOWRjNTU3NTkxNDgwY2Mx6xbRdQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI2NDNjNDVlMDVhNzE4YTQyODIwM2ZkMGNiNWI5NTRiMzlmMGYyMmNkN2JkYjFhMDRkMGUzYmI2NDdkZDY2MPV4KLU=: 00:16:05.207 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGUwN2NmYzVjMDNhNmIxZWZkZjJkZTcyMjEzN2RhMzBkOWRjNTU3NTkxNDgwY2Mx6xbRdQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI2NDNjNDVlMDVhNzE4YTQyODIwM2ZkMGNiNWI5NTRiMzlmMGYyMmNkN2JkYjFhMDRkMGUzYmI2NDdkZDY2MPV4KLU=: 00:16:05.773 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.773 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.773 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:05.773 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.773 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.773 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.773 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:05.773 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:05.773 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:05.773 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:16:05.773 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:05.773 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:05.773 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:05.773 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:05.773 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.773 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.773 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.773 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.773 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.773 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.774 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.774 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.032 00:16:06.032 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:06.032 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:06.032 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.290 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.290 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.290 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.290 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.290 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.290 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:06.290 { 
00:16:06.290 "cntlid": 19, 00:16:06.290 "qid": 0, 00:16:06.290 "state": "enabled", 00:16:06.290 "thread": "nvmf_tgt_poll_group_000", 00:16:06.290 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:06.290 "listen_address": { 00:16:06.290 "trtype": "TCP", 00:16:06.290 "adrfam": "IPv4", 00:16:06.291 "traddr": "10.0.0.2", 00:16:06.291 "trsvcid": "4420" 00:16:06.291 }, 00:16:06.291 "peer_address": { 00:16:06.291 "trtype": "TCP", 00:16:06.291 "adrfam": "IPv4", 00:16:06.291 "traddr": "10.0.0.1", 00:16:06.291 "trsvcid": "50502" 00:16:06.291 }, 00:16:06.291 "auth": { 00:16:06.291 "state": "completed", 00:16:06.291 "digest": "sha256", 00:16:06.291 "dhgroup": "ffdhe3072" 00:16:06.291 } 00:16:06.291 } 00:16:06.291 ]' 00:16:06.291 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:06.291 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:06.291 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:06.551 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:06.551 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:06.551 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.551 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.551 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.809 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTU4ODE1ODYxYTViZTJhMWY0NGFhMzVhN2VlNThiZDJviFy3: --dhchap-ctrl-secret DHHC-1:02:MzkwYzcxOGVmOTg5NWFiNTdiNTc5OTkyMDczMjA4ZTMyZmMwNzM1ZDUzNzc0Mjc3cMIesQ==: 00:16:06.809 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZTU4ODE1ODYxYTViZTJhMWY0NGFhMzVhN2VlNThiZDJviFy3: --dhchap-ctrl-secret DHHC-1:02:MzkwYzcxOGVmOTg5NWFiNTdiNTc5OTkyMDczMjA4ZTMyZmMwNzM1ZDUzNzc0Mjc3cMIesQ==: 00:16:07.375 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.375 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.375 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:07.375 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.375 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.375 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.375 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:07.375 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:07.375 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:07.375 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:16:07.375 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:07.375 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:07.375 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:07.375 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:07.375 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.375 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:07.375 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.375 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.376 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.376 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:07.376 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:07.376 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:07.635 00:16:07.894 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:07.894 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:07.894 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.894 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.894 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.894 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.894 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.894 09:54:17 
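Every rpc_cmd in the trace is bracketed by the same three common/autotest_common.sh lines: @563 xtrace_disable, @10 set +x, and @591 [[ 0 == 0 ]]. The first two silence the shell trace inside the helper; the last is the helper asserting the RPC's exit status. A purely illustrative sketch of that pattern (the real helper may well differ, e.g. by keeping a persistent rpc.py session):

  rpc_cmd() {
      xtrace_disable                      # @563: stop tracing helper internals
      "$rootdir/scripts/rpc.py" "$@"
      local rc=$?
      xtrace_restore
      [[ $rc == 0 ]]                      # @591: the recurring '[[ 0 == 0 ]]' lines
  }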
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.894 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:07.894 { 00:16:07.894 "cntlid": 21, 00:16:07.894 "qid": 0, 00:16:07.894 "state": "enabled", 00:16:07.894 "thread": "nvmf_tgt_poll_group_000", 00:16:07.894 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:07.894 "listen_address": { 00:16:07.894 "trtype": "TCP", 00:16:07.894 "adrfam": "IPv4", 00:16:07.894 "traddr": "10.0.0.2", 00:16:07.894 "trsvcid": "4420" 00:16:07.894 }, 00:16:07.894 "peer_address": { 00:16:07.894 "trtype": "TCP", 00:16:07.894 "adrfam": "IPv4", 00:16:07.894 "traddr": "10.0.0.1", 00:16:07.894 "trsvcid": "50522" 00:16:07.894 }, 00:16:07.894 "auth": { 00:16:07.894 "state": "completed", 00:16:07.894 "digest": "sha256", 00:16:07.894 "dhgroup": "ffdhe3072" 00:16:07.894 } 00:16:07.894 } 00:16:07.894 ]' 00:16:07.894 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:08.153 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:08.153 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:08.153 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:08.153 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:08.153 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.153 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.153 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.411 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzNjYjUyNzNjYWQ5MGM3MzE4YzhiODAwMjFmOWY5YzJkNjNlYzhhNmYzNGU1Y2Y4xgl1vA==: --dhchap-ctrl-secret DHHC-1:01:MTQ4MjVkMzE4ZWYzMmQxYjdmZjE1OTE0Zjk2ODYyNjKEEPkO: 00:16:08.411 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzNjYjUyNzNjYWQ5MGM3MzE4YzhiODAwMjFmOWY5YzJkNjNlYzhhNmYzNGU1Y2Y4xgl1vA==: --dhchap-ctrl-secret DHHC-1:01:MTQ4MjVkMzE4ZWYzMmQxYjdmZjE1OTE0Zjk2ODYyNjKEEPkO: 00:16:08.978 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.978 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.978 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:08.978 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.978 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.978 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:08.978 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:08.978 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:08.978 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:08.978 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:16:08.978 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:08.978 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:08.978 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:08.978 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:08.978 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.978 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:16:08.978 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.978 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.978 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.978 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:08.978 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:08.978 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:09.236 00:16:09.236 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:09.236 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:09.236 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.494 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.494 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.494 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.494 09:54:18 
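bdev_connect (target/auth.sh@60) is the SPDK-initiator counterpart of nvme connect: it asks the host app to attach an NVMe-oF controller with the given DHCHAP key(s), and success is then confirmed by finding nvme0 in bdev_nvme_get_controllers. As a sketch, with the fixed transport arguments taken from the trace and $hostnqn/$subnqn assumed:

  bdev_connect() {
      hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
          -q "$hostnqn" -n "$subnqn" "$@"
  }
  bdev_connect -b nvme0 --dhchap-key key3   # key3 pass: host-side key only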
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.494 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.494 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:09.494 { 00:16:09.494 "cntlid": 23, 00:16:09.494 "qid": 0, 00:16:09.494 "state": "enabled", 00:16:09.494 "thread": "nvmf_tgt_poll_group_000", 00:16:09.494 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:09.494 "listen_address": { 00:16:09.494 "trtype": "TCP", 00:16:09.494 "adrfam": "IPv4", 00:16:09.494 "traddr": "10.0.0.2", 00:16:09.494 "trsvcid": "4420" 00:16:09.494 }, 00:16:09.494 "peer_address": { 00:16:09.494 "trtype": "TCP", 00:16:09.494 "adrfam": "IPv4", 00:16:09.494 "traddr": "10.0.0.1", 00:16:09.494 "trsvcid": "50544" 00:16:09.494 }, 00:16:09.494 "auth": { 00:16:09.494 "state": "completed", 00:16:09.494 "digest": "sha256", 00:16:09.494 "dhgroup": "ffdhe3072" 00:16:09.494 } 00:16:09.494 } 00:16:09.494 ]' 00:16:09.495 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:09.495 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:09.495 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:09.753 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:09.753 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:09.753 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.753 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.753 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.011 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWY3YjM2MzU0NmMxY2ZmYzUwMTI3MTg3OTYwODEzZDk1MDNlMWQ1ZGViNDdiMDU3MzgzMzExYzE4MDA3ODgwMEnG/WY=: 00:16:10.011 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZWY3YjM2MzU0NmMxY2ZmYzUwMTI3MTg3OTYwODEzZDk1MDNlMWQ1ZGViNDdiMDU3MzgzMzExYzE4MDA3ODgwMEnG/WY=: 00:16:10.577 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.577 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.577 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:10.577 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.577 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.577 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:10.577 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:10.577 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:10.578 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:10.578 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:10.578 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:16:10.578 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:10.578 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:10.578 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:10.578 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:10.578 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.578 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.578 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.578 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.836 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.836 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.836 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.836 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:11.094 00:16:11.094 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:11.094 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:11.094 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.094 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.094 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.094 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.094 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.094 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.094 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:11.094 { 00:16:11.094 "cntlid": 25, 00:16:11.094 "qid": 0, 00:16:11.094 "state": "enabled", 00:16:11.094 "thread": "nvmf_tgt_poll_group_000", 00:16:11.094 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:11.094 "listen_address": { 00:16:11.094 "trtype": "TCP", 00:16:11.094 "adrfam": "IPv4", 00:16:11.094 "traddr": "10.0.0.2", 00:16:11.094 "trsvcid": "4420" 00:16:11.094 }, 00:16:11.094 "peer_address": { 00:16:11.094 "trtype": "TCP", 00:16:11.094 "adrfam": "IPv4", 00:16:11.094 "traddr": "10.0.0.1", 00:16:11.094 "trsvcid": "50588" 00:16:11.095 }, 00:16:11.095 "auth": { 00:16:11.095 "state": "completed", 00:16:11.095 "digest": "sha256", 00:16:11.095 "dhgroup": "ffdhe4096" 00:16:11.095 } 00:16:11.095 } 00:16:11.095 ]' 00:16:11.095 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:11.354 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:11.354 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:11.354 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:11.354 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:11.354 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.354 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.354 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.612 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGUwN2NmYzVjMDNhNmIxZWZkZjJkZTcyMjEzN2RhMzBkOWRjNTU3NTkxNDgwY2Mx6xbRdQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI2NDNjNDVlMDVhNzE4YTQyODIwM2ZkMGNiNWI5NTRiMzlmMGYyMmNkN2JkYjFhMDRkMGUzYmI2NDdkZDY2MPV4KLU=: 00:16:11.612 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGUwN2NmYzVjMDNhNmIxZWZkZjJkZTcyMjEzN2RhMzBkOWRjNTU3NTkxNDgwY2Mx6xbRdQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI2NDNjNDVlMDVhNzE4YTQyODIwM2ZkMGNiNWI5NTRiMzlmMGYyMmNkN2JkYjFhMDRkMGUzYmI2NDdkZDY2MPV4KLU=: 00:16:12.180 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.180 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.180 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:12.180 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.180 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.180 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.180 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:12.180 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:12.180 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:12.180 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:16:12.180 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:12.180 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:12.180 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:12.180 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:12.180 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.180 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.180 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.180 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.180 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.180 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.180 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.180 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.439 00:16:12.698 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:12.698 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:12.698 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.698 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.698 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.698 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.698 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.698 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.698 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:12.698 { 00:16:12.698 "cntlid": 27, 00:16:12.698 "qid": 0, 00:16:12.698 "state": "enabled", 00:16:12.698 "thread": "nvmf_tgt_poll_group_000", 00:16:12.698 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:12.698 "listen_address": { 00:16:12.698 "trtype": "TCP", 00:16:12.698 "adrfam": "IPv4", 00:16:12.698 "traddr": "10.0.0.2", 00:16:12.698 "trsvcid": "4420" 00:16:12.698 }, 00:16:12.698 "peer_address": { 00:16:12.698 "trtype": "TCP", 00:16:12.698 "adrfam": "IPv4", 00:16:12.698 "traddr": "10.0.0.1", 00:16:12.698 "trsvcid": "50614" 00:16:12.698 }, 00:16:12.698 "auth": { 00:16:12.698 "state": "completed", 00:16:12.698 "digest": "sha256", 00:16:12.698 "dhgroup": "ffdhe4096" 00:16:12.698 } 00:16:12.698 } 00:16:12.698 ]' 00:16:12.698 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:12.957 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:12.957 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:12.957 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:12.957 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:12.957 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.957 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.957 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.215 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTU4ODE1ODYxYTViZTJhMWY0NGFhMzVhN2VlNThiZDJviFy3: --dhchap-ctrl-secret DHHC-1:02:MzkwYzcxOGVmOTg5NWFiNTdiNTc5OTkyMDczMjA4ZTMyZmMwNzM1ZDUzNzc0Mjc3cMIesQ==: 00:16:13.215 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZTU4ODE1ODYxYTViZTJhMWY0NGFhMzVhN2VlNThiZDJviFy3: --dhchap-ctrl-secret DHHC-1:02:MzkwYzcxOGVmOTg5NWFiNTdiNTc5OTkyMDczMjA4ZTMyZmMwNzM1ZDUzNzc0Mjc3cMIesQ==: 00:16:13.783 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:16:13.783 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.783 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:13.783 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.783 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.783 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.783 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:13.783 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:13.783 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:13.783 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:16:13.783 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:13.783 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:13.783 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:13.783 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:13.783 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.783 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.783 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.783 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.783 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.783 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.783 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.783 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:14.041 00:16:14.300 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
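Each iteration tears down everything it created so the next (dhgroup, key) combination starts from a clean slate; removing the host from the subsystem also drops its DHCHAP key binding. Consolidated from the commands as they appear in the trace:

  hostrpc bdev_nvme_detach_controller nvme0         # drop the SPDK-initiator controller
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0     # drop the kernel-initiator controller
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"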
00:16:14.300 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:14.300 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.300 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.300 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.300 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.300 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.300 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.300 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:14.300 { 00:16:14.300 "cntlid": 29, 00:16:14.300 "qid": 0, 00:16:14.300 "state": "enabled", 00:16:14.300 "thread": "nvmf_tgt_poll_group_000", 00:16:14.300 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:14.300 "listen_address": { 00:16:14.300 "trtype": "TCP", 00:16:14.300 "adrfam": "IPv4", 00:16:14.300 "traddr": "10.0.0.2", 00:16:14.300 "trsvcid": "4420" 00:16:14.300 }, 00:16:14.300 "peer_address": { 00:16:14.300 "trtype": "TCP", 00:16:14.300 "adrfam": "IPv4", 00:16:14.300 "traddr": "10.0.0.1", 00:16:14.300 "trsvcid": "50624" 00:16:14.300 }, 00:16:14.300 "auth": { 00:16:14.300 "state": "completed", 00:16:14.300 "digest": "sha256", 00:16:14.300 "dhgroup": "ffdhe4096" 00:16:14.300 } 00:16:14.300 } 00:16:14.300 ]' 00:16:14.300 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:14.300 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:14.559 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:14.559 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:14.559 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:14.559 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.559 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.559 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.817 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzNjYjUyNzNjYWQ5MGM3MzE4YzhiODAwMjFmOWY5YzJkNjNlYzhhNmYzNGU1Y2Y4xgl1vA==: --dhchap-ctrl-secret DHHC-1:01:MTQ4MjVkMzE4ZWYzMmQxYjdmZjE1OTE0Zjk2ODYyNjKEEPkO: 00:16:14.817 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzNjYjUyNzNjYWQ5MGM3MzE4YzhiODAwMjFmOWY5YzJkNjNlYzhhNmYzNGU1Y2Y4xgl1vA==: 
--dhchap-ctrl-secret DHHC-1:01:MTQ4MjVkMzE4ZWYzMmQxYjdmZjE1OTE0Zjk2ODYyNjKEEPkO: 00:16:15.384 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.384 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.384 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:15.384 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.384 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.384 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.384 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:15.384 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:15.384 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:15.385 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:16:15.385 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:15.385 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:15.385 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:15.385 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:15.385 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.385 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:16:15.385 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.385 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.385 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.385 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:15.385 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:15.385 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:15.643 00:16:15.643 09:54:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:15.643 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:15.643 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.901 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.901 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.901 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.901 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.901 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.901 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:15.901 { 00:16:15.901 "cntlid": 31, 00:16:15.901 "qid": 0, 00:16:15.901 "state": "enabled", 00:16:15.901 "thread": "nvmf_tgt_poll_group_000", 00:16:15.901 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:15.901 "listen_address": { 00:16:15.901 "trtype": "TCP", 00:16:15.901 "adrfam": "IPv4", 00:16:15.901 "traddr": "10.0.0.2", 00:16:15.901 "trsvcid": "4420" 00:16:15.901 }, 00:16:15.901 "peer_address": { 00:16:15.901 "trtype": "TCP", 00:16:15.901 "adrfam": "IPv4", 00:16:15.901 "traddr": "10.0.0.1", 00:16:15.901 "trsvcid": "34824" 00:16:15.901 }, 00:16:15.901 "auth": { 00:16:15.901 "state": "completed", 00:16:15.901 "digest": "sha256", 00:16:15.901 "dhgroup": "ffdhe4096" 00:16:15.901 } 00:16:15.901 } 00:16:15.901 ]' 00:16:15.901 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:15.901 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:15.901 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:16.160 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:16.160 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:16.160 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.160 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.160 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.160 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWY3YjM2MzU0NmMxY2ZmYzUwMTI3MTg3OTYwODEzZDk1MDNlMWQ1ZGViNDdiMDU3MzgzMzExYzE4MDA3ODgwMEnG/WY=: 00:16:16.161 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:03:ZWY3YjM2MzU0NmMxY2ZmYzUwMTI3MTg3OTYwODEzZDk1MDNlMWQ1ZGViNDdiMDU3MzgzMzExYzE4MDA3ODgwMEnG/WY=: 00:16:16.728 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.728 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.022 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:17.022 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.022 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.022 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.022 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:17.022 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:17.022 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:17.022 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:17.022 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:16:17.022 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:17.022 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:17.022 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:17.022 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:17.022 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.022 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.022 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.022 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.022 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.022 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.022 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.022 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.383 00:16:17.383 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:17.383 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:17.383 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.669 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.669 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.669 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.669 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.669 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.669 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:17.669 { 00:16:17.669 "cntlid": 33, 00:16:17.669 "qid": 0, 00:16:17.669 "state": "enabled", 00:16:17.669 "thread": "nvmf_tgt_poll_group_000", 00:16:17.669 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:17.669 "listen_address": { 00:16:17.669 "trtype": "TCP", 00:16:17.669 "adrfam": "IPv4", 00:16:17.669 "traddr": "10.0.0.2", 00:16:17.669 "trsvcid": "4420" 00:16:17.669 }, 00:16:17.669 "peer_address": { 00:16:17.669 "trtype": "TCP", 00:16:17.669 "adrfam": "IPv4", 00:16:17.669 "traddr": "10.0.0.1", 00:16:17.669 "trsvcid": "34844" 00:16:17.669 }, 00:16:17.669 "auth": { 00:16:17.669 "state": "completed", 00:16:17.669 "digest": "sha256", 00:16:17.669 "dhgroup": "ffdhe6144" 00:16:17.669 } 00:16:17.669 } 00:16:17.669 ]' 00:16:17.669 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:17.669 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:17.669 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:17.669 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:17.669 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:17.669 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.669 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.669 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.928 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGUwN2NmYzVjMDNhNmIxZWZkZjJkZTcyMjEzN2RhMzBkOWRjNTU3NTkxNDgwY2Mx6xbRdQ==: --dhchap-ctrl-secret 
DHHC-1:03:ZWI2NDNjNDVlMDVhNzE4YTQyODIwM2ZkMGNiNWI5NTRiMzlmMGYyMmNkN2JkYjFhMDRkMGUzYmI2NDdkZDY2MPV4KLU=: 00:16:17.928 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGUwN2NmYzVjMDNhNmIxZWZkZjJkZTcyMjEzN2RhMzBkOWRjNTU3NTkxNDgwY2Mx6xbRdQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI2NDNjNDVlMDVhNzE4YTQyODIwM2ZkMGNiNWI5NTRiMzlmMGYyMmNkN2JkYjFhMDRkMGUzYmI2NDdkZDY2MPV4KLU=: 00:16:18.495 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.495 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.495 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:18.495 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.495 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.495 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.495 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:18.495 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:18.495 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:18.754 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:16:18.754 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:18.754 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:18.754 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:18.754 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:18.754 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.754 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.754 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.754 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.754 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.754 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.754 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.754 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.012 00:16:19.271 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:19.271 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:19.271 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.271 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.271 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.272 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.272 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.272 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.272 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:19.272 { 00:16:19.272 "cntlid": 35, 00:16:19.272 "qid": 0, 00:16:19.272 "state": "enabled", 00:16:19.272 "thread": "nvmf_tgt_poll_group_000", 00:16:19.272 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:19.272 "listen_address": { 00:16:19.272 "trtype": "TCP", 00:16:19.272 "adrfam": "IPv4", 00:16:19.272 "traddr": "10.0.0.2", 00:16:19.272 "trsvcid": "4420" 00:16:19.272 }, 00:16:19.272 "peer_address": { 00:16:19.272 "trtype": "TCP", 00:16:19.272 "adrfam": "IPv4", 00:16:19.272 "traddr": "10.0.0.1", 00:16:19.272 "trsvcid": "34868" 00:16:19.272 }, 00:16:19.272 "auth": { 00:16:19.272 "state": "completed", 00:16:19.272 "digest": "sha256", 00:16:19.272 "dhgroup": "ffdhe6144" 00:16:19.272 } 00:16:19.272 } 00:16:19.272 ]' 00:16:19.272 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:19.272 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:19.533 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:19.533 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:19.533 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:19.533 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.533 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.533 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.791 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTU4ODE1ODYxYTViZTJhMWY0NGFhMzVhN2VlNThiZDJviFy3: --dhchap-ctrl-secret DHHC-1:02:MzkwYzcxOGVmOTg5NWFiNTdiNTc5OTkyMDczMjA4ZTMyZmMwNzM1ZDUzNzc0Mjc3cMIesQ==: 00:16:19.791 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZTU4ODE1ODYxYTViZTJhMWY0NGFhMzVhN2VlNThiZDJviFy3: --dhchap-ctrl-secret DHHC-1:02:MzkwYzcxOGVmOTg5NWFiNTdiNTc5OTkyMDczMjA4ZTMyZmMwNzM1ZDUzNzc0Mjc3cMIesQ==: 00:16:20.356 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.356 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.356 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:20.356 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.356 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.356 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.356 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:20.356 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:20.356 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:20.356 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:16:20.356 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:20.356 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:20.356 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:20.356 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:20.356 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.356 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.356 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.356 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.614 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.614 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.614 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.614 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.872 00:16:20.872 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:20.872 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:20.872 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.131 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.131 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.131 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.131 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.131 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.131 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:21.131 { 00:16:21.131 "cntlid": 37, 00:16:21.131 "qid": 0, 00:16:21.131 "state": "enabled", 00:16:21.131 "thread": "nvmf_tgt_poll_group_000", 00:16:21.131 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:21.131 "listen_address": { 00:16:21.131 "trtype": "TCP", 00:16:21.131 "adrfam": "IPv4", 00:16:21.131 "traddr": "10.0.0.2", 00:16:21.131 "trsvcid": "4420" 00:16:21.131 }, 00:16:21.131 "peer_address": { 00:16:21.131 "trtype": "TCP", 00:16:21.131 "adrfam": "IPv4", 00:16:21.131 "traddr": "10.0.0.1", 00:16:21.131 "trsvcid": "34890" 00:16:21.131 }, 00:16:21.131 "auth": { 00:16:21.131 "state": "completed", 00:16:21.131 "digest": "sha256", 00:16:21.131 "dhgroup": "ffdhe6144" 00:16:21.131 } 00:16:21.131 } 00:16:21.131 ]' 00:16:21.131 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:21.131 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:21.131 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:21.131 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:21.131 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:21.131 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.131 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:16:21.131 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.389 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzNjYjUyNzNjYWQ5MGM3MzE4YzhiODAwMjFmOWY5YzJkNjNlYzhhNmYzNGU1Y2Y4xgl1vA==: --dhchap-ctrl-secret DHHC-1:01:MTQ4MjVkMzE4ZWYzMmQxYjdmZjE1OTE0Zjk2ODYyNjKEEPkO: 00:16:21.389 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzNjYjUyNzNjYWQ5MGM3MzE4YzhiODAwMjFmOWY5YzJkNjNlYzhhNmYzNGU1Y2Y4xgl1vA==: --dhchap-ctrl-secret DHHC-1:01:MTQ4MjVkMzE4ZWYzMmQxYjdmZjE1OTE0Zjk2ODYyNjKEEPkO: 00:16:21.957 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.957 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.957 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:21.957 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.957 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.957 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.957 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:21.957 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:21.957 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:22.216 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:16:22.216 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:22.216 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:22.216 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:22.216 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:22.216 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.216 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:16:22.216 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.216 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.216 09:54:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.216 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:22.216 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:22.216 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:22.475 00:16:22.475 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:22.475 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:22.475 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.734 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.734 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.734 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.734 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.734 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.734 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:22.734 { 00:16:22.734 "cntlid": 39, 00:16:22.734 "qid": 0, 00:16:22.734 "state": "enabled", 00:16:22.734 "thread": "nvmf_tgt_poll_group_000", 00:16:22.734 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:22.734 "listen_address": { 00:16:22.734 "trtype": "TCP", 00:16:22.734 "adrfam": "IPv4", 00:16:22.734 "traddr": "10.0.0.2", 00:16:22.734 "trsvcid": "4420" 00:16:22.734 }, 00:16:22.734 "peer_address": { 00:16:22.734 "trtype": "TCP", 00:16:22.734 "adrfam": "IPv4", 00:16:22.734 "traddr": "10.0.0.1", 00:16:22.734 "trsvcid": "34918" 00:16:22.734 }, 00:16:22.734 "auth": { 00:16:22.734 "state": "completed", 00:16:22.734 "digest": "sha256", 00:16:22.734 "dhgroup": "ffdhe6144" 00:16:22.734 } 00:16:22.734 } 00:16:22.734 ]' 00:16:22.734 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:22.734 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:22.734 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:22.734 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:22.734 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:22.734 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:16:22.734 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.734 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.993 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWY3YjM2MzU0NmMxY2ZmYzUwMTI3MTg3OTYwODEzZDk1MDNlMWQ1ZGViNDdiMDU3MzgzMzExYzE4MDA3ODgwMEnG/WY=: 00:16:22.993 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZWY3YjM2MzU0NmMxY2ZmYzUwMTI3MTg3OTYwODEzZDk1MDNlMWQ1ZGViNDdiMDU3MzgzMzExYzE4MDA3ODgwMEnG/WY=: 00:16:23.560 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.560 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:23.560 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.560 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.560 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.560 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:23.560 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:23.561 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:23.561 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:23.819 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:16:23.819 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:23.819 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:23.819 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:23.819 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:23.819 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.819 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.819 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
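Each pass then drives the same credentials through the kernel initiator, which is the nvme connect/disconnect traffic interleaved above. A sketch of that leg, with this run's generated secrets replaced by placeholder DHHC-1 strings (the trace passes the full base64 blobs, and for a key with no controller secret, such as key3 earlier, --dhchap-ctrl-secret is simply omitted):

# Kernel initiator: one I/O queue (-i 1), no ctrl-loss retry (-l 0), secrets
# supplied inline in nvme-cli's DHHC-1 format. <host secret> and <ctrl secret>
# are placeholders, not values from this run.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 \
    --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 \
    --dhchap-secret 'DHHC-1:02:<host secret>:' \
    --dhchap-ctrl-secret 'DHHC-1:01:<ctrl secret>:'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0

# With both legs verified, the host entry is removed so the next iteration
# (remaining keys, then the next dhgroup: ffdhe4096, ffdhe6144, ffdhe8192)
# starts from a clean subsystem.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562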
00:16:23.819 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.819 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.819 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.819 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.819 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.387 00:16:24.387 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:24.387 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.387 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:24.387 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.387 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.387 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.387 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.646 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.646 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:24.646 { 00:16:24.646 "cntlid": 41, 00:16:24.646 "qid": 0, 00:16:24.646 "state": "enabled", 00:16:24.646 "thread": "nvmf_tgt_poll_group_000", 00:16:24.646 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:24.646 "listen_address": { 00:16:24.646 "trtype": "TCP", 00:16:24.646 "adrfam": "IPv4", 00:16:24.646 "traddr": "10.0.0.2", 00:16:24.646 "trsvcid": "4420" 00:16:24.646 }, 00:16:24.646 "peer_address": { 00:16:24.646 "trtype": "TCP", 00:16:24.646 "adrfam": "IPv4", 00:16:24.646 "traddr": "10.0.0.1", 00:16:24.646 "trsvcid": "34952" 00:16:24.646 }, 00:16:24.646 "auth": { 00:16:24.646 "state": "completed", 00:16:24.646 "digest": "sha256", 00:16:24.646 "dhgroup": "ffdhe8192" 00:16:24.646 } 00:16:24.646 } 00:16:24.646 ]' 00:16:24.646 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:24.646 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:24.646 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:24.646 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:24.646 09:54:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:24.646 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.646 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.646 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.904 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGUwN2NmYzVjMDNhNmIxZWZkZjJkZTcyMjEzN2RhMzBkOWRjNTU3NTkxNDgwY2Mx6xbRdQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI2NDNjNDVlMDVhNzE4YTQyODIwM2ZkMGNiNWI5NTRiMzlmMGYyMmNkN2JkYjFhMDRkMGUzYmI2NDdkZDY2MPV4KLU=: 00:16:24.904 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGUwN2NmYzVjMDNhNmIxZWZkZjJkZTcyMjEzN2RhMzBkOWRjNTU3NTkxNDgwY2Mx6xbRdQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI2NDNjNDVlMDVhNzE4YTQyODIwM2ZkMGNiNWI5NTRiMzlmMGYyMmNkN2JkYjFhMDRkMGUzYmI2NDdkZDY2MPV4KLU=: 00:16:25.471 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.471 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.471 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:25.471 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.471 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.471 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.471 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:25.471 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:25.471 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:25.729 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:16:25.729 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.729 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:25.729 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:25.729 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:25.729 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.729 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.729 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.729 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.729 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.729 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.729 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.729 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:26.295 00:16:26.295 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:26.295 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:26.295 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.295 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.295 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.295 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.295 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.295 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.295 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.295 { 00:16:26.295 "cntlid": 43, 00:16:26.295 "qid": 0, 00:16:26.295 "state": "enabled", 00:16:26.295 "thread": "nvmf_tgt_poll_group_000", 00:16:26.295 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:26.295 "listen_address": { 00:16:26.295 "trtype": "TCP", 00:16:26.295 "adrfam": "IPv4", 00:16:26.295 "traddr": "10.0.0.2", 00:16:26.295 "trsvcid": "4420" 00:16:26.295 }, 00:16:26.295 "peer_address": { 00:16:26.295 "trtype": "TCP", 00:16:26.295 "adrfam": "IPv4", 00:16:26.295 "traddr": "10.0.0.1", 00:16:26.295 "trsvcid": "50186" 00:16:26.295 }, 00:16:26.295 "auth": { 00:16:26.295 "state": "completed", 00:16:26.295 "digest": "sha256", 00:16:26.295 "dhgroup": "ffdhe8192" 00:16:26.295 } 00:16:26.295 } 00:16:26.295 ]' 00:16:26.295 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:26.295 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:16:26.295 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:26.554 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:26.554 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:26.554 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.554 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.554 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.813 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTU4ODE1ODYxYTViZTJhMWY0NGFhMzVhN2VlNThiZDJviFy3: --dhchap-ctrl-secret DHHC-1:02:MzkwYzcxOGVmOTg5NWFiNTdiNTc5OTkyMDczMjA4ZTMyZmMwNzM1ZDUzNzc0Mjc3cMIesQ==: 00:16:26.813 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZTU4ODE1ODYxYTViZTJhMWY0NGFhMzVhN2VlNThiZDJviFy3: --dhchap-ctrl-secret DHHC-1:02:MzkwYzcxOGVmOTg5NWFiNTdiNTc5OTkyMDczMjA4ZTMyZmMwNzM1ZDUzNzc0Mjc3cMIesQ==: 00:16:27.380 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.380 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.380 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:27.380 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.380 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.380 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.380 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:27.380 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:27.380 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:27.380 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:16:27.380 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:27.380 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:27.380 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:27.380 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:27.380 09:54:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.380 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.380 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.380 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.380 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.380 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.380 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.380 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.948 00:16:27.948 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:27.948 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:27.948 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.206 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.206 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.206 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.206 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.206 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.206 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.206 { 00:16:28.206 "cntlid": 45, 00:16:28.206 "qid": 0, 00:16:28.206 "state": "enabled", 00:16:28.206 "thread": "nvmf_tgt_poll_group_000", 00:16:28.206 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:28.206 "listen_address": { 00:16:28.206 "trtype": "TCP", 00:16:28.207 "adrfam": "IPv4", 00:16:28.207 "traddr": "10.0.0.2", 00:16:28.207 "trsvcid": "4420" 00:16:28.207 }, 00:16:28.207 "peer_address": { 00:16:28.207 "trtype": "TCP", 00:16:28.207 "adrfam": "IPv4", 00:16:28.207 "traddr": "10.0.0.1", 00:16:28.207 "trsvcid": "50192" 00:16:28.207 }, 00:16:28.207 "auth": { 00:16:28.207 "state": "completed", 00:16:28.207 "digest": "sha256", 00:16:28.207 "dhgroup": "ffdhe8192" 00:16:28.207 } 00:16:28.207 } 00:16:28.207 ]' 00:16:28.207 
09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.207 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:28.207 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.207 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:28.207 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.207 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.207 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.207 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.465 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzNjYjUyNzNjYWQ5MGM3MzE4YzhiODAwMjFmOWY5YzJkNjNlYzhhNmYzNGU1Y2Y4xgl1vA==: --dhchap-ctrl-secret DHHC-1:01:MTQ4MjVkMzE4ZWYzMmQxYjdmZjE1OTE0Zjk2ODYyNjKEEPkO: 00:16:28.466 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzNjYjUyNzNjYWQ5MGM3MzE4YzhiODAwMjFmOWY5YzJkNjNlYzhhNmYzNGU1Y2Y4xgl1vA==: --dhchap-ctrl-secret DHHC-1:01:MTQ4MjVkMzE4ZWYzMmQxYjdmZjE1OTE0Zjk2ODYyNjKEEPkO: 00:16:29.033 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.033 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.033 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:29.033 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.033 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.033 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.033 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.033 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:29.033 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:29.292 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:16:29.292 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.292 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:29.292 09:54:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:29.292 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:29.292 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.292 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:16:29.292 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.292 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.292 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.292 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:29.292 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:29.292 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:29.859 00:16:29.859 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:29.859 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:29.859 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.859 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.859 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.859 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.859 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.859 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.118 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.118 { 00:16:30.118 "cntlid": 47, 00:16:30.118 "qid": 0, 00:16:30.118 "state": "enabled", 00:16:30.118 "thread": "nvmf_tgt_poll_group_000", 00:16:30.118 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:30.118 "listen_address": { 00:16:30.118 "trtype": "TCP", 00:16:30.118 "adrfam": "IPv4", 00:16:30.118 "traddr": "10.0.0.2", 00:16:30.118 "trsvcid": "4420" 00:16:30.118 }, 00:16:30.118 "peer_address": { 00:16:30.118 "trtype": "TCP", 00:16:30.118 "adrfam": "IPv4", 00:16:30.118 "traddr": "10.0.0.1", 00:16:30.118 "trsvcid": "50208" 00:16:30.118 }, 00:16:30.118 "auth": { 00:16:30.118 "state": "completed", 00:16:30.118 
"digest": "sha256", 00:16:30.118 "dhgroup": "ffdhe8192" 00:16:30.118 } 00:16:30.118 } 00:16:30.118 ]' 00:16:30.118 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.118 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:30.118 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.118 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:30.118 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.118 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.118 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.118 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.377 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWY3YjM2MzU0NmMxY2ZmYzUwMTI3MTg3OTYwODEzZDk1MDNlMWQ1ZGViNDdiMDU3MzgzMzExYzE4MDA3ODgwMEnG/WY=: 00:16:30.377 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZWY3YjM2MzU0NmMxY2ZmYzUwMTI3MTg3OTYwODEzZDk1MDNlMWQ1ZGViNDdiMDU3MzgzMzExYzE4MDA3ODgwMEnG/WY=: 00:16:30.944 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.944 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:30.944 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.944 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.944 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.944 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:30.944 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:30.944 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:30.944 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:30.944 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:31.203 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:16:31.203 09:54:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:31.203 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:31.203 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:31.203 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:31.203 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.203 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.203 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.203 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.203 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.203 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.203 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.203 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.462 00:16:31.462 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:31.462 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:31.462 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.462 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.462 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.462 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.462 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.462 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.462 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.462 { 00:16:31.462 "cntlid": 49, 00:16:31.462 "qid": 0, 00:16:31.462 "state": "enabled", 00:16:31.462 "thread": "nvmf_tgt_poll_group_000", 00:16:31.462 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:31.462 "listen_address": { 00:16:31.462 "trtype": "TCP", 00:16:31.462 "adrfam": "IPv4", 
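Each connect_authenticate pass in this log repeats one fixed sequence, first through the SPDK host stack and then through the kernel initiator. Condensed from the commands traced above, with the DHHC-1 secrets and host identity replaced by placeholders ($hostnqn, $hostid, $key and $ckey are stand-ins, not variables from the script):

    # 1. Restrict the host bdev driver to the combination under test.
    hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
    # 2. Register the host on the subsystem with the matching key pair.
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # 3. Attach through SPDK; this performs the DH-HMAC-CHAP handshake.
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # 4. Verify digest/dhgroup/state on the qpair (see the jq sketch above),
    #    then detach and redo the handshake from the kernel side.
    hostrpc bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    # 5. Deregister the host before the next key is tried.
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"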
00:16:31.462 "traddr": "10.0.0.2", 00:16:31.462 "trsvcid": "4420" 00:16:31.462 }, 00:16:31.462 "peer_address": { 00:16:31.462 "trtype": "TCP", 00:16:31.462 "adrfam": "IPv4", 00:16:31.462 "traddr": "10.0.0.1", 00:16:31.462 "trsvcid": "50220" 00:16:31.462 }, 00:16:31.462 "auth": { 00:16:31.462 "state": "completed", 00:16:31.462 "digest": "sha384", 00:16:31.462 "dhgroup": "null" 00:16:31.462 } 00:16:31.462 } 00:16:31.462 ]' 00:16:31.462 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.721 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:31.721 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:31.721 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:31.721 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:31.721 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.721 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.721 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.980 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGUwN2NmYzVjMDNhNmIxZWZkZjJkZTcyMjEzN2RhMzBkOWRjNTU3NTkxNDgwY2Mx6xbRdQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI2NDNjNDVlMDVhNzE4YTQyODIwM2ZkMGNiNWI5NTRiMzlmMGYyMmNkN2JkYjFhMDRkMGUzYmI2NDdkZDY2MPV4KLU=: 00:16:31.980 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGUwN2NmYzVjMDNhNmIxZWZkZjJkZTcyMjEzN2RhMzBkOWRjNTU3NTkxNDgwY2Mx6xbRdQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI2NDNjNDVlMDVhNzE4YTQyODIwM2ZkMGNiNWI5NTRiMzlmMGYyMmNkN2JkYjFhMDRkMGUzYmI2NDdkZDY2MPV4KLU=: 00:16:32.548 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.548 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.548 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:32.548 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.548 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.548 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.548 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.548 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:32.548 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:32.807 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:16:32.807 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:32.807 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:32.807 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:32.807 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:32.807 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.807 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.807 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.807 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.807 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.807 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.807 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.807 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.066 00:16:33.066 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:33.066 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.066 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.066 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.066 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.066 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.066 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.066 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.066 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:33.066 { 00:16:33.066 "cntlid": 51, 00:16:33.066 "qid": 0, 00:16:33.066 "state": "enabled", 
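The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion traced at target/auth.sh line 68 is why the key3 passes call nvmf_subsystem_add_host with --dhchap-key key3 alone: bash's ${var:+word} substitutes word only when var is set and non-empty, so an empty ckeys[3] yields an empty array and the bidirectional flags vanish. A standalone demonstration of the idiom (array contents are illustrative):

    # An empty entry contributes no arguments at all; a non-empty one
    # contributes the whole flag/value pair.
    ckeys=(c0 c1 c2 "")
    for i in "${!ckeys[@]}"; do
        ckey=(${ckeys[$i]:+--dhchap-ctrlr-key "ckey$i"})
        echo "key$i: ${ckey[*]:-<unidirectional>}"
    done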
00:16:33.066 "thread": "nvmf_tgt_poll_group_000", 00:16:33.066 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:33.066 "listen_address": { 00:16:33.066 "trtype": "TCP", 00:16:33.066 "adrfam": "IPv4", 00:16:33.066 "traddr": "10.0.0.2", 00:16:33.066 "trsvcid": "4420" 00:16:33.066 }, 00:16:33.066 "peer_address": { 00:16:33.066 "trtype": "TCP", 00:16:33.066 "adrfam": "IPv4", 00:16:33.066 "traddr": "10.0.0.1", 00:16:33.066 "trsvcid": "50254" 00:16:33.066 }, 00:16:33.066 "auth": { 00:16:33.066 "state": "completed", 00:16:33.066 "digest": "sha384", 00:16:33.066 "dhgroup": "null" 00:16:33.066 } 00:16:33.066 } 00:16:33.066 ]' 00:16:33.066 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:33.324 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:33.324 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:33.324 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:33.325 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:33.325 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.325 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.325 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.583 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTU4ODE1ODYxYTViZTJhMWY0NGFhMzVhN2VlNThiZDJviFy3: --dhchap-ctrl-secret DHHC-1:02:MzkwYzcxOGVmOTg5NWFiNTdiNTc5OTkyMDczMjA4ZTMyZmMwNzM1ZDUzNzc0Mjc3cMIesQ==: 00:16:33.583 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZTU4ODE1ODYxYTViZTJhMWY0NGFhMzVhN2VlNThiZDJviFy3: --dhchap-ctrl-secret DHHC-1:02:MzkwYzcxOGVmOTg5NWFiNTdiNTc5OTkyMDczMjA4ZTMyZmMwNzM1ZDUzNzc0Mjc3cMIesQ==: 00:16:34.150 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.150 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.150 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:34.150 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.150 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.150 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.150 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:34.150 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:16:34.150 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:34.150 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:16:34.150 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.150 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:34.150 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:34.150 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:34.150 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.150 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.150 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.150 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.150 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.150 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.150 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.150 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.718 00:16:34.718 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.718 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.718 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.718 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.718 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.718 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.718 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.718 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.718 09:54:44 
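On the kernel side, each re-check uses nvme-cli's DH-CHAP options with the same DHHC-1 strings the target was keyed with. A minimal sketch of the two variants seen in this run, with placeholder secrets (the <...> values are not valid keys; real DHHC-1 strings carry a base64 payload as in the trace, and $hostnqn/$hostid stand in for the UUID-based identity used above):

    # Unidirectional, as in the key3 passes: only the host proves itself.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret "DHHC-1:03:<host-secret>"
    # Bidirectional: --dhchap-ctrl-secret makes the controller prove itself too.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret "DHHC-1:01:<host-secret>" \
        --dhchap-ctrl-secret "DHHC-1:02:<ctrl-secret>"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0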
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.718 { 00:16:34.718 "cntlid": 53, 00:16:34.718 "qid": 0, 00:16:34.718 "state": "enabled", 00:16:34.718 "thread": "nvmf_tgt_poll_group_000", 00:16:34.718 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:34.718 "listen_address": { 00:16:34.718 "trtype": "TCP", 00:16:34.718 "adrfam": "IPv4", 00:16:34.718 "traddr": "10.0.0.2", 00:16:34.718 "trsvcid": "4420" 00:16:34.718 }, 00:16:34.718 "peer_address": { 00:16:34.718 "trtype": "TCP", 00:16:34.718 "adrfam": "IPv4", 00:16:34.718 "traddr": "10.0.0.1", 00:16:34.718 "trsvcid": "50274" 00:16:34.718 }, 00:16:34.718 "auth": { 00:16:34.718 "state": "completed", 00:16:34.718 "digest": "sha384", 00:16:34.718 "dhgroup": "null" 00:16:34.718 } 00:16:34.718 } 00:16:34.718 ]' 00:16:34.718 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.977 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:34.977 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:34.977 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:34.977 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:34.977 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.977 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.977 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.236 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzNjYjUyNzNjYWQ5MGM3MzE4YzhiODAwMjFmOWY5YzJkNjNlYzhhNmYzNGU1Y2Y4xgl1vA==: --dhchap-ctrl-secret DHHC-1:01:MTQ4MjVkMzE4ZWYzMmQxYjdmZjE1OTE0Zjk2ODYyNjKEEPkO: 00:16:35.236 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzNjYjUyNzNjYWQ5MGM3MzE4YzhiODAwMjFmOWY5YzJkNjNlYzhhNmYzNGU1Y2Y4xgl1vA==: --dhchap-ctrl-secret DHHC-1:01:MTQ4MjVkMzE4ZWYzMmQxYjdmZjE1OTE0Zjk2ODYyNjKEEPkO: 00:16:35.806 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.806 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:35.806 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.806 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.806 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.806 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:16:35.806 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:35.806 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:35.806 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:16:35.806 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:35.806 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:35.806 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:35.806 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:35.806 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.806 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:16:35.806 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.806 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.806 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.806 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:35.806 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:35.806 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:36.065 00:16:36.065 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.065 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.065 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.324 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.324 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.324 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.324 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.324 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.324 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.324 { 00:16:36.324 "cntlid": 55, 00:16:36.324 "qid": 0, 00:16:36.324 "state": "enabled", 00:16:36.324 "thread": "nvmf_tgt_poll_group_000", 00:16:36.324 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:36.324 "listen_address": { 00:16:36.324 "trtype": "TCP", 00:16:36.324 "adrfam": "IPv4", 00:16:36.324 "traddr": "10.0.0.2", 00:16:36.324 "trsvcid": "4420" 00:16:36.324 }, 00:16:36.324 "peer_address": { 00:16:36.324 "trtype": "TCP", 00:16:36.324 "adrfam": "IPv4", 00:16:36.324 "traddr": "10.0.0.1", 00:16:36.324 "trsvcid": "54740" 00:16:36.324 }, 00:16:36.324 "auth": { 00:16:36.324 "state": "completed", 00:16:36.324 "digest": "sha384", 00:16:36.324 "dhgroup": "null" 00:16:36.324 } 00:16:36.324 } 00:16:36.324 ]' 00:16:36.324 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.324 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:36.324 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.589 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:36.589 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:36.589 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.589 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.589 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.589 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWY3YjM2MzU0NmMxY2ZmYzUwMTI3MTg3OTYwODEzZDk1MDNlMWQ1ZGViNDdiMDU3MzgzMzExYzE4MDA3ODgwMEnG/WY=: 00:16:36.589 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZWY3YjM2MzU0NmMxY2ZmYzUwMTI3MTg3OTYwODEzZDk1MDNlMWQ1ZGViNDdiMDU3MzgzMzExYzE4MDA3ODgwMEnG/WY=: 00:16:37.155 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.414 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:37.414 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.414 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.414 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.414 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:37.414 09:54:46 
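At this point the middle loop advances: the null dhgroup has been exercised with all four keys, and the iterator moves on to ffdhe2048 under the same sha384 digest. The overall shape of the test, reconstructed from the for-loop xtrace entries at target/auth.sh lines 118-121 (only sha256/sha384 and null/ffdhe2048/ffdhe8192 are visible in this excerpt; the real arrays may hold more entries):

    # Exercise every digest with every DH group, and each pairing with
    # every key index (0-3 in this run; key3 has no controller key).
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                hostrpc bdev_nvme_set_options --dhchap-digests "$digest" \
                    --dhchap-dhgroups "$dhgroup"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done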
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.414 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:37.414 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:37.414 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:16:37.414 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.414 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:37.414 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:37.414 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:37.414 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.414 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.414 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.414 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.414 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.414 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.414 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.414 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.673 00:16:37.673 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:37.673 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:37.673 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.932 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.932 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.932 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:37.932 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.932 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.932 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.932 { 00:16:37.932 "cntlid": 57, 00:16:37.932 "qid": 0, 00:16:37.932 "state": "enabled", 00:16:37.933 "thread": "nvmf_tgt_poll_group_000", 00:16:37.933 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:37.933 "listen_address": { 00:16:37.933 "trtype": "TCP", 00:16:37.933 "adrfam": "IPv4", 00:16:37.933 "traddr": "10.0.0.2", 00:16:37.933 "trsvcid": "4420" 00:16:37.933 }, 00:16:37.933 "peer_address": { 00:16:37.933 "trtype": "TCP", 00:16:37.933 "adrfam": "IPv4", 00:16:37.933 "traddr": "10.0.0.1", 00:16:37.933 "trsvcid": "54764" 00:16:37.933 }, 00:16:37.933 "auth": { 00:16:37.933 "state": "completed", 00:16:37.933 "digest": "sha384", 00:16:37.933 "dhgroup": "ffdhe2048" 00:16:37.933 } 00:16:37.933 } 00:16:37.933 ]' 00:16:37.933 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:37.933 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:37.933 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.191 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:38.191 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.191 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.191 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.191 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.191 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGUwN2NmYzVjMDNhNmIxZWZkZjJkZTcyMjEzN2RhMzBkOWRjNTU3NTkxNDgwY2Mx6xbRdQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI2NDNjNDVlMDVhNzE4YTQyODIwM2ZkMGNiNWI5NTRiMzlmMGYyMmNkN2JkYjFhMDRkMGUzYmI2NDdkZDY2MPV4KLU=: 00:16:38.191 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGUwN2NmYzVjMDNhNmIxZWZkZjJkZTcyMjEzN2RhMzBkOWRjNTU3NTkxNDgwY2Mx6xbRdQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI2NDNjNDVlMDVhNzE4YTQyODIwM2ZkMGNiNWI5NTRiMzlmMGYyMmNkN2JkYjFhMDRkMGUzYmI2NDdkZDY2MPV4KLU=: 00:16:38.757 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.757 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.757 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:38.757 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.757 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.757 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.757 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.757 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:38.757 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:39.015 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:16:39.015 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.015 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:39.015 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:39.015 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:39.015 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.015 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:39.015 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.015 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.015 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.015 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:39.015 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:39.015 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:39.273 00:16:39.273 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.273 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.273 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.532 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.532 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.532 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.532 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.532 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.532 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.532 { 00:16:39.532 "cntlid": 59, 00:16:39.532 "qid": 0, 00:16:39.532 "state": "enabled", 00:16:39.532 "thread": "nvmf_tgt_poll_group_000", 00:16:39.532 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:39.532 "listen_address": { 00:16:39.532 "trtype": "TCP", 00:16:39.532 "adrfam": "IPv4", 00:16:39.532 "traddr": "10.0.0.2", 00:16:39.532 "trsvcid": "4420" 00:16:39.532 }, 00:16:39.532 "peer_address": { 00:16:39.532 "trtype": "TCP", 00:16:39.532 "adrfam": "IPv4", 00:16:39.532 "traddr": "10.0.0.1", 00:16:39.532 "trsvcid": "54800" 00:16:39.532 }, 00:16:39.532 "auth": { 00:16:39.532 "state": "completed", 00:16:39.532 "digest": "sha384", 00:16:39.532 "dhgroup": "ffdhe2048" 00:16:39.532 } 00:16:39.532 } 00:16:39.532 ]' 00:16:39.532 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.532 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:39.532 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.532 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:39.532 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.790 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.790 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.790 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.790 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTU4ODE1ODYxYTViZTJhMWY0NGFhMzVhN2VlNThiZDJviFy3: --dhchap-ctrl-secret DHHC-1:02:MzkwYzcxOGVmOTg5NWFiNTdiNTc5OTkyMDczMjA4ZTMyZmMwNzM1ZDUzNzc0Mjc3cMIesQ==: 00:16:39.790 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZTU4ODE1ODYxYTViZTJhMWY0NGFhMzVhN2VlNThiZDJviFy3: --dhchap-ctrl-secret DHHC-1:02:MzkwYzcxOGVmOTg5NWFiNTdiNTc5OTkyMDczMjA4ZTMyZmMwNzM1ZDUzNzc0Mjc3cMIesQ==: 00:16:40.356 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.356 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.356 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:40.356 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.356 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.615 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.615 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:40.615 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:40.615 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:40.615 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:16:40.615 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.615 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:40.615 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:40.615 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:40.615 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.615 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.615 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.615 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.615 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.615 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.615 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.615 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.874 00:16:40.874 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.874 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.874 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.133 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.133 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.133 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.133 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.133 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.133 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.133 { 00:16:41.133 "cntlid": 61, 00:16:41.133 "qid": 0, 00:16:41.133 "state": "enabled", 00:16:41.133 "thread": "nvmf_tgt_poll_group_000", 00:16:41.133 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:41.133 "listen_address": { 00:16:41.133 "trtype": "TCP", 00:16:41.133 "adrfam": "IPv4", 00:16:41.133 "traddr": "10.0.0.2", 00:16:41.133 "trsvcid": "4420" 00:16:41.133 }, 00:16:41.133 "peer_address": { 00:16:41.133 "trtype": "TCP", 00:16:41.133 "adrfam": "IPv4", 00:16:41.133 "traddr": "10.0.0.1", 00:16:41.133 "trsvcid": "54824" 00:16:41.133 }, 00:16:41.133 "auth": { 00:16:41.133 "state": "completed", 00:16:41.133 "digest": "sha384", 00:16:41.133 "dhgroup": "ffdhe2048" 00:16:41.133 } 00:16:41.133 } 00:16:41.133 ]' 00:16:41.133 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.133 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:41.133 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.133 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:41.133 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.391 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.391 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.391 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.392 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzNjYjUyNzNjYWQ5MGM3MzE4YzhiODAwMjFmOWY5YzJkNjNlYzhhNmYzNGU1Y2Y4xgl1vA==: --dhchap-ctrl-secret DHHC-1:01:MTQ4MjVkMzE4ZWYzMmQxYjdmZjE1OTE0Zjk2ODYyNjKEEPkO: 00:16:41.392 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzNjYjUyNzNjYWQ5MGM3MzE4YzhiODAwMjFmOWY5YzJkNjNlYzhhNmYzNGU1Y2Y4xgl1vA==: --dhchap-ctrl-secret DHHC-1:01:MTQ4MjVkMzE4ZWYzMmQxYjdmZjE1OTE0Zjk2ODYyNjKEEPkO: 00:16:41.959 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.959 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:41.959 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.959 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.959 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.959 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.959 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:41.959 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:42.217 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:16:42.217 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:42.217 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:42.217 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:42.217 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:42.217 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.217 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:16:42.218 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.218 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.218 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.218 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:42.218 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:42.218 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:42.475 00:16:42.475 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.475 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.475 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.732 09:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.732 09:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.732 09:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.732 09:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.732 09:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.732 09:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.732 { 00:16:42.732 "cntlid": 63, 00:16:42.732 "qid": 0, 00:16:42.732 "state": "enabled", 00:16:42.732 "thread": "nvmf_tgt_poll_group_000", 00:16:42.732 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:42.732 "listen_address": { 00:16:42.732 "trtype": "TCP", 00:16:42.732 "adrfam": "IPv4", 00:16:42.732 "traddr": "10.0.0.2", 00:16:42.732 "trsvcid": "4420" 00:16:42.732 }, 00:16:42.732 "peer_address": { 00:16:42.732 "trtype": "TCP", 00:16:42.732 "adrfam": "IPv4", 00:16:42.732 "traddr": "10.0.0.1", 00:16:42.732 "trsvcid": "54832" 00:16:42.732 }, 00:16:42.732 "auth": { 00:16:42.732 "state": "completed", 00:16:42.732 "digest": "sha384", 00:16:42.732 "dhgroup": "ffdhe2048" 00:16:42.732 } 00:16:42.732 } 00:16:42.732 ]' 00:16:42.732 09:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.732 09:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:42.732 09:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.732 09:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:42.732 09:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.732 09:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.732 09:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.732 09:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.990 09:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWY3YjM2MzU0NmMxY2ZmYzUwMTI3MTg3OTYwODEzZDk1MDNlMWQ1ZGViNDdiMDU3MzgzMzExYzE4MDA3ODgwMEnG/WY=: 00:16:42.990 09:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZWY3YjM2MzU0NmMxY2ZmYzUwMTI3MTg3OTYwODEzZDk1MDNlMWQ1ZGViNDdiMDU3MzgzMzExYzE4MDA3ODgwMEnG/WY=: 00:16:43.557 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:16:43.557 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.557 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:43.557 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.557 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.557 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.557 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:43.557 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.557 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:43.558 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:43.816 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:16:43.816 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.816 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:43.816 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:43.816 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:43.816 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.816 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.816 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.816 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.816 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.816 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.816 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.816 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.075 
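For reference, each connect_authenticate pass logged above drives one (digest, dhgroup, keyid) combination end to end. A minimal sketch of that cycle, assuming the target subsystem is already up, the DH-CHAP keys key0/ckey0 were already loaded into the keyring earlier in auth.sh (not shown here), $hostnqn holds the host uuid NQN seen in the log, and the rpc.py paths are shortened:

  # host side (-s /var/tmp/host.sock): pin the digest/dhgroup under test
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
  # target side (default RPC socket): allow the host with bidirectional keys
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # attach a controller through the authenticated path, then tear down
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"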
00:16:44.075 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:44.075 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:44.075 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.334 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.334 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.334 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.334 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.334 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.334 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:44.334 { 00:16:44.334 "cntlid": 65, 00:16:44.334 "qid": 0, 00:16:44.334 "state": "enabled", 00:16:44.334 "thread": "nvmf_tgt_poll_group_000", 00:16:44.334 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:44.334 "listen_address": { 00:16:44.334 "trtype": "TCP", 00:16:44.334 "adrfam": "IPv4", 00:16:44.334 "traddr": "10.0.0.2", 00:16:44.334 "trsvcid": "4420" 00:16:44.334 }, 00:16:44.334 "peer_address": { 00:16:44.334 "trtype": "TCP", 00:16:44.334 "adrfam": "IPv4", 00:16:44.334 "traddr": "10.0.0.1", 00:16:44.334 "trsvcid": "54860" 00:16:44.334 }, 00:16:44.334 "auth": { 00:16:44.334 "state": "completed", 00:16:44.334 "digest": "sha384", 00:16:44.334 "dhgroup": "ffdhe3072" 00:16:44.334 } 00:16:44.334 } 00:16:44.334 ]' 00:16:44.334 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.334 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:44.334 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.334 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:44.334 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.592 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.592 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.592 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.592 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGUwN2NmYzVjMDNhNmIxZWZkZjJkZTcyMjEzN2RhMzBkOWRjNTU3NTkxNDgwY2Mx6xbRdQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI2NDNjNDVlMDVhNzE4YTQyODIwM2ZkMGNiNWI5NTRiMzlmMGYyMmNkN2JkYjFhMDRkMGUzYmI2NDdkZDY2MPV4KLU=: 00:16:44.592 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGUwN2NmYzVjMDNhNmIxZWZkZjJkZTcyMjEzN2RhMzBkOWRjNTU3NTkxNDgwY2Mx6xbRdQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI2NDNjNDVlMDVhNzE4YTQyODIwM2ZkMGNiNWI5NTRiMzlmMGYyMmNkN2JkYjFhMDRkMGUzYmI2NDdkZDY2MPV4KLU=: 00:16:45.159 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.159 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.159 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:45.159 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.159 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.159 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.159 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.159 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:45.159 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:45.418 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:16:45.418 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.418 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:45.418 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:45.418 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:45.418 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.418 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.418 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.418 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.418 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.418 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.418 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.418 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.677 00:16:45.677 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.677 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.677 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.935 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.935 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.935 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.935 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.935 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.935 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.935 { 00:16:45.935 "cntlid": 67, 00:16:45.935 "qid": 0, 00:16:45.935 "state": "enabled", 00:16:45.935 "thread": "nvmf_tgt_poll_group_000", 00:16:45.935 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:45.935 "listen_address": { 00:16:45.935 "trtype": "TCP", 00:16:45.935 "adrfam": "IPv4", 00:16:45.935 "traddr": "10.0.0.2", 00:16:45.935 "trsvcid": "4420" 00:16:45.935 }, 00:16:45.935 "peer_address": { 00:16:45.935 "trtype": "TCP", 00:16:45.935 "adrfam": "IPv4", 00:16:45.935 "traddr": "10.0.0.1", 00:16:45.935 "trsvcid": "56872" 00:16:45.935 }, 00:16:45.935 "auth": { 00:16:45.935 "state": "completed", 00:16:45.935 "digest": "sha384", 00:16:45.935 "dhgroup": "ffdhe3072" 00:16:45.935 } 00:16:45.935 } 00:16:45.935 ]' 00:16:45.935 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.935 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:45.935 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.935 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:45.935 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.935 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.194 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.194 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.194 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTU4ODE1ODYxYTViZTJhMWY0NGFhMzVhN2VlNThiZDJviFy3: --dhchap-ctrl-secret 
DHHC-1:02:MzkwYzcxOGVmOTg5NWFiNTdiNTc5OTkyMDczMjA4ZTMyZmMwNzM1ZDUzNzc0Mjc3cMIesQ==: 00:16:46.194 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZTU4ODE1ODYxYTViZTJhMWY0NGFhMzVhN2VlNThiZDJviFy3: --dhchap-ctrl-secret DHHC-1:02:MzkwYzcxOGVmOTg5NWFiNTdiNTc5OTkyMDczMjA4ZTMyZmMwNzM1ZDUzNzc0Mjc3cMIesQ==: 00:16:46.761 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.761 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.761 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:46.761 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.761 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.761 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.761 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.761 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:46.761 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:47.020 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:16:47.020 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:47.020 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:47.020 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:47.020 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:47.020 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.020 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.020 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.020 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.020 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.020 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.020 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.020 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.279 00:16:47.279 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.279 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.279 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.537 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.537 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.537 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.537 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.537 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.537 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.537 { 00:16:47.537 "cntlid": 69, 00:16:47.537 "qid": 0, 00:16:47.537 "state": "enabled", 00:16:47.537 "thread": "nvmf_tgt_poll_group_000", 00:16:47.537 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:47.537 "listen_address": { 00:16:47.537 "trtype": "TCP", 00:16:47.537 "adrfam": "IPv4", 00:16:47.537 "traddr": "10.0.0.2", 00:16:47.537 "trsvcid": "4420" 00:16:47.537 }, 00:16:47.537 "peer_address": { 00:16:47.537 "trtype": "TCP", 00:16:47.537 "adrfam": "IPv4", 00:16:47.537 "traddr": "10.0.0.1", 00:16:47.537 "trsvcid": "56900" 00:16:47.537 }, 00:16:47.537 "auth": { 00:16:47.537 "state": "completed", 00:16:47.537 "digest": "sha384", 00:16:47.537 "dhgroup": "ffdhe3072" 00:16:47.537 } 00:16:47.537 } 00:16:47.537 ]' 00:16:47.537 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.537 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:47.537 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.537 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:47.537 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.537 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.537 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.537 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:16:47.796 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzNjYjUyNzNjYWQ5MGM3MzE4YzhiODAwMjFmOWY5YzJkNjNlYzhhNmYzNGU1Y2Y4xgl1vA==: --dhchap-ctrl-secret DHHC-1:01:MTQ4MjVkMzE4ZWYzMmQxYjdmZjE1OTE0Zjk2ODYyNjKEEPkO: 00:16:47.796 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzNjYjUyNzNjYWQ5MGM3MzE4YzhiODAwMjFmOWY5YzJkNjNlYzhhNmYzNGU1Y2Y4xgl1vA==: --dhchap-ctrl-secret DHHC-1:01:MTQ4MjVkMzE4ZWYzMmQxYjdmZjE1OTE0Zjk2ODYyNjKEEPkO: 00:16:48.363 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.363 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.363 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:48.363 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.363 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.363 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.363 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.363 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:48.363 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:48.622 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:16:48.622 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:48.622 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:48.622 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:48.622 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:48.622 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.622 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:16:48.622 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.622 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.622 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.622 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
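Note the ckey assignment logged at target/auth.sh@68: connect_authenticate makes the controller key optional through bash's :+ expansion, which is why the key3 passes above call nvmf_subsystem_add_host with --dhchap-key only (unidirectional auth). A standalone sketch of that idiom, with placeholder key names and assuming, as the log suggests, that the ckeys entry for index 3 is empty:

  ckeys=("c0" "c1" "c2" "")   # assumption: no controller key registered for key3
  keyid=3
  # :+ expands to the flag pair only when ckeys[keyid] is set and non-empty;
  # unquoted, it word-splits into two array elements, or into nothing at all
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  echo "${#ckey[@]} extra args"   # 0 for keyid=3, 2 for keyid=0..2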
00:16:48.622 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:48.622 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:48.881 00:16:48.881 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:48.881 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.881 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.139 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.139 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.139 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.139 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.139 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.139 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.139 { 00:16:49.139 "cntlid": 71, 00:16:49.139 "qid": 0, 00:16:49.139 "state": "enabled", 00:16:49.139 "thread": "nvmf_tgt_poll_group_000", 00:16:49.139 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:49.139 "listen_address": { 00:16:49.139 "trtype": "TCP", 00:16:49.139 "adrfam": "IPv4", 00:16:49.139 "traddr": "10.0.0.2", 00:16:49.139 "trsvcid": "4420" 00:16:49.139 }, 00:16:49.139 "peer_address": { 00:16:49.139 "trtype": "TCP", 00:16:49.139 "adrfam": "IPv4", 00:16:49.139 "traddr": "10.0.0.1", 00:16:49.139 "trsvcid": "56918" 00:16:49.139 }, 00:16:49.139 "auth": { 00:16:49.139 "state": "completed", 00:16:49.139 "digest": "sha384", 00:16:49.139 "dhgroup": "ffdhe3072" 00:16:49.139 } 00:16:49.139 } 00:16:49.139 ]' 00:16:49.139 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.139 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:49.139 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.139 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:49.139 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.139 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.139 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.139 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.398 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWY3YjM2MzU0NmMxY2ZmYzUwMTI3MTg3OTYwODEzZDk1MDNlMWQ1ZGViNDdiMDU3MzgzMzExYzE4MDA3ODgwMEnG/WY=: 00:16:49.398 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZWY3YjM2MzU0NmMxY2ZmYzUwMTI3MTg3OTYwODEzZDk1MDNlMWQ1ZGViNDdiMDU3MzgzMzExYzE4MDA3ODgwMEnG/WY=: 00:16:49.964 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.964 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.964 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:49.964 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.964 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.964 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.964 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:49.964 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.964 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:49.964 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:50.223 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:16:50.223 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.223 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:50.223 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:50.223 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:50.223 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.223 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.223 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.223 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.223 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
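The qpair dumps above are what the script then asserts on (target/auth.sh@75-77). The same check can be reproduced by hand; a sketch using the expected values from the last dump above:

  digest=sha384 dhgroup=ffdhe3072   # expected for the pass just logged
  scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 > qpairs.json
  # the admin qpair must report a completed DH-HMAC-CHAP negotiation
  [[ $(jq -r '.[0].auth.digest'  qpairs.json) == "$digest"  ]]
  [[ $(jq -r '.[0].auth.dhgroup' qpairs.json) == "$dhgroup" ]]
  [[ $(jq -r '.[0].auth.state'   qpairs.json) == completed  ]]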
00:16:50.223 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.223 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.223 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.481 00:16:50.481 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.481 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:50.481 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.739 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.739 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.739 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.739 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.739 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.739 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:50.739 { 00:16:50.739 "cntlid": 73, 00:16:50.739 "qid": 0, 00:16:50.739 "state": "enabled", 00:16:50.739 "thread": "nvmf_tgt_poll_group_000", 00:16:50.739 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:50.739 "listen_address": { 00:16:50.739 "trtype": "TCP", 00:16:50.739 "adrfam": "IPv4", 00:16:50.739 "traddr": "10.0.0.2", 00:16:50.739 "trsvcid": "4420" 00:16:50.739 }, 00:16:50.739 "peer_address": { 00:16:50.739 "trtype": "TCP", 00:16:50.739 "adrfam": "IPv4", 00:16:50.739 "traddr": "10.0.0.1", 00:16:50.739 "trsvcid": "56960" 00:16:50.739 }, 00:16:50.739 "auth": { 00:16:50.739 "state": "completed", 00:16:50.739 "digest": "sha384", 00:16:50.739 "dhgroup": "ffdhe4096" 00:16:50.739 } 00:16:50.739 } 00:16:50.739 ]' 00:16:50.739 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:50.739 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:50.739 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:50.739 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:50.739 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:50.739 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.739 
09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.739 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.998 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGUwN2NmYzVjMDNhNmIxZWZkZjJkZTcyMjEzN2RhMzBkOWRjNTU3NTkxNDgwY2Mx6xbRdQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI2NDNjNDVlMDVhNzE4YTQyODIwM2ZkMGNiNWI5NTRiMzlmMGYyMmNkN2JkYjFhMDRkMGUzYmI2NDdkZDY2MPV4KLU=: 00:16:50.998 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGUwN2NmYzVjMDNhNmIxZWZkZjJkZTcyMjEzN2RhMzBkOWRjNTU3NTkxNDgwY2Mx6xbRdQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI2NDNjNDVlMDVhNzE4YTQyODIwM2ZkMGNiNWI5NTRiMzlmMGYyMmNkN2JkYjFhMDRkMGUzYmI2NDdkZDY2MPV4KLU=: 00:16:51.565 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.565 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.565 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:51.565 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.565 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.565 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.565 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:51.565 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:51.565 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:51.824 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:16:51.824 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:51.824 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:51.824 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:51.824 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:51.824 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.824 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.824 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.824 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.824 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.824 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.824 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.824 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.082 00:16:52.082 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.082 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.082 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.341 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.341 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.341 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.341 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.341 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.341 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.341 { 00:16:52.341 "cntlid": 75, 00:16:52.341 "qid": 0, 00:16:52.341 "state": "enabled", 00:16:52.341 "thread": "nvmf_tgt_poll_group_000", 00:16:52.341 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:52.341 "listen_address": { 00:16:52.341 "trtype": "TCP", 00:16:52.341 "adrfam": "IPv4", 00:16:52.341 "traddr": "10.0.0.2", 00:16:52.341 "trsvcid": "4420" 00:16:52.341 }, 00:16:52.341 "peer_address": { 00:16:52.341 "trtype": "TCP", 00:16:52.341 "adrfam": "IPv4", 00:16:52.341 "traddr": "10.0.0.1", 00:16:52.341 "trsvcid": "56980" 00:16:52.341 }, 00:16:52.341 "auth": { 00:16:52.341 "state": "completed", 00:16:52.341 "digest": "sha384", 00:16:52.341 "dhgroup": "ffdhe4096" 00:16:52.341 } 00:16:52.341 } 00:16:52.341 ]' 00:16:52.341 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.341 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:52.341 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.341 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:16:52.341 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.341 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.341 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.341 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.599 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTU4ODE1ODYxYTViZTJhMWY0NGFhMzVhN2VlNThiZDJviFy3: --dhchap-ctrl-secret DHHC-1:02:MzkwYzcxOGVmOTg5NWFiNTdiNTc5OTkyMDczMjA4ZTMyZmMwNzM1ZDUzNzc0Mjc3cMIesQ==: 00:16:52.599 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZTU4ODE1ODYxYTViZTJhMWY0NGFhMzVhN2VlNThiZDJviFy3: --dhchap-ctrl-secret DHHC-1:02:MzkwYzcxOGVmOTg5NWFiNTdiNTc5OTkyMDczMjA4ZTMyZmMwNzM1ZDUzNzc0Mjc3cMIesQ==: 00:16:53.166 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.166 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.166 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:53.166 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.166 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.166 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.166 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:53.166 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:53.166 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:53.425 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:16:53.425 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:53.425 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:53.425 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:53.425 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:53.425 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.425 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.425 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.425 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.425 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.425 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.425 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.425 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.683 00:16:53.683 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:53.683 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:53.683 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.942 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.942 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.942 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.942 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.942 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.942 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:53.942 { 00:16:53.942 "cntlid": 77, 00:16:53.942 "qid": 0, 00:16:53.942 "state": "enabled", 00:16:53.942 "thread": "nvmf_tgt_poll_group_000", 00:16:53.942 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:53.942 "listen_address": { 00:16:53.942 "trtype": "TCP", 00:16:53.942 "adrfam": "IPv4", 00:16:53.942 "traddr": "10.0.0.2", 00:16:53.942 "trsvcid": "4420" 00:16:53.942 }, 00:16:53.942 "peer_address": { 00:16:53.942 "trtype": "TCP", 00:16:53.942 "adrfam": "IPv4", 00:16:53.942 "traddr": "10.0.0.1", 00:16:53.942 "trsvcid": "56990" 00:16:53.942 }, 00:16:53.942 "auth": { 00:16:53.942 "state": "completed", 00:16:53.942 "digest": "sha384", 00:16:53.942 "dhgroup": "ffdhe4096" 00:16:53.942 } 00:16:53.942 } 00:16:53.942 ]' 00:16:53.942 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:53.942 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:53.942 09:55:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:53.942 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:53.942 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:53.942 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.942 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.942 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.201 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzNjYjUyNzNjYWQ5MGM3MzE4YzhiODAwMjFmOWY5YzJkNjNlYzhhNmYzNGU1Y2Y4xgl1vA==: --dhchap-ctrl-secret DHHC-1:01:MTQ4MjVkMzE4ZWYzMmQxYjdmZjE1OTE0Zjk2ODYyNjKEEPkO: 00:16:54.201 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzNjYjUyNzNjYWQ5MGM3MzE4YzhiODAwMjFmOWY5YzJkNjNlYzhhNmYzNGU1Y2Y4xgl1vA==: --dhchap-ctrl-secret DHHC-1:01:MTQ4MjVkMzE4ZWYzMmQxYjdmZjE1OTE0Zjk2ODYyNjKEEPkO: 00:16:54.870 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.870 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.870 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:54.870 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.870 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.870 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.870 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:54.870 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:54.870 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:55.129 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:16:55.129 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:55.129 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:55.129 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:55.129 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:55.129 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.129 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:16:55.129 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.129 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.129 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.129 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:55.129 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:55.129 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:55.387 00:16:55.387 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.387 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:55.387 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.387 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.387 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.387 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.387 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.646 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.646 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:55.646 { 00:16:55.646 "cntlid": 79, 00:16:55.646 "qid": 0, 00:16:55.646 "state": "enabled", 00:16:55.646 "thread": "nvmf_tgt_poll_group_000", 00:16:55.646 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:55.646 "listen_address": { 00:16:55.646 "trtype": "TCP", 00:16:55.646 "adrfam": "IPv4", 00:16:55.646 "traddr": "10.0.0.2", 00:16:55.646 "trsvcid": "4420" 00:16:55.646 }, 00:16:55.646 "peer_address": { 00:16:55.646 "trtype": "TCP", 00:16:55.646 "adrfam": "IPv4", 00:16:55.646 "traddr": "10.0.0.1", 00:16:55.646 "trsvcid": "46944" 00:16:55.646 }, 00:16:55.646 "auth": { 00:16:55.646 "state": "completed", 00:16:55.646 "digest": "sha384", 00:16:55.646 "dhgroup": "ffdhe4096" 00:16:55.646 } 00:16:55.646 } 00:16:55.646 ]' 00:16:55.646 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:55.646 09:55:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:55.646 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:55.646 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:55.646 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:55.646 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.646 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.646 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.904 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWY3YjM2MzU0NmMxY2ZmYzUwMTI3MTg3OTYwODEzZDk1MDNlMWQ1ZGViNDdiMDU3MzgzMzExYzE4MDA3ODgwMEnG/WY=: 00:16:55.904 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZWY3YjM2MzU0NmMxY2ZmYzUwMTI3MTg3OTYwODEzZDk1MDNlMWQ1ZGViNDdiMDU3MzgzMzExYzE4MDA3ODgwMEnG/WY=: 00:16:56.470 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.470 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:56.470 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.470 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.470 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.470 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:56.470 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.470 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:56.470 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:56.729 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:16:56.729 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:56.729 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:56.729 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:56.729 09:55:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:56.729 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.729 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.729 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.729 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.729 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.729 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.729 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.729 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.987 00:16:56.987 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.987 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:56.987 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.245 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.245 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.245 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.245 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.245 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.245 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.245 { 00:16:57.245 "cntlid": 81, 00:16:57.245 "qid": 0, 00:16:57.245 "state": "enabled", 00:16:57.245 "thread": "nvmf_tgt_poll_group_000", 00:16:57.245 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:57.245 "listen_address": { 00:16:57.245 "trtype": "TCP", 00:16:57.245 "adrfam": "IPv4", 00:16:57.245 "traddr": "10.0.0.2", 00:16:57.245 "trsvcid": "4420" 00:16:57.245 }, 00:16:57.245 "peer_address": { 00:16:57.245 "trtype": "TCP", 00:16:57.245 "adrfam": "IPv4", 00:16:57.245 "traddr": "10.0.0.1", 00:16:57.245 "trsvcid": "46970" 00:16:57.245 }, 00:16:57.245 "auth": { 00:16:57.245 "state": "completed", 00:16:57.245 "digest": 
"sha384", 00:16:57.245 "dhgroup": "ffdhe6144" 00:16:57.245 } 00:16:57.245 } 00:16:57.245 ]' 00:16:57.245 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:57.245 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:57.245 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.245 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:57.246 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.246 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.246 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.246 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.504 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGUwN2NmYzVjMDNhNmIxZWZkZjJkZTcyMjEzN2RhMzBkOWRjNTU3NTkxNDgwY2Mx6xbRdQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI2NDNjNDVlMDVhNzE4YTQyODIwM2ZkMGNiNWI5NTRiMzlmMGYyMmNkN2JkYjFhMDRkMGUzYmI2NDdkZDY2MPV4KLU=: 00:16:57.504 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGUwN2NmYzVjMDNhNmIxZWZkZjJkZTcyMjEzN2RhMzBkOWRjNTU3NTkxNDgwY2Mx6xbRdQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI2NDNjNDVlMDVhNzE4YTQyODIwM2ZkMGNiNWI5NTRiMzlmMGYyMmNkN2JkYjFhMDRkMGUzYmI2NDdkZDY2MPV4KLU=: 00:16:58.071 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.071 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:58.071 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.071 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.071 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.071 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.071 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:58.071 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:58.329 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:16:58.330 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.330 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:58.330 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:58.330 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:58.330 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.330 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.330 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.330 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.330 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.330 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.330 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.330 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.588 00:16:58.588 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:58.847 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:58.847 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.847 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.847 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.847 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.847 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.847 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.847 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:58.847 { 00:16:58.847 "cntlid": 83, 00:16:58.847 "qid": 0, 00:16:58.847 "state": "enabled", 00:16:58.847 "thread": "nvmf_tgt_poll_group_000", 00:16:58.847 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:58.847 "listen_address": { 00:16:58.847 "trtype": "TCP", 00:16:58.847 "adrfam": "IPv4", 00:16:58.847 "traddr": "10.0.0.2", 00:16:58.847 
"trsvcid": "4420" 00:16:58.847 }, 00:16:58.847 "peer_address": { 00:16:58.847 "trtype": "TCP", 00:16:58.847 "adrfam": "IPv4", 00:16:58.847 "traddr": "10.0.0.1", 00:16:58.847 "trsvcid": "47006" 00:16:58.847 }, 00:16:58.847 "auth": { 00:16:58.847 "state": "completed", 00:16:58.847 "digest": "sha384", 00:16:58.847 "dhgroup": "ffdhe6144" 00:16:58.847 } 00:16:58.847 } 00:16:58.847 ]' 00:16:58.847 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.847 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:58.847 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.105 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:59.105 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.105 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.105 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.105 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.364 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTU4ODE1ODYxYTViZTJhMWY0NGFhMzVhN2VlNThiZDJviFy3: --dhchap-ctrl-secret DHHC-1:02:MzkwYzcxOGVmOTg5NWFiNTdiNTc5OTkyMDczMjA4ZTMyZmMwNzM1ZDUzNzc0Mjc3cMIesQ==: 00:16:59.364 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZTU4ODE1ODYxYTViZTJhMWY0NGFhMzVhN2VlNThiZDJviFy3: --dhchap-ctrl-secret DHHC-1:02:MzkwYzcxOGVmOTg5NWFiNTdiNTc5OTkyMDczMjA4ZTMyZmMwNzM1ZDUzNzc0Mjc3cMIesQ==: 00:16:59.931 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.931 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.931 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:59.931 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.931 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.931 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.931 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:59.931 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:59.931 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:00.190 
09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:17:00.190 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.190 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:00.190 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:00.190 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:00.190 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.190 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.190 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.190 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.190 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.190 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.190 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.190 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.448 00:17:00.448 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:00.448 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:00.448 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.706 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.706 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.706 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.706 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.706 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.706 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.706 { 00:17:00.706 "cntlid": 85, 00:17:00.706 "qid": 0, 00:17:00.706 "state": "enabled", 00:17:00.706 "thread": "nvmf_tgt_poll_group_000", 00:17:00.706 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:00.706 "listen_address": { 00:17:00.706 "trtype": "TCP", 00:17:00.706 "adrfam": "IPv4", 00:17:00.706 "traddr": "10.0.0.2", 00:17:00.706 "trsvcid": "4420" 00:17:00.706 }, 00:17:00.706 "peer_address": { 00:17:00.706 "trtype": "TCP", 00:17:00.706 "adrfam": "IPv4", 00:17:00.706 "traddr": "10.0.0.1", 00:17:00.706 "trsvcid": "47042" 00:17:00.706 }, 00:17:00.706 "auth": { 00:17:00.706 "state": "completed", 00:17:00.706 "digest": "sha384", 00:17:00.706 "dhgroup": "ffdhe6144" 00:17:00.706 } 00:17:00.706 } 00:17:00.706 ]' 00:17:00.706 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:00.706 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:00.706 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:00.706 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:00.706 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:00.706 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.706 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.706 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.964 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzNjYjUyNzNjYWQ5MGM3MzE4YzhiODAwMjFmOWY5YzJkNjNlYzhhNmYzNGU1Y2Y4xgl1vA==: --dhchap-ctrl-secret DHHC-1:01:MTQ4MjVkMzE4ZWYzMmQxYjdmZjE1OTE0Zjk2ODYyNjKEEPkO: 00:17:00.965 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzNjYjUyNzNjYWQ5MGM3MzE4YzhiODAwMjFmOWY5YzJkNjNlYzhhNmYzNGU1Y2Y4xgl1vA==: --dhchap-ctrl-secret DHHC-1:01:MTQ4MjVkMzE4ZWYzMmQxYjdmZjE1OTE0Zjk2ODYyNjKEEPkO: 00:17:01.531 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.531 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.531 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:01.531 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.531 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.531 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.531 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:01.531 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:01.531 09:55:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:01.788 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:17:01.788 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:01.788 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:01.788 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:01.788 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:01.788 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.788 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:17:01.788 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.788 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.788 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.788 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:01.788 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:01.788 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:02.047 00:17:02.047 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:02.047 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:02.047 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.305 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.306 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.306 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.306 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.306 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.306 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:02.306 { 00:17:02.306 "cntlid": 87, 
00:17:02.306 "qid": 0, 00:17:02.306 "state": "enabled", 00:17:02.306 "thread": "nvmf_tgt_poll_group_000", 00:17:02.306 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:02.306 "listen_address": { 00:17:02.306 "trtype": "TCP", 00:17:02.306 "adrfam": "IPv4", 00:17:02.306 "traddr": "10.0.0.2", 00:17:02.306 "trsvcid": "4420" 00:17:02.306 }, 00:17:02.306 "peer_address": { 00:17:02.306 "trtype": "TCP", 00:17:02.306 "adrfam": "IPv4", 00:17:02.306 "traddr": "10.0.0.1", 00:17:02.306 "trsvcid": "47070" 00:17:02.306 }, 00:17:02.306 "auth": { 00:17:02.306 "state": "completed", 00:17:02.306 "digest": "sha384", 00:17:02.306 "dhgroup": "ffdhe6144" 00:17:02.306 } 00:17:02.306 } 00:17:02.306 ]' 00:17:02.306 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:02.306 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:02.306 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:02.564 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:02.564 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:02.564 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.564 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.564 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.564 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWY3YjM2MzU0NmMxY2ZmYzUwMTI3MTg3OTYwODEzZDk1MDNlMWQ1ZGViNDdiMDU3MzgzMzExYzE4MDA3ODgwMEnG/WY=: 00:17:02.564 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZWY3YjM2MzU0NmMxY2ZmYzUwMTI3MTg3OTYwODEzZDk1MDNlMWQ1ZGViNDdiMDU3MzgzMzExYzE4MDA3ODgwMEnG/WY=: 00:17:03.131 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.131 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.131 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:03.131 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.131 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.389 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.389 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:03.389 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:03.389 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:03.389 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:03.389 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:17:03.389 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:03.389 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:03.389 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:03.389 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:03.389 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.389 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.389 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.389 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.389 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.389 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.389 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.389 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.956 00:17:03.956 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:03.956 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:03.956 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.215 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.215 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.215 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.215 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.215 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.215 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:04.215 { 00:17:04.215 "cntlid": 89, 00:17:04.215 "qid": 0, 00:17:04.215 "state": "enabled", 00:17:04.215 "thread": "nvmf_tgt_poll_group_000", 00:17:04.215 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:04.215 "listen_address": { 00:17:04.215 "trtype": "TCP", 00:17:04.215 "adrfam": "IPv4", 00:17:04.215 "traddr": "10.0.0.2", 00:17:04.215 "trsvcid": "4420" 00:17:04.215 }, 00:17:04.215 "peer_address": { 00:17:04.215 "trtype": "TCP", 00:17:04.215 "adrfam": "IPv4", 00:17:04.215 "traddr": "10.0.0.1", 00:17:04.215 "trsvcid": "47098" 00:17:04.215 }, 00:17:04.215 "auth": { 00:17:04.215 "state": "completed", 00:17:04.215 "digest": "sha384", 00:17:04.215 "dhgroup": "ffdhe8192" 00:17:04.215 } 00:17:04.215 } 00:17:04.215 ]' 00:17:04.215 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:04.215 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:04.215 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:04.215 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:04.215 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:04.215 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.215 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.215 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.473 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGUwN2NmYzVjMDNhNmIxZWZkZjJkZTcyMjEzN2RhMzBkOWRjNTU3NTkxNDgwY2Mx6xbRdQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI2NDNjNDVlMDVhNzE4YTQyODIwM2ZkMGNiNWI5NTRiMzlmMGYyMmNkN2JkYjFhMDRkMGUzYmI2NDdkZDY2MPV4KLU=: 00:17:04.473 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGUwN2NmYzVjMDNhNmIxZWZkZjJkZTcyMjEzN2RhMzBkOWRjNTU3NTkxNDgwY2Mx6xbRdQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI2NDNjNDVlMDVhNzE4YTQyODIwM2ZkMGNiNWI5NTRiMzlmMGYyMmNkN2JkYjFhMDRkMGUzYmI2NDdkZDY2MPV4KLU=: 00:17:05.040 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.040 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.040 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:05.040 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.040 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.040 09:55:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.040 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:05.040 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:05.040 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:05.299 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:17:05.299 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:05.299 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:05.299 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:05.299 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:05.299 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.299 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.299 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.299 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.299 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.299 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.299 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.299 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.866 00:17:05.866 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:05.866 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:05.866 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.866 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.867 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:05.867 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.867 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.867 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.867 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:05.867 { 00:17:05.867 "cntlid": 91, 00:17:05.867 "qid": 0, 00:17:05.867 "state": "enabled", 00:17:05.867 "thread": "nvmf_tgt_poll_group_000", 00:17:05.867 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:05.867 "listen_address": { 00:17:05.867 "trtype": "TCP", 00:17:05.867 "adrfam": "IPv4", 00:17:05.867 "traddr": "10.0.0.2", 00:17:05.867 "trsvcid": "4420" 00:17:05.867 }, 00:17:05.867 "peer_address": { 00:17:05.867 "trtype": "TCP", 00:17:05.867 "adrfam": "IPv4", 00:17:05.867 "traddr": "10.0.0.1", 00:17:05.867 "trsvcid": "58248" 00:17:05.867 }, 00:17:05.867 "auth": { 00:17:05.867 "state": "completed", 00:17:05.867 "digest": "sha384", 00:17:05.867 "dhgroup": "ffdhe8192" 00:17:05.867 } 00:17:05.867 } 00:17:05.867 ]' 00:17:05.867 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:06.125 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:06.125 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:06.125 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:06.125 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:06.125 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.125 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.125 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.383 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTU4ODE1ODYxYTViZTJhMWY0NGFhMzVhN2VlNThiZDJviFy3: --dhchap-ctrl-secret DHHC-1:02:MzkwYzcxOGVmOTg5NWFiNTdiNTc5OTkyMDczMjA4ZTMyZmMwNzM1ZDUzNzc0Mjc3cMIesQ==: 00:17:06.383 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZTU4ODE1ODYxYTViZTJhMWY0NGFhMzVhN2VlNThiZDJviFy3: --dhchap-ctrl-secret DHHC-1:02:MzkwYzcxOGVmOTg5NWFiNTdiNTc5OTkyMDczMjA4ZTMyZmMwNzM1ZDUzNzc0Mjc3cMIesQ==: 00:17:06.951 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.951 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:06.951 09:55:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.951 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.951 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.951 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:06.951 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:06.951 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:07.209 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:17:07.209 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:07.209 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:07.209 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:07.209 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:07.210 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.210 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.210 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.210 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.210 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.210 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.210 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.210 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.469 00:17:07.469 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:07.469 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:07.469 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.727 09:55:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.727 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.727 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.727 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.727 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.727 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.727 { 00:17:07.727 "cntlid": 93, 00:17:07.727 "qid": 0, 00:17:07.727 "state": "enabled", 00:17:07.727 "thread": "nvmf_tgt_poll_group_000", 00:17:07.727 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:07.727 "listen_address": { 00:17:07.727 "trtype": "TCP", 00:17:07.727 "adrfam": "IPv4", 00:17:07.727 "traddr": "10.0.0.2", 00:17:07.727 "trsvcid": "4420" 00:17:07.727 }, 00:17:07.727 "peer_address": { 00:17:07.727 "trtype": "TCP", 00:17:07.727 "adrfam": "IPv4", 00:17:07.727 "traddr": "10.0.0.1", 00:17:07.727 "trsvcid": "58270" 00:17:07.727 }, 00:17:07.727 "auth": { 00:17:07.727 "state": "completed", 00:17:07.727 "digest": "sha384", 00:17:07.727 "dhgroup": "ffdhe8192" 00:17:07.727 } 00:17:07.727 } 00:17:07.727 ]' 00:17:07.727 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.727 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:07.727 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:07.986 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:07.986 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.986 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.986 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.986 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.244 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzNjYjUyNzNjYWQ5MGM3MzE4YzhiODAwMjFmOWY5YzJkNjNlYzhhNmYzNGU1Y2Y4xgl1vA==: --dhchap-ctrl-secret DHHC-1:01:MTQ4MjVkMzE4ZWYzMmQxYjdmZjE1OTE0Zjk2ODYyNjKEEPkO: 00:17:08.245 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzNjYjUyNzNjYWQ5MGM3MzE4YzhiODAwMjFmOWY5YzJkNjNlYzhhNmYzNGU1Y2Y4xgl1vA==: --dhchap-ctrl-secret DHHC-1:01:MTQ4MjVkMzE4ZWYzMmQxYjdmZjE1OTE0Zjk2ODYyNjKEEPkO: 00:17:08.812 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.812 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.812 09:55:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:08.812 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.812 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.812 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.812 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:08.812 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:08.812 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:08.812 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:17:08.812 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:08.812 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:08.812 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:08.812 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:08.812 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.812 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:17:08.812 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.812 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.812 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.812 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:08.812 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:08.813 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:09.380 00:17:09.380 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:09.380 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:09.380 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.638 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.638 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.638 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.638 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.638 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.638 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:09.638 { 00:17:09.638 "cntlid": 95, 00:17:09.638 "qid": 0, 00:17:09.638 "state": "enabled", 00:17:09.638 "thread": "nvmf_tgt_poll_group_000", 00:17:09.638 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:09.638 "listen_address": { 00:17:09.638 "trtype": "TCP", 00:17:09.638 "adrfam": "IPv4", 00:17:09.638 "traddr": "10.0.0.2", 00:17:09.638 "trsvcid": "4420" 00:17:09.638 }, 00:17:09.638 "peer_address": { 00:17:09.638 "trtype": "TCP", 00:17:09.638 "adrfam": "IPv4", 00:17:09.638 "traddr": "10.0.0.1", 00:17:09.638 "trsvcid": "58296" 00:17:09.638 }, 00:17:09.638 "auth": { 00:17:09.638 "state": "completed", 00:17:09.638 "digest": "sha384", 00:17:09.638 "dhgroup": "ffdhe8192" 00:17:09.638 } 00:17:09.638 } 00:17:09.638 ]' 00:17:09.639 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:09.639 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:09.639 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:09.639 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:09.639 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:09.639 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.639 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.639 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.897 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWY3YjM2MzU0NmMxY2ZmYzUwMTI3MTg3OTYwODEzZDk1MDNlMWQ1ZGViNDdiMDU3MzgzMzExYzE4MDA3ODgwMEnG/WY=: 00:17:09.897 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZWY3YjM2MzU0NmMxY2ZmYzUwMTI3MTg3OTYwODEzZDk1MDNlMWQ1ZGViNDdiMDU3MzgzMzExYzE4MDA3ODgwMEnG/WY=: 00:17:10.464 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.464 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.464 09:55:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:10.464 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.464 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.464 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.464 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:10.464 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:10.464 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:10.464 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:10.464 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:10.723 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:17:10.723 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:10.723 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:10.723 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:10.723 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:10.723 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.723 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.723 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.723 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.723 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.723 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.723 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.723 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.982 00:17:10.982 
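Each block of trace above is one iteration of the same cycle: pin the host-side initiator to a single digest and DH group, register the host NQN on the target with the key under test, then attach a controller, which is what actually drives the DH-HMAC-CHAP exchange. A condensed sketch of that cycle, paraphrased from the commands in this log, with $digest, $dhgroup, $keyid and $hostnqn standing in for the loop values; the hostrpc socket matches the log, while rpc_cmd reaching the target app on its default socket is an assumption:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }  # SPDK host (initiator) app
    rpc_cmd() { "$rpc" "$@"; }                        # target app, default socket (assumed)

    # auth.sh@121: restrict the initiator to one digest/dhgroup pair per pass.
    hostrpc bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # auth.sh@70: grant the host NQN the key under test on the target.
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        "$hostnqn" --dhchap-key "key$keyid"

    # auth.sh@60: attaching the controller performs the authentication.
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key "key$keyid"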
09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:10.982 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:10.982 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.240 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.240 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.240 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.240 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.240 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.240 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:11.240 { 00:17:11.240 "cntlid": 97, 00:17:11.240 "qid": 0, 00:17:11.240 "state": "enabled", 00:17:11.240 "thread": "nvmf_tgt_poll_group_000", 00:17:11.240 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:11.240 "listen_address": { 00:17:11.240 "trtype": "TCP", 00:17:11.240 "adrfam": "IPv4", 00:17:11.240 "traddr": "10.0.0.2", 00:17:11.240 "trsvcid": "4420" 00:17:11.240 }, 00:17:11.240 "peer_address": { 00:17:11.240 "trtype": "TCP", 00:17:11.240 "adrfam": "IPv4", 00:17:11.240 "traddr": "10.0.0.1", 00:17:11.240 "trsvcid": "58322" 00:17:11.240 }, 00:17:11.240 "auth": { 00:17:11.240 "state": "completed", 00:17:11.240 "digest": "sha512", 00:17:11.240 "dhgroup": "null" 00:17:11.240 } 00:17:11.240 } 00:17:11.240 ]' 00:17:11.240 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:11.240 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:11.240 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:11.240 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:11.240 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:11.240 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.240 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.240 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.499 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGUwN2NmYzVjMDNhNmIxZWZkZjJkZTcyMjEzN2RhMzBkOWRjNTU3NTkxNDgwY2Mx6xbRdQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI2NDNjNDVlMDVhNzE4YTQyODIwM2ZkMGNiNWI5NTRiMzlmMGYyMmNkN2JkYjFhMDRkMGUzYmI2NDdkZDY2MPV4KLU=: 00:17:11.499 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 
801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGUwN2NmYzVjMDNhNmIxZWZkZjJkZTcyMjEzN2RhMzBkOWRjNTU3NTkxNDgwY2Mx6xbRdQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI2NDNjNDVlMDVhNzE4YTQyODIwM2ZkMGNiNWI5NTRiMzlmMGYyMmNkN2JkYjFhMDRkMGUzYmI2NDdkZDY2MPV4KLU=: 00:17:12.066 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.066 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.066 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:12.066 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.066 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.066 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.066 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:12.066 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:12.066 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:12.325 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:17:12.325 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:12.325 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:12.325 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:12.325 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:12.325 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.325 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.325 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.325 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.325 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.325 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.325 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.325 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.584 00:17:12.584 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:12.584 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.584 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:12.842 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.842 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.842 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.842 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.842 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.842 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:12.842 { 00:17:12.842 "cntlid": 99, 00:17:12.842 "qid": 0, 00:17:12.842 "state": "enabled", 00:17:12.842 "thread": "nvmf_tgt_poll_group_000", 00:17:12.842 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:12.842 "listen_address": { 00:17:12.842 "trtype": "TCP", 00:17:12.842 "adrfam": "IPv4", 00:17:12.842 "traddr": "10.0.0.2", 00:17:12.842 "trsvcid": "4420" 00:17:12.842 }, 00:17:12.842 "peer_address": { 00:17:12.842 "trtype": "TCP", 00:17:12.842 "adrfam": "IPv4", 00:17:12.842 "traddr": "10.0.0.1", 00:17:12.842 "trsvcid": "58344" 00:17:12.842 }, 00:17:12.842 "auth": { 00:17:12.842 "state": "completed", 00:17:12.842 "digest": "sha512", 00:17:12.842 "dhgroup": "null" 00:17:12.842 } 00:17:12.842 } 00:17:12.842 ]' 00:17:12.842 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:12.842 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:12.842 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:12.842 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:12.842 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:12.842 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.842 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.842 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.101 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTU4ODE1ODYxYTViZTJhMWY0NGFhMzVhN2VlNThiZDJviFy3: --dhchap-ctrl-secret DHHC-1:02:MzkwYzcxOGVmOTg5NWFiNTdiNTc5OTkyMDczMjA4ZTMyZmMwNzM1ZDUzNzc0Mjc3cMIesQ==: 00:17:13.101 09:55:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZTU4ODE1ODYxYTViZTJhMWY0NGFhMzVhN2VlNThiZDJviFy3: --dhchap-ctrl-secret DHHC-1:02:MzkwYzcxOGVmOTg5NWFiNTdiNTc5OTkyMDczMjA4ZTMyZmMwNzM1ZDUzNzc0Mjc3cMIesQ==: 00:17:13.668 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.668 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.668 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:13.668 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.668 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.668 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.668 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:13.668 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:13.668 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:13.927 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:17:13.927 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:13.927 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:13.927 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:13.927 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:13.927 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.927 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.927 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.927 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.927 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.927 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.927 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
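Each pass also authenticates through the kernel initiator via nvme-cli, as in the nvme connect entries above. In the DHHC-1 secret representation the second field records the transformation applied to the retained key (00 for none, 01/02/03 for SHA-256/384/512), so the DHHC-1:02:... secrets in this log are SHA-384-transformed keys. A sketch with the secrets redacted; the flags mirror the log, $hostnqn and $hostid stand in for the UUID-based values used here, and -l 0 (ctrl-loss-tmo of zero) keeps a failed handshake from being retried:

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret 'DHHC-1:02:<host secret, redacted>' \
        --dhchap-ctrl-secret 'DHHC-1:01:<controller secret, redacted>'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0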
00:17:13.927 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.185 00:17:14.186 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:14.186 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:14.186 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.444 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.444 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.444 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.444 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.444 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.444 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:14.444 { 00:17:14.444 "cntlid": 101, 00:17:14.444 "qid": 0, 00:17:14.444 "state": "enabled", 00:17:14.444 "thread": "nvmf_tgt_poll_group_000", 00:17:14.444 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:14.444 "listen_address": { 00:17:14.444 "trtype": "TCP", 00:17:14.444 "adrfam": "IPv4", 00:17:14.444 "traddr": "10.0.0.2", 00:17:14.444 "trsvcid": "4420" 00:17:14.444 }, 00:17:14.444 "peer_address": { 00:17:14.444 "trtype": "TCP", 00:17:14.444 "adrfam": "IPv4", 00:17:14.444 "traddr": "10.0.0.1", 00:17:14.444 "trsvcid": "58362" 00:17:14.444 }, 00:17:14.444 "auth": { 00:17:14.444 "state": "completed", 00:17:14.444 "digest": "sha512", 00:17:14.444 "dhgroup": "null" 00:17:14.444 } 00:17:14.444 } 00:17:14.444 ]' 00:17:14.444 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:14.444 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:14.444 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:14.444 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:14.444 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:14.444 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.444 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.444 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.702 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:YzNjYjUyNzNjYWQ5MGM3MzE4YzhiODAwMjFmOWY5YzJkNjNlYzhhNmYzNGU1Y2Y4xgl1vA==: --dhchap-ctrl-secret DHHC-1:01:MTQ4MjVkMzE4ZWYzMmQxYjdmZjE1OTE0Zjk2ODYyNjKEEPkO: 00:17:14.702 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzNjYjUyNzNjYWQ5MGM3MzE4YzhiODAwMjFmOWY5YzJkNjNlYzhhNmYzNGU1Y2Y4xgl1vA==: --dhchap-ctrl-secret DHHC-1:01:MTQ4MjVkMzE4ZWYzMmQxYjdmZjE1OTE0Zjk2ODYyNjKEEPkO: 00:17:15.269 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.269 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:15.269 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.269 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.269 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.269 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:15.269 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:15.269 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:15.527 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:17:15.527 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:15.527 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:15.527 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:15.527 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:15.527 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.527 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:17:15.527 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.527 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.527 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.527 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:15.527 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:15.527 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:15.785 00:17:15.785 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:15.785 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:15.785 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.785 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.785 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.785 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.785 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.785 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.043 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:16.043 { 00:17:16.043 "cntlid": 103, 00:17:16.043 "qid": 0, 00:17:16.043 "state": "enabled", 00:17:16.043 "thread": "nvmf_tgt_poll_group_000", 00:17:16.043 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:16.043 "listen_address": { 00:17:16.043 "trtype": "TCP", 00:17:16.043 "adrfam": "IPv4", 00:17:16.043 "traddr": "10.0.0.2", 00:17:16.043 "trsvcid": "4420" 00:17:16.043 }, 00:17:16.043 "peer_address": { 00:17:16.043 "trtype": "TCP", 00:17:16.043 "adrfam": "IPv4", 00:17:16.043 "traddr": "10.0.0.1", 00:17:16.043 "trsvcid": "57656" 00:17:16.043 }, 00:17:16.043 "auth": { 00:17:16.043 "state": "completed", 00:17:16.043 "digest": "sha512", 00:17:16.043 "dhgroup": "null" 00:17:16.043 } 00:17:16.043 } 00:17:16.043 ]' 00:17:16.043 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:16.043 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:16.043 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:16.043 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:16.043 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:16.043 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.044 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.044 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.302 09:55:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWY3YjM2MzU0NmMxY2ZmYzUwMTI3MTg3OTYwODEzZDk1MDNlMWQ1ZGViNDdiMDU3MzgzMzExYzE4MDA3ODgwMEnG/WY=: 00:17:16.302 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZWY3YjM2MzU0NmMxY2ZmYzUwMTI3MTg3OTYwODEzZDk1MDNlMWQ1ZGViNDdiMDU3MzgzMzExYzE4MDA3ODgwMEnG/WY=: 00:17:16.868 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.868 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.868 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:16.868 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.868 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.869 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.869 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:16.869 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:16.869 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:16.869 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:16.869 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:17:16.869 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:16.869 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:16.869 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:16.869 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:16.869 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.869 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.869 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.869 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.126 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.126 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
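After every attach the trace runs the same assertions (auth.sh@73 through @78 in the markers): the controller must be visible on the host side, and the target's view of the newly created queue pair must report the negotiated digest and DH group with authentication completed. Paraphrased, with $digest and $dhgroup as the current loop values:

    # Host side: the attached controller shows up under its bdev name.
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    # Target side: the qpair created by the attach carries the auth summary.
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]

    # Tear down before the next combination.
    hostrpc bdev_nvme_detach_controller nvme0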
00:17:17.126 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.127 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.127 00:17:17.127 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:17.127 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:17.127 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.385 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.385 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.385 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.385 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.385 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.385 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:17.385 { 00:17:17.385 "cntlid": 105, 00:17:17.385 "qid": 0, 00:17:17.385 "state": "enabled", 00:17:17.385 "thread": "nvmf_tgt_poll_group_000", 00:17:17.385 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:17.385 "listen_address": { 00:17:17.385 "trtype": "TCP", 00:17:17.385 "adrfam": "IPv4", 00:17:17.385 "traddr": "10.0.0.2", 00:17:17.385 "trsvcid": "4420" 00:17:17.385 }, 00:17:17.385 "peer_address": { 00:17:17.385 "trtype": "TCP", 00:17:17.385 "adrfam": "IPv4", 00:17:17.385 "traddr": "10.0.0.1", 00:17:17.385 "trsvcid": "57684" 00:17:17.385 }, 00:17:17.385 "auth": { 00:17:17.385 "state": "completed", 00:17:17.385 "digest": "sha512", 00:17:17.385 "dhgroup": "ffdhe2048" 00:17:17.385 } 00:17:17.385 } 00:17:17.385 ]' 00:17:17.385 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:17.385 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:17.385 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:17.643 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:17.643 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:17.643 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.643 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.643 09:55:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.901 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGUwN2NmYzVjMDNhNmIxZWZkZjJkZTcyMjEzN2RhMzBkOWRjNTU3NTkxNDgwY2Mx6xbRdQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI2NDNjNDVlMDVhNzE4YTQyODIwM2ZkMGNiNWI5NTRiMzlmMGYyMmNkN2JkYjFhMDRkMGUzYmI2NDdkZDY2MPV4KLU=: 00:17:17.901 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGUwN2NmYzVjMDNhNmIxZWZkZjJkZTcyMjEzN2RhMzBkOWRjNTU3NTkxNDgwY2Mx6xbRdQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI2NDNjNDVlMDVhNzE4YTQyODIwM2ZkMGNiNWI5NTRiMzlmMGYyMmNkN2JkYjFhMDRkMGUzYmI2NDdkZDY2MPV4KLU=: 00:17:18.468 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.468 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.468 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:18.468 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.468 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.468 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.468 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:18.468 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:18.468 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:18.468 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:17:18.468 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:18.468 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:18.468 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:18.468 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:18.468 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.468 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.468 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.468 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:18.468 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.468 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.468 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.468 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.727 00:17:18.727 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:18.727 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:18.727 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.985 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.985 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.985 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.985 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.985 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.985 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.985 { 00:17:18.985 "cntlid": 107, 00:17:18.985 "qid": 0, 00:17:18.985 "state": "enabled", 00:17:18.985 "thread": "nvmf_tgt_poll_group_000", 00:17:18.985 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:18.985 "listen_address": { 00:17:18.985 "trtype": "TCP", 00:17:18.985 "adrfam": "IPv4", 00:17:18.985 "traddr": "10.0.0.2", 00:17:18.985 "trsvcid": "4420" 00:17:18.985 }, 00:17:18.985 "peer_address": { 00:17:18.985 "trtype": "TCP", 00:17:18.985 "adrfam": "IPv4", 00:17:18.985 "traddr": "10.0.0.1", 00:17:18.985 "trsvcid": "57706" 00:17:18.985 }, 00:17:18.985 "auth": { 00:17:18.985 "state": "completed", 00:17:18.985 "digest": "sha512", 00:17:18.985 "dhgroup": "ffdhe2048" 00:17:18.985 } 00:17:18.985 } 00:17:18.985 ]' 00:17:18.985 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.985 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:18.985 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:19.245 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:19.245 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:17:19.245 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.245 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.245 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.245 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTU4ODE1ODYxYTViZTJhMWY0NGFhMzVhN2VlNThiZDJviFy3: --dhchap-ctrl-secret DHHC-1:02:MzkwYzcxOGVmOTg5NWFiNTdiNTc5OTkyMDczMjA4ZTMyZmMwNzM1ZDUzNzc0Mjc3cMIesQ==: 00:17:19.245 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZTU4ODE1ODYxYTViZTJhMWY0NGFhMzVhN2VlNThiZDJviFy3: --dhchap-ctrl-secret DHHC-1:02:MzkwYzcxOGVmOTg5NWFiNTdiNTc5OTkyMDczMjA4ZTMyZmMwNzM1ZDUzNzc0Mjc3cMIesQ==: 00:17:19.811 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.811 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.811 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:19.811 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.811 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.070 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.070 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:20.070 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:20.070 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:20.070 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:17:20.070 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:20.070 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:20.070 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:20.070 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:20.070 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.070 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
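The @118/@119/@120 markers show three nested loops driving the whole run: digests outermost, then DH groups, then key ids. Within this excerpt the digest advances from sha384 to sha512 and the DH group from ffdhe8192 through null and ffdhe2048, which implies the shape below; the full array contents are defined earlier in auth.sh and are assumed here:

    digests=(sha256 sha384 sha512)                                      # assumed
    dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)   # assumed
    for digest in "${digests[@]}"; do          # auth.sh@118
        for dhgroup in "${dhgroups[@]}"; do    # auth.sh@119
            for keyid in "${!keys[@]}"; do     # auth.sh@120
                hostrpc bdev_nvme_set_options \
                    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done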
00:17:20.070 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.070 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.070 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.070 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.070 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.070 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.329 00:17:20.329 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:20.329 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:20.329 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.588 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.588 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.588 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.588 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.588 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.588 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:20.588 { 00:17:20.588 "cntlid": 109, 00:17:20.588 "qid": 0, 00:17:20.588 "state": "enabled", 00:17:20.588 "thread": "nvmf_tgt_poll_group_000", 00:17:20.588 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:20.588 "listen_address": { 00:17:20.588 "trtype": "TCP", 00:17:20.588 "adrfam": "IPv4", 00:17:20.588 "traddr": "10.0.0.2", 00:17:20.588 "trsvcid": "4420" 00:17:20.588 }, 00:17:20.588 "peer_address": { 00:17:20.588 "trtype": "TCP", 00:17:20.588 "adrfam": "IPv4", 00:17:20.588 "traddr": "10.0.0.1", 00:17:20.588 "trsvcid": "57740" 00:17:20.588 }, 00:17:20.588 "auth": { 00:17:20.588 "state": "completed", 00:17:20.588 "digest": "sha512", 00:17:20.588 "dhgroup": "ffdhe2048" 00:17:20.588 } 00:17:20.588 } 00:17:20.588 ]' 00:17:20.588 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:20.588 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:20.588 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:20.588 09:55:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:20.846 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:20.846 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.846 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.846 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.846 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzNjYjUyNzNjYWQ5MGM3MzE4YzhiODAwMjFmOWY5YzJkNjNlYzhhNmYzNGU1Y2Y4xgl1vA==: --dhchap-ctrl-secret DHHC-1:01:MTQ4MjVkMzE4ZWYzMmQxYjdmZjE1OTE0Zjk2ODYyNjKEEPkO: 00:17:20.846 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzNjYjUyNzNjYWQ5MGM3MzE4YzhiODAwMjFmOWY5YzJkNjNlYzhhNmYzNGU1Y2Y4xgl1vA==: --dhchap-ctrl-secret DHHC-1:01:MTQ4MjVkMzE4ZWYzMmQxYjdmZjE1OTE0Zjk2ODYyNjKEEPkO: 00:17:21.413 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.672 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.672 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:21.672 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.672 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.672 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.672 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:21.672 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:21.672 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:21.672 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:17:21.672 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:21.672 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:21.672 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:21.672 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:21.672 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.672 09:55:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:17:21.672 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.672 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.672 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.672 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:21.672 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:21.672 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:21.930 00:17:21.930 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:21.930 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:21.930 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.189 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.189 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.189 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.189 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.189 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.189 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:22.189 { 00:17:22.189 "cntlid": 111, 00:17:22.189 "qid": 0, 00:17:22.189 "state": "enabled", 00:17:22.189 "thread": "nvmf_tgt_poll_group_000", 00:17:22.189 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:22.189 "listen_address": { 00:17:22.189 "trtype": "TCP", 00:17:22.189 "adrfam": "IPv4", 00:17:22.189 "traddr": "10.0.0.2", 00:17:22.189 "trsvcid": "4420" 00:17:22.189 }, 00:17:22.189 "peer_address": { 00:17:22.189 "trtype": "TCP", 00:17:22.189 "adrfam": "IPv4", 00:17:22.189 "traddr": "10.0.0.1", 00:17:22.189 "trsvcid": "57756" 00:17:22.189 }, 00:17:22.189 "auth": { 00:17:22.189 "state": "completed", 00:17:22.189 "digest": "sha512", 00:17:22.189 "dhgroup": "ffdhe2048" 00:17:22.189 } 00:17:22.189 } 00:17:22.189 ]' 00:17:22.189 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:22.189 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:22.189 
09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:22.189 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:22.189 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:22.447 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.447 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.447 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.447 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWY3YjM2MzU0NmMxY2ZmYzUwMTI3MTg3OTYwODEzZDk1MDNlMWQ1ZGViNDdiMDU3MzgzMzExYzE4MDA3ODgwMEnG/WY=: 00:17:22.447 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZWY3YjM2MzU0NmMxY2ZmYzUwMTI3MTg3OTYwODEzZDk1MDNlMWQ1ZGViNDdiMDU3MzgzMzExYzE4MDA3ODgwMEnG/WY=: 00:17:23.014 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.271 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:23.271 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.271 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.271 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.271 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:23.271 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:23.271 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:23.271 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:23.271 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:17:23.271 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:23.271 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:23.271 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:23.271 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:23.271 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.272 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.272 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.272 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.272 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.272 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.272 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.272 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.530 00:17:23.530 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:23.530 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.530 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:23.788 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.788 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.788 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.788 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.788 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.788 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:23.788 { 00:17:23.788 "cntlid": 113, 00:17:23.788 "qid": 0, 00:17:23.788 "state": "enabled", 00:17:23.788 "thread": "nvmf_tgt_poll_group_000", 00:17:23.788 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:23.788 "listen_address": { 00:17:23.788 "trtype": "TCP", 00:17:23.788 "adrfam": "IPv4", 00:17:23.788 "traddr": "10.0.0.2", 00:17:23.788 "trsvcid": "4420" 00:17:23.788 }, 00:17:23.788 "peer_address": { 00:17:23.788 "trtype": "TCP", 00:17:23.788 "adrfam": "IPv4", 00:17:23.788 "traddr": "10.0.0.1", 00:17:23.788 "trsvcid": "57798" 00:17:23.788 }, 00:17:23.788 "auth": { 00:17:23.788 "state": "completed", 00:17:23.788 "digest": "sha512", 00:17:23.788 "dhgroup": "ffdhe3072" 00:17:23.788 } 00:17:23.788 } 00:17:23.788 ]' 00:17:23.788 09:55:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:23.788 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:23.788 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:24.047 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:24.047 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:24.047 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.047 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.047 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.305 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGUwN2NmYzVjMDNhNmIxZWZkZjJkZTcyMjEzN2RhMzBkOWRjNTU3NTkxNDgwY2Mx6xbRdQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI2NDNjNDVlMDVhNzE4YTQyODIwM2ZkMGNiNWI5NTRiMzlmMGYyMmNkN2JkYjFhMDRkMGUzYmI2NDdkZDY2MPV4KLU=: 00:17:24.305 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGUwN2NmYzVjMDNhNmIxZWZkZjJkZTcyMjEzN2RhMzBkOWRjNTU3NTkxNDgwY2Mx6xbRdQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI2NDNjNDVlMDVhNzE4YTQyODIwM2ZkMGNiNWI5NTRiMzlmMGYyMmNkN2JkYjFhMDRkMGUzYmI2NDdkZDY2MPV4KLU=: 00:17:24.872 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.873 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.873 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:24.873 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.873 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.873 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.873 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:24.873 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:24.873 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:24.873 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:17:24.873 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:24.873 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512
00:17:24.873 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:17:24.873 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:17:24.873 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:24.873 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:24.873 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:24.873 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:24.873 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:24.873 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:24.873 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:24.873 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:25.131
00:17:25.131 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:25.131 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:25.131 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:25.390 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:25.390 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:25.390 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:25.390 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:25.390 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:25.390 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:25.390 {
00:17:25.390 "cntlid": 115,
00:17:25.390 "qid": 0,
00:17:25.390 "state": "enabled",
00:17:25.390 "thread": "nvmf_tgt_poll_group_000",
00:17:25.390 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562",
00:17:25.390 "listen_address": {
00:17:25.390 "trtype": "TCP",
00:17:25.390 "adrfam": "IPv4",
00:17:25.390 "traddr": "10.0.0.2",
00:17:25.390 "trsvcid": "4420"
00:17:25.390 },
00:17:25.390 "peer_address": {
00:17:25.390 "trtype": "TCP",
00:17:25.390 "adrfam": "IPv4",
00:17:25.390 "traddr": "10.0.0.1",
00:17:25.390 "trsvcid": "59028"
00:17:25.390 },
00:17:25.390 "auth": {
00:17:25.390 "state": "completed",
00:17:25.390 "digest": "sha512",
00:17:25.390 "dhgroup": "ffdhe3072"
00:17:25.390 }
00:17:25.390 }
00:17:25.390 ]'
00:17:25.390 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:25.390 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:25.390 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:25.649 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:17:25.649 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:25.649 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:25.649 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:25.649 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:25.649 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTU4ODE1ODYxYTViZTJhMWY0NGFhMzVhN2VlNThiZDJviFy3: --dhchap-ctrl-secret DHHC-1:02:MzkwYzcxOGVmOTg5NWFiNTdiNTc5OTkyMDczMjA4ZTMyZmMwNzM1ZDUzNzc0Mjc3cMIesQ==:
00:17:25.649 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZTU4ODE1ODYxYTViZTJhMWY0NGFhMzVhN2VlNThiZDJviFy3: --dhchap-ctrl-secret DHHC-1:02:MzkwYzcxOGVmOTg5NWFiNTdiNTc5OTkyMDczMjA4ZTMyZmMwNzM1ZDUzNzc0Mjc3cMIesQ==:
00:17:26.215 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:26.474 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:26.474 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
00:17:26.474 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:26.474 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:26.474 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:26.474 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:26.474 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:17:26.474 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:17:26.474 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2
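The round for sha512/ffdhe3072 with key1 is complete at this point, and the trace is entering connect_authenticate for key2. Stripped of the xtrace noise, every one of these rounds is the same short sequence. The following is a condensed sketch, not the test script itself: the rpc.py path, addresses, port, and NQNs are copied verbatim from the trace above, the bare target-side rpc.py calls stand in for the trace's rpc_cmd wrapper (whose RPC socket is not visible in this log), and key2/ckey2 name key files loaded earlier in the test.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
# Pin the host to exactly one digest/DH-group pair so the negotiation is deterministic.
"$RPC" -s "$HOST_SOCK" bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
# Register the host NQN on the subsystem with the key pair under test.
"$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2
# Attach a controller from the host side; this is where DH-HMAC-CHAP actually runs.
"$RPC" -s "$HOST_SOCK" bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
# Ask the target what was negotiated on the new qpair.
"$RPC" nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'   # expect: completed
# Tear down so the next key/group combination starts from a clean slate.
"$RPC" -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0
"$RPC" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"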
00:17:26.474 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:26.474 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:26.474 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:26.474 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:26.474 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.474 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.474 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.474 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.474 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.474 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.474 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.474 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.732 00:17:26.732 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:26.732 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:26.732 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.991 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.991 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.991 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.991 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.991 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.991 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:26.991 { 00:17:26.991 "cntlid": 117, 00:17:26.991 "qid": 0, 00:17:26.991 "state": "enabled", 00:17:26.991 "thread": "nvmf_tgt_poll_group_000", 00:17:26.991 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:26.991 "listen_address": { 00:17:26.991 "trtype": "TCP", 
00:17:26.991 "adrfam": "IPv4", 00:17:26.991 "traddr": "10.0.0.2", 00:17:26.991 "trsvcid": "4420" 00:17:26.991 }, 00:17:26.991 "peer_address": { 00:17:26.991 "trtype": "TCP", 00:17:26.991 "adrfam": "IPv4", 00:17:26.991 "traddr": "10.0.0.1", 00:17:26.991 "trsvcid": "59054" 00:17:26.991 }, 00:17:26.991 "auth": { 00:17:26.991 "state": "completed", 00:17:26.991 "digest": "sha512", 00:17:26.991 "dhgroup": "ffdhe3072" 00:17:26.991 } 00:17:26.991 } 00:17:26.991 ]' 00:17:26.991 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:26.991 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:26.991 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:27.249 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:27.249 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:27.249 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.250 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.250 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.250 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzNjYjUyNzNjYWQ5MGM3MzE4YzhiODAwMjFmOWY5YzJkNjNlYzhhNmYzNGU1Y2Y4xgl1vA==: --dhchap-ctrl-secret DHHC-1:01:MTQ4MjVkMzE4ZWYzMmQxYjdmZjE1OTE0Zjk2ODYyNjKEEPkO: 00:17:27.250 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzNjYjUyNzNjYWQ5MGM3MzE4YzhiODAwMjFmOWY5YzJkNjNlYzhhNmYzNGU1Y2Y4xgl1vA==: --dhchap-ctrl-secret DHHC-1:01:MTQ4MjVkMzE4ZWYzMmQxYjdmZjE1OTE0Zjk2ODYyNjKEEPkO: 00:17:27.816 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.076 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:28.076 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.076 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.076 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.076 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:28.076 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:28.076 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:28.076 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:17:28.076 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:28.076 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:28.076 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:28.076 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:28.076 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.076 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:17:28.076 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.076 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.076 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.076 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:28.076 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:28.076 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:28.334 00:17:28.334 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:28.334 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:28.334 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.593 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.593 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.593 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.593 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.593 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.593 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:28.593 { 00:17:28.593 "cntlid": 119, 00:17:28.593 "qid": 0, 00:17:28.593 "state": "enabled", 00:17:28.593 "thread": "nvmf_tgt_poll_group_000", 00:17:28.593 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:28.593 "listen_address": { 00:17:28.594 "trtype": "TCP", 00:17:28.594 "adrfam": "IPv4", 00:17:28.594 "traddr": "10.0.0.2", 00:17:28.594 "trsvcid": "4420" 00:17:28.594 }, 00:17:28.594 "peer_address": { 00:17:28.594 "trtype": "TCP", 00:17:28.594 "adrfam": "IPv4", 00:17:28.594 "traddr": "10.0.0.1", 00:17:28.594 "trsvcid": "59080" 00:17:28.594 }, 00:17:28.594 "auth": { 00:17:28.594 "state": "completed", 00:17:28.594 "digest": "sha512", 00:17:28.594 "dhgroup": "ffdhe3072" 00:17:28.594 } 00:17:28.594 } 00:17:28.594 ]' 00:17:28.594 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:28.594 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:28.594 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:28.852 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:28.852 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:28.852 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.852 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.852 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.852 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWY3YjM2MzU0NmMxY2ZmYzUwMTI3MTg3OTYwODEzZDk1MDNlMWQ1ZGViNDdiMDU3MzgzMzExYzE4MDA3ODgwMEnG/WY=: 00:17:28.852 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZWY3YjM2MzU0NmMxY2ZmYzUwMTI3MTg3OTYwODEzZDk1MDNlMWQ1ZGViNDdiMDU3MzgzMzExYzE4MDA3ODgwMEnG/WY=: 00:17:29.419 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.678 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.678 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:29.678 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.678 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.678 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.678 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:29.678 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:29.678 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:29.678 09:55:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:29.678 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:17:29.678 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:29.678 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:29.678 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:29.678 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:29.678 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.678 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.678 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.678 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.678 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.678 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.678 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.678 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.936 00:17:30.195 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:30.195 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:30.195 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.195 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.195 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.195 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.195 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.195 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.195 09:55:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:30.195 {
00:17:30.195 "cntlid": 121,
00:17:30.195 "qid": 0,
00:17:30.195 "state": "enabled",
00:17:30.195 "thread": "nvmf_tgt_poll_group_000",
00:17:30.195 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562",
00:17:30.195 "listen_address": {
00:17:30.195 "trtype": "TCP",
00:17:30.195 "adrfam": "IPv4",
00:17:30.195 "traddr": "10.0.0.2",
00:17:30.195 "trsvcid": "4420"
00:17:30.195 },
00:17:30.195 "peer_address": {
00:17:30.195 "trtype": "TCP",
00:17:30.195 "adrfam": "IPv4",
00:17:30.195 "traddr": "10.0.0.1",
00:17:30.195 "trsvcid": "59106"
00:17:30.195 },
00:17:30.195 "auth": {
00:17:30.195 "state": "completed",
00:17:30.195 "digest": "sha512",
00:17:30.195 "dhgroup": "ffdhe4096"
00:17:30.195 }
00:17:30.195 }
00:17:30.195 ]'
00:17:30.195 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:30.195 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:30.195 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:30.453 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:17:30.453 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:30.453 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:30.453 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:30.453 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:30.711 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGUwN2NmYzVjMDNhNmIxZWZkZjJkZTcyMjEzN2RhMzBkOWRjNTU3NTkxNDgwY2Mx6xbRdQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI2NDNjNDVlMDVhNzE4YTQyODIwM2ZkMGNiNWI5NTRiMzlmMGYyMmNkN2JkYjFhMDRkMGUzYmI2NDdkZDY2MPV4KLU=:
00:17:30.711 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGUwN2NmYzVjMDNhNmIxZWZkZjJkZTcyMjEzN2RhMzBkOWRjNTU3NTkxNDgwY2Mx6xbRdQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI2NDNjNDVlMDVhNzE4YTQyODIwM2ZkMGNiNWI5NTRiMzlmMGYyMmNkN2JkYjFhMDRkMGUzYmI2NDdkZDY2MPV4KLU=:
00:17:31.278 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:31.278 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:31.278 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
00:17:31.278 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:31.278 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:31.278 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
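The qpairs dump just above (cntlid 121, sha512/ffdhe4096) is the JSON those @75-@77 checks consume; the \s\h\a\5\1\2-style strings are only bash xtrace escaping the literal right-hand side of ==, not part of the data. Reduced to its essence, the verification is three exact-string comparisons. A minimal sketch, with a plain rpc.py call standing in for the trace's target-side rpc_cmd wrapper:

# Fetch the qpair list from the target and assert the negotiated auth parameters.
qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]  # hash negotiated for the challenge
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]  # FFDHE group negotiated
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]  # handshake finished; qpair is live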
00:17:31.278 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:31.278 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:31.278 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:31.278 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:17:31.278 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:31.278 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:31.278 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:31.278 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:31.278 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.278 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.278 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.278 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.278 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.537 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.537 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.537 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.829 00:17:31.829 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:31.829 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:31.829 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.829 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.829 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.829 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.829 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.829 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.829 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:31.829 { 00:17:31.829 "cntlid": 123, 00:17:31.829 "qid": 0, 00:17:31.829 "state": "enabled", 00:17:31.829 "thread": "nvmf_tgt_poll_group_000", 00:17:31.829 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:31.829 "listen_address": { 00:17:31.829 "trtype": "TCP", 00:17:31.829 "adrfam": "IPv4", 00:17:31.829 "traddr": "10.0.0.2", 00:17:31.829 "trsvcid": "4420" 00:17:31.829 }, 00:17:31.829 "peer_address": { 00:17:31.829 "trtype": "TCP", 00:17:31.829 "adrfam": "IPv4", 00:17:31.829 "traddr": "10.0.0.1", 00:17:31.829 "trsvcid": "59138" 00:17:31.829 }, 00:17:31.829 "auth": { 00:17:31.829 "state": "completed", 00:17:31.829 "digest": "sha512", 00:17:31.829 "dhgroup": "ffdhe4096" 00:17:31.829 } 00:17:31.829 } 00:17:31.829 ]' 00:17:31.829 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:31.829 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:32.118 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:32.118 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:32.118 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:32.118 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.118 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.118 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.118 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTU4ODE1ODYxYTViZTJhMWY0NGFhMzVhN2VlNThiZDJviFy3: --dhchap-ctrl-secret DHHC-1:02:MzkwYzcxOGVmOTg5NWFiNTdiNTc5OTkyMDczMjA4ZTMyZmMwNzM1ZDUzNzc0Mjc3cMIesQ==: 00:17:32.118 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZTU4ODE1ODYxYTViZTJhMWY0NGFhMzVhN2VlNThiZDJviFy3: --dhchap-ctrl-secret DHHC-1:02:MzkwYzcxOGVmOTg5NWFiNTdiNTc5OTkyMDczMjA4ZTMyZmMwNzM1ZDUzNzc0Mjc3cMIesQ==: 00:17:32.714 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.714 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:32.714 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.714 09:55:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.714 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.714 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:32.714 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:32.714 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:32.973 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:17:32.973 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:32.973 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:32.973 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:32.973 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:32.973 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.973 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.973 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.973 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.973 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.973 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.973 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.973 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.232 00:17:33.232 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:33.232 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:33.232 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.491 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.491 09:55:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.491 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.491 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.491 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.491 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:33.491 { 00:17:33.491 "cntlid": 125, 00:17:33.491 "qid": 0, 00:17:33.491 "state": "enabled", 00:17:33.491 "thread": "nvmf_tgt_poll_group_000", 00:17:33.491 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:33.491 "listen_address": { 00:17:33.491 "trtype": "TCP", 00:17:33.491 "adrfam": "IPv4", 00:17:33.491 "traddr": "10.0.0.2", 00:17:33.491 "trsvcid": "4420" 00:17:33.491 }, 00:17:33.491 "peer_address": { 00:17:33.491 "trtype": "TCP", 00:17:33.491 "adrfam": "IPv4", 00:17:33.491 "traddr": "10.0.0.1", 00:17:33.491 "trsvcid": "59180" 00:17:33.491 }, 00:17:33.491 "auth": { 00:17:33.491 "state": "completed", 00:17:33.491 "digest": "sha512", 00:17:33.491 "dhgroup": "ffdhe4096" 00:17:33.491 } 00:17:33.491 } 00:17:33.491 ]' 00:17:33.491 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:33.491 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:33.491 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:33.491 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:33.491 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:33.749 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.749 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.749 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.749 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzNjYjUyNzNjYWQ5MGM3MzE4YzhiODAwMjFmOWY5YzJkNjNlYzhhNmYzNGU1Y2Y4xgl1vA==: --dhchap-ctrl-secret DHHC-1:01:MTQ4MjVkMzE4ZWYzMmQxYjdmZjE1OTE0Zjk2ODYyNjKEEPkO: 00:17:33.749 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzNjYjUyNzNjYWQ5MGM3MzE4YzhiODAwMjFmOWY5YzJkNjNlYzhhNmYzNGU1Y2Y4xgl1vA==: --dhchap-ctrl-secret DHHC-1:01:MTQ4MjVkMzE4ZWYzMmQxYjdmZjE1OTE0Zjk2ODYyNjKEEPkO: 00:17:34.316 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.316 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.316 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:34.316 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.316 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.316 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.316 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:34.316 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:34.316 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:34.575 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:17:34.575 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:34.575 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:34.575 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:34.575 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:34.575 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.575 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:17:34.575 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.575 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.575 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.575 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:34.575 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:34.575 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:34.834 00:17:34.834 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:34.834 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:34.834 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.092 09:55:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.092 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.092 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.092 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.092 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.092 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:35.092 { 00:17:35.092 "cntlid": 127, 00:17:35.092 "qid": 0, 00:17:35.092 "state": "enabled", 00:17:35.092 "thread": "nvmf_tgt_poll_group_000", 00:17:35.092 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:35.092 "listen_address": { 00:17:35.092 "trtype": "TCP", 00:17:35.092 "adrfam": "IPv4", 00:17:35.092 "traddr": "10.0.0.2", 00:17:35.092 "trsvcid": "4420" 00:17:35.092 }, 00:17:35.092 "peer_address": { 00:17:35.092 "trtype": "TCP", 00:17:35.092 "adrfam": "IPv4", 00:17:35.092 "traddr": "10.0.0.1", 00:17:35.092 "trsvcid": "59452" 00:17:35.092 }, 00:17:35.093 "auth": { 00:17:35.093 "state": "completed", 00:17:35.093 "digest": "sha512", 00:17:35.093 "dhgroup": "ffdhe4096" 00:17:35.093 } 00:17:35.093 } 00:17:35.093 ]' 00:17:35.093 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:35.093 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:35.093 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:35.351 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:35.351 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:35.351 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.351 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.351 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.351 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWY3YjM2MzU0NmMxY2ZmYzUwMTI3MTg3OTYwODEzZDk1MDNlMWQ1ZGViNDdiMDU3MzgzMzExYzE4MDA3ODgwMEnG/WY=: 00:17:35.351 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZWY3YjM2MzU0NmMxY2ZmYzUwMTI3MTg3OTYwODEzZDk1MDNlMWQ1ZGViNDdiMDU3MzgzMzExYzE4MDA3ODgwMEnG/WY=: 00:17:35.919 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.178 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.178 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:36.178 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.178 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.178 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.178 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:36.178 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:36.178 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:36.178 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:36.178 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:17:36.178 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:36.178 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:36.178 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:36.178 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:36.178 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.178 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.178 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.178 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.178 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.178 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.178 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.178 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.746 00:17:36.746 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:36.746 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:36.747 
09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.747 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.747 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.747 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.747 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.747 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.747 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:36.747 { 00:17:36.747 "cntlid": 129, 00:17:36.747 "qid": 0, 00:17:36.747 "state": "enabled", 00:17:36.747 "thread": "nvmf_tgt_poll_group_000", 00:17:36.747 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:36.747 "listen_address": { 00:17:36.747 "trtype": "TCP", 00:17:36.747 "adrfam": "IPv4", 00:17:36.747 "traddr": "10.0.0.2", 00:17:36.747 "trsvcid": "4420" 00:17:36.747 }, 00:17:36.747 "peer_address": { 00:17:36.747 "trtype": "TCP", 00:17:36.747 "adrfam": "IPv4", 00:17:36.747 "traddr": "10.0.0.1", 00:17:36.747 "trsvcid": "59470" 00:17:36.747 }, 00:17:36.747 "auth": { 00:17:36.747 "state": "completed", 00:17:36.747 "digest": "sha512", 00:17:36.747 "dhgroup": "ffdhe6144" 00:17:36.747 } 00:17:36.747 } 00:17:36.747 ]' 00:17:36.747 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:36.747 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:37.006 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:37.006 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:37.006 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:37.006 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.006 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.006 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.264 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGUwN2NmYzVjMDNhNmIxZWZkZjJkZTcyMjEzN2RhMzBkOWRjNTU3NTkxNDgwY2Mx6xbRdQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI2NDNjNDVlMDVhNzE4YTQyODIwM2ZkMGNiNWI5NTRiMzlmMGYyMmNkN2JkYjFhMDRkMGUzYmI2NDdkZDY2MPV4KLU=: 00:17:37.265 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGUwN2NmYzVjMDNhNmIxZWZkZjJkZTcyMjEzN2RhMzBkOWRjNTU3NTkxNDgwY2Mx6xbRdQ==: --dhchap-ctrl-secret 
DHHC-1:03:ZWI2NDNjNDVlMDVhNzE4YTQyODIwM2ZkMGNiNWI5NTRiMzlmMGYyMmNkN2JkYjFhMDRkMGUzYmI2NDdkZDY2MPV4KLU=: 00:17:37.832 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.832 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.832 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:37.832 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.832 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.833 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.833 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:37.833 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:37.833 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:37.833 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:17:37.833 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:37.833 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:37.833 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:37.833 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:37.833 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.833 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.833 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.833 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.833 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.833 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.833 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.833 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.401 00:17:38.401 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:38.401 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:38.401 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.401 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.401 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.401 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.401 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.401 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.401 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:38.401 { 00:17:38.401 "cntlid": 131, 00:17:38.401 "qid": 0, 00:17:38.401 "state": "enabled", 00:17:38.401 "thread": "nvmf_tgt_poll_group_000", 00:17:38.401 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:38.401 "listen_address": { 00:17:38.401 "trtype": "TCP", 00:17:38.401 "adrfam": "IPv4", 00:17:38.401 "traddr": "10.0.0.2", 00:17:38.401 "trsvcid": "4420" 00:17:38.401 }, 00:17:38.401 "peer_address": { 00:17:38.401 "trtype": "TCP", 00:17:38.401 "adrfam": "IPv4", 00:17:38.401 "traddr": "10.0.0.1", 00:17:38.401 "trsvcid": "59494" 00:17:38.401 }, 00:17:38.401 "auth": { 00:17:38.401 "state": "completed", 00:17:38.401 "digest": "sha512", 00:17:38.401 "dhgroup": "ffdhe6144" 00:17:38.401 } 00:17:38.401 } 00:17:38.401 ]' 00:17:38.401 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:38.660 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:38.660 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:38.660 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:38.660 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:38.660 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.660 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.660 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.919 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTU4ODE1ODYxYTViZTJhMWY0NGFhMzVhN2VlNThiZDJviFy3: --dhchap-ctrl-secret DHHC-1:02:MzkwYzcxOGVmOTg5NWFiNTdiNTc5OTkyMDczMjA4ZTMyZmMwNzM1ZDUzNzc0Mjc3cMIesQ==: 00:17:38.919 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZTU4ODE1ODYxYTViZTJhMWY0NGFhMzVhN2VlNThiZDJviFy3: --dhchap-ctrl-secret DHHC-1:02:MzkwYzcxOGVmOTg5NWFiNTdiNTc5OTkyMDczMjA4ZTMyZmMwNzM1ZDUzNzc0Mjc3cMIesQ==: 00:17:39.487 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.487 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.487 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:39.487 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.487 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.487 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.487 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:39.487 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:39.487 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:39.487 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:17:39.487 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:39.487 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:39.487 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:39.746 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:39.746 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.746 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.746 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.746 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.746 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.746 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.746 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.746 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.005 00:17:40.005 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:40.005 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:40.005 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.264 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.264 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.264 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.264 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.264 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.264 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:40.264 { 00:17:40.264 "cntlid": 133, 00:17:40.264 "qid": 0, 00:17:40.264 "state": "enabled", 00:17:40.264 "thread": "nvmf_tgt_poll_group_000", 00:17:40.264 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:40.264 "listen_address": { 00:17:40.264 "trtype": "TCP", 00:17:40.264 "adrfam": "IPv4", 00:17:40.264 "traddr": "10.0.0.2", 00:17:40.264 "trsvcid": "4420" 00:17:40.264 }, 00:17:40.264 "peer_address": { 00:17:40.264 "trtype": "TCP", 00:17:40.264 "adrfam": "IPv4", 00:17:40.264 "traddr": "10.0.0.1", 00:17:40.264 "trsvcid": "59530" 00:17:40.264 }, 00:17:40.264 "auth": { 00:17:40.264 "state": "completed", 00:17:40.264 "digest": "sha512", 00:17:40.264 "dhgroup": "ffdhe6144" 00:17:40.264 } 00:17:40.264 } 00:17:40.264 ]' 00:17:40.264 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:40.264 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:40.264 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:40.264 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:40.264 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:40.264 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.264 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.264 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.523 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzNjYjUyNzNjYWQ5MGM3MzE4YzhiODAwMjFmOWY5YzJkNjNlYzhhNmYzNGU1Y2Y4xgl1vA==: --dhchap-ctrl-secret 
DHHC-1:01:MTQ4MjVkMzE4ZWYzMmQxYjdmZjE1OTE0Zjk2ODYyNjKEEPkO: 00:17:40.523 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzNjYjUyNzNjYWQ5MGM3MzE4YzhiODAwMjFmOWY5YzJkNjNlYzhhNmYzNGU1Y2Y4xgl1vA==: --dhchap-ctrl-secret DHHC-1:01:MTQ4MjVkMzE4ZWYzMmQxYjdmZjE1OTE0Zjk2ODYyNjKEEPkO: 00:17:41.089 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.089 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.089 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:41.089 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.089 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.089 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.089 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:41.089 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:41.089 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:41.348 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:17:41.348 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:41.348 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:41.348 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:41.348 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:41.348 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.348 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:17:41.348 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.348 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.348 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.348 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:41.348 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:17:41.348 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:41.607 00:17:41.607 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:41.607 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.607 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:41.866 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.866 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.866 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.866 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.866 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.866 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:41.866 { 00:17:41.866 "cntlid": 135, 00:17:41.866 "qid": 0, 00:17:41.866 "state": "enabled", 00:17:41.866 "thread": "nvmf_tgt_poll_group_000", 00:17:41.866 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:41.866 "listen_address": { 00:17:41.866 "trtype": "TCP", 00:17:41.866 "adrfam": "IPv4", 00:17:41.866 "traddr": "10.0.0.2", 00:17:41.866 "trsvcid": "4420" 00:17:41.866 }, 00:17:41.866 "peer_address": { 00:17:41.866 "trtype": "TCP", 00:17:41.866 "adrfam": "IPv4", 00:17:41.866 "traddr": "10.0.0.1", 00:17:41.866 "trsvcid": "59536" 00:17:41.866 }, 00:17:41.866 "auth": { 00:17:41.866 "state": "completed", 00:17:41.866 "digest": "sha512", 00:17:41.866 "dhgroup": "ffdhe6144" 00:17:41.866 } 00:17:41.866 } 00:17:41.866 ]' 00:17:41.866 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:41.866 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:41.866 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:41.866 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:41.866 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:42.125 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.125 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.125 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.125 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZWY3YjM2MzU0NmMxY2ZmYzUwMTI3MTg3OTYwODEzZDk1MDNlMWQ1ZGViNDdiMDU3MzgzMzExYzE4MDA3ODgwMEnG/WY=: 00:17:42.125 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZWY3YjM2MzU0NmMxY2ZmYzUwMTI3MTg3OTYwODEzZDk1MDNlMWQ1ZGViNDdiMDU3MzgzMzExYzE4MDA3ODgwMEnG/WY=: 00:17:42.691 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.691 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:42.691 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.691 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.949 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.949 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:42.949 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:42.950 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:42.950 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:42.950 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:17:42.950 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:42.950 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:42.950 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:42.950 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:42.950 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.950 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.950 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.950 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.950 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.950 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.950 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.950 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.517 00:17:43.517 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:43.517 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:43.517 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.776 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.776 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.776 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.776 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.776 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.776 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:43.776 { 00:17:43.776 "cntlid": 137, 00:17:43.776 "qid": 0, 00:17:43.776 "state": "enabled", 00:17:43.776 "thread": "nvmf_tgt_poll_group_000", 00:17:43.776 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:43.776 "listen_address": { 00:17:43.776 "trtype": "TCP", 00:17:43.776 "adrfam": "IPv4", 00:17:43.776 "traddr": "10.0.0.2", 00:17:43.776 "trsvcid": "4420" 00:17:43.776 }, 00:17:43.776 "peer_address": { 00:17:43.776 "trtype": "TCP", 00:17:43.776 "adrfam": "IPv4", 00:17:43.776 "traddr": "10.0.0.1", 00:17:43.776 "trsvcid": "59560" 00:17:43.776 }, 00:17:43.776 "auth": { 00:17:43.776 "state": "completed", 00:17:43.776 "digest": "sha512", 00:17:43.776 "dhgroup": "ffdhe8192" 00:17:43.776 } 00:17:43.776 } 00:17:43.776 ]' 00:17:43.776 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:43.776 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:43.776 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:43.776 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:43.776 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:43.776 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.776 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.776 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.035 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGUwN2NmYzVjMDNhNmIxZWZkZjJkZTcyMjEzN2RhMzBkOWRjNTU3NTkxNDgwY2Mx6xbRdQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI2NDNjNDVlMDVhNzE4YTQyODIwM2ZkMGNiNWI5NTRiMzlmMGYyMmNkN2JkYjFhMDRkMGUzYmI2NDdkZDY2MPV4KLU=: 00:17:44.035 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGUwN2NmYzVjMDNhNmIxZWZkZjJkZTcyMjEzN2RhMzBkOWRjNTU3NTkxNDgwY2Mx6xbRdQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI2NDNjNDVlMDVhNzE4YTQyODIwM2ZkMGNiNWI5NTRiMzlmMGYyMmNkN2JkYjFhMDRkMGUzYmI2NDdkZDY2MPV4KLU=: 00:17:44.603 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.603 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.603 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:44.603 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.603 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.603 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.603 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:44.603 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:44.603 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:44.863 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:17:44.863 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:44.863 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:44.863 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:44.863 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:44.863 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.863 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.863 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.863 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.863 09:55:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.863 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.863 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.863 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.430 00:17:45.430 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:45.430 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:45.430 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.430 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.430 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.430 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.430 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.430 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.430 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:45.430 { 00:17:45.430 "cntlid": 139, 00:17:45.430 "qid": 0, 00:17:45.430 "state": "enabled", 00:17:45.430 "thread": "nvmf_tgt_poll_group_000", 00:17:45.430 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:45.430 "listen_address": { 00:17:45.430 "trtype": "TCP", 00:17:45.430 "adrfam": "IPv4", 00:17:45.430 "traddr": "10.0.0.2", 00:17:45.430 "trsvcid": "4420" 00:17:45.430 }, 00:17:45.430 "peer_address": { 00:17:45.430 "trtype": "TCP", 00:17:45.430 "adrfam": "IPv4", 00:17:45.430 "traddr": "10.0.0.1", 00:17:45.430 "trsvcid": "44952" 00:17:45.430 }, 00:17:45.430 "auth": { 00:17:45.430 "state": "completed", 00:17:45.430 "digest": "sha512", 00:17:45.430 "dhgroup": "ffdhe8192" 00:17:45.430 } 00:17:45.430 } 00:17:45.430 ]' 00:17:45.430 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:45.690 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:45.690 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:45.690 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:45.690 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:45.690 09:55:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.690 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.690 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.949 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTU4ODE1ODYxYTViZTJhMWY0NGFhMzVhN2VlNThiZDJviFy3: --dhchap-ctrl-secret DHHC-1:02:MzkwYzcxOGVmOTg5NWFiNTdiNTc5OTkyMDczMjA4ZTMyZmMwNzM1ZDUzNzc0Mjc3cMIesQ==: 00:17:45.949 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZTU4ODE1ODYxYTViZTJhMWY0NGFhMzVhN2VlNThiZDJviFy3: --dhchap-ctrl-secret DHHC-1:02:MzkwYzcxOGVmOTg5NWFiNTdiNTc5OTkyMDczMjA4ZTMyZmMwNzM1ZDUzNzc0Mjc3cMIesQ==: 00:17:46.517 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.517 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.517 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:46.517 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.517 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.517 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.517 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:46.517 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:46.517 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:46.776 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:17:46.776 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:46.776 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:46.776 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:46.776 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:46.776 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.776 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.776 09:55:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.776 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.776 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.776 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.776 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.776 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.035 00:17:47.035 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:47.035 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:47.035 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.293 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.293 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.293 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.293 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.293 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.293 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:47.294 { 00:17:47.294 "cntlid": 141, 00:17:47.294 "qid": 0, 00:17:47.294 "state": "enabled", 00:17:47.294 "thread": "nvmf_tgt_poll_group_000", 00:17:47.294 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:47.294 "listen_address": { 00:17:47.294 "trtype": "TCP", 00:17:47.294 "adrfam": "IPv4", 00:17:47.294 "traddr": "10.0.0.2", 00:17:47.294 "trsvcid": "4420" 00:17:47.294 }, 00:17:47.294 "peer_address": { 00:17:47.294 "trtype": "TCP", 00:17:47.294 "adrfam": "IPv4", 00:17:47.294 "traddr": "10.0.0.1", 00:17:47.294 "trsvcid": "44974" 00:17:47.294 }, 00:17:47.294 "auth": { 00:17:47.294 "state": "completed", 00:17:47.294 "digest": "sha512", 00:17:47.294 "dhgroup": "ffdhe8192" 00:17:47.294 } 00:17:47.294 } 00:17:47.294 ]' 00:17:47.294 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:47.294 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:47.294 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:47.553 09:55:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:47.553 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:47.553 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.553 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.553 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.811 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzNjYjUyNzNjYWQ5MGM3MzE4YzhiODAwMjFmOWY5YzJkNjNlYzhhNmYzNGU1Y2Y4xgl1vA==: --dhchap-ctrl-secret DHHC-1:01:MTQ4MjVkMzE4ZWYzMmQxYjdmZjE1OTE0Zjk2ODYyNjKEEPkO: 00:17:47.811 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzNjYjUyNzNjYWQ5MGM3MzE4YzhiODAwMjFmOWY5YzJkNjNlYzhhNmYzNGU1Y2Y4xgl1vA==: --dhchap-ctrl-secret DHHC-1:01:MTQ4MjVkMzE4ZWYzMmQxYjdmZjE1OTE0Zjk2ODYyNjKEEPkO: 00:17:48.379 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.379 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:48.379 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.379 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.379 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.379 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:48.379 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:48.379 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:48.379 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:17:48.379 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:48.379 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:48.379 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:48.379 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:48.379 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.379 09:55:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:17:48.379 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.379 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.379 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.379 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:48.379 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:48.379 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:48.947 00:17:48.947 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:48.947 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:48.947 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.206 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.206 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.206 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.206 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.206 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.206 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:49.206 { 00:17:49.206 "cntlid": 143, 00:17:49.206 "qid": 0, 00:17:49.206 "state": "enabled", 00:17:49.206 "thread": "nvmf_tgt_poll_group_000", 00:17:49.206 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:49.206 "listen_address": { 00:17:49.206 "trtype": "TCP", 00:17:49.206 "adrfam": "IPv4", 00:17:49.206 "traddr": "10.0.0.2", 00:17:49.206 "trsvcid": "4420" 00:17:49.206 }, 00:17:49.206 "peer_address": { 00:17:49.206 "trtype": "TCP", 00:17:49.206 "adrfam": "IPv4", 00:17:49.206 "traddr": "10.0.0.1", 00:17:49.206 "trsvcid": "45010" 00:17:49.206 }, 00:17:49.206 "auth": { 00:17:49.206 "state": "completed", 00:17:49.206 "digest": "sha512", 00:17:49.206 "dhgroup": "ffdhe8192" 00:17:49.206 } 00:17:49.206 } 00:17:49.206 ]' 00:17:49.206 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:49.206 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:49.206 
09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:49.206 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:49.206 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:49.206 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.206 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.206 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.464 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWY3YjM2MzU0NmMxY2ZmYzUwMTI3MTg3OTYwODEzZDk1MDNlMWQ1ZGViNDdiMDU3MzgzMzExYzE4MDA3ODgwMEnG/WY=: 00:17:49.465 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZWY3YjM2MzU0NmMxY2ZmYzUwMTI3MTg3OTYwODEzZDk1MDNlMWQ1ZGViNDdiMDU3MzgzMzExYzE4MDA3ODgwMEnG/WY=: 00:17:50.032 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.032 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:50.032 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.032 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.032 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.032 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:50.032 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:17:50.032 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:50.032 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:50.032 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:50.032 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:50.291 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:17:50.291 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:50.291 09:55:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:50.291 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:50.291 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:50.291 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.291 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.291 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.291 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.291 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.291 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.291 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.291 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.859 00:17:50.859 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:50.859 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:50.859 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.859 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.859 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.859 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.859 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.859 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.859 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:50.859 { 00:17:50.859 "cntlid": 145, 00:17:50.859 "qid": 0, 00:17:50.859 "state": "enabled", 00:17:50.859 "thread": "nvmf_tgt_poll_group_000", 00:17:50.859 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:50.859 "listen_address": { 00:17:50.859 "trtype": "TCP", 00:17:50.859 "adrfam": "IPv4", 00:17:50.859 "traddr": "10.0.0.2", 00:17:50.859 "trsvcid": "4420" 00:17:50.859 }, 00:17:50.859 "peer_address": { 00:17:50.859 
"trtype": "TCP", 00:17:50.859 "adrfam": "IPv4", 00:17:50.859 "traddr": "10.0.0.1", 00:17:50.859 "trsvcid": "45038" 00:17:50.859 }, 00:17:50.859 "auth": { 00:17:50.859 "state": "completed", 00:17:50.859 "digest": "sha512", 00:17:50.859 "dhgroup": "ffdhe8192" 00:17:50.859 } 00:17:50.859 } 00:17:50.859 ]' 00:17:50.859 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:51.117 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:51.117 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:51.117 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:51.117 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:51.117 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.117 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.117 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.376 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGUwN2NmYzVjMDNhNmIxZWZkZjJkZTcyMjEzN2RhMzBkOWRjNTU3NTkxNDgwY2Mx6xbRdQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI2NDNjNDVlMDVhNzE4YTQyODIwM2ZkMGNiNWI5NTRiMzlmMGYyMmNkN2JkYjFhMDRkMGUzYmI2NDdkZDY2MPV4KLU=: 00:17:51.376 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGUwN2NmYzVjMDNhNmIxZWZkZjJkZTcyMjEzN2RhMzBkOWRjNTU3NTkxNDgwY2Mx6xbRdQ==: --dhchap-ctrl-secret DHHC-1:03:ZWI2NDNjNDVlMDVhNzE4YTQyODIwM2ZkMGNiNWI5NTRiMzlmMGYyMmNkN2JkYjFhMDRkMGUzYmI2NDdkZDY2MPV4KLU=: 00:17:51.944 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.944 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:51.944 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.944 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.944 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.944 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 00:17:51.944 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.944 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.944 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.944 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:17:51.944 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:51.944 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:17:51.944 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:51.944 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:51.944 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:51.944 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:51.944 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:17:51.944 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:51.944 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:52.201 request: 00:17:52.201 { 00:17:52.201 "name": "nvme0", 00:17:52.201 "trtype": "tcp", 00:17:52.201 "traddr": "10.0.0.2", 00:17:52.201 "adrfam": "ipv4", 00:17:52.201 "trsvcid": "4420", 00:17:52.201 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:52.201 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:52.201 "prchk_reftag": false, 00:17:52.201 "prchk_guard": false, 00:17:52.201 "hdgst": false, 00:17:52.201 "ddgst": false, 00:17:52.201 "dhchap_key": "key2", 00:17:52.201 "allow_unrecognized_csi": false, 00:17:52.201 "method": "bdev_nvme_attach_controller", 00:17:52.201 "req_id": 1 00:17:52.201 } 00:17:52.201 Got JSON-RPC error response 00:17:52.201 response: 00:17:52.201 { 00:17:52.201 "code": -5, 00:17:52.201 "message": "Input/output error" 00:17:52.201 } 00:17:52.201 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:52.201 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:52.201 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:52.201 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:52.201 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:52.201 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.201 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.460 09:56:01 
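This request/response pair is a deliberate failure: the host entry on the target was registered with key1, so an attach attempt using key2 is rejected and surfaces as JSON-RPC error -5 (Input/output error). The NOT helper from common/autotest_common.sh inverts the exit status so that the expected failure counts as a pass; an equivalent standalone check, sketched with the same placeholder NQNs as above:

    # Expected-failure check (sketch): attaching with a key the target
    # never registered for this host must not succeed
    if scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key2; then
        echo "attach with wrong key unexpectedly succeeded" >&2
        exit 1
    fi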
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.460 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.460 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.460 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.460 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.460 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:52.460 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:52.460 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:52.460 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:52.460 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:52.460 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:52.460 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:52.460 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:52.460 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:52.460 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:52.719 request: 00:17:52.719 { 00:17:52.719 "name": "nvme0", 00:17:52.719 "trtype": "tcp", 00:17:52.719 "traddr": "10.0.0.2", 00:17:52.719 "adrfam": "ipv4", 00:17:52.719 "trsvcid": "4420", 00:17:52.719 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:52.719 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:52.719 "prchk_reftag": false, 00:17:52.719 "prchk_guard": false, 00:17:52.719 "hdgst": false, 00:17:52.719 "ddgst": false, 00:17:52.719 "dhchap_key": "key1", 00:17:52.719 "dhchap_ctrlr_key": "ckey2", 00:17:52.719 "allow_unrecognized_csi": false, 00:17:52.719 "method": "bdev_nvme_attach_controller", 00:17:52.719 "req_id": 1 00:17:52.719 } 00:17:52.719 Got JSON-RPC error response 00:17:52.719 response: 00:17:52.719 { 00:17:52.719 "code": -5, 00:17:52.719 "message": "Input/output error" 00:17:52.719 } 00:17:52.719 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:52.719 09:56:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:52.719 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:52.719 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:52.719 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:52.719 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.719 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.719 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.719 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 00:17:52.719 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.719 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.719 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.719 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.719 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:52.719 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.719 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:52.719 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:52.719 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:52.719 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:52.719 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.719 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.719 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.287 request: 00:17:53.287 { 00:17:53.287 "name": "nvme0", 00:17:53.287 "trtype": "tcp", 00:17:53.287 "traddr": "10.0.0.2", 00:17:53.287 "adrfam": "ipv4", 00:17:53.287 "trsvcid": "4420", 00:17:53.287 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:53.287 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:53.287 "prchk_reftag": false, 00:17:53.287 "prchk_guard": false, 00:17:53.287 "hdgst": false, 00:17:53.287 "ddgst": false, 00:17:53.287 "dhchap_key": "key1", 00:17:53.287 "dhchap_ctrlr_key": "ckey1", 00:17:53.287 "allow_unrecognized_csi": false, 00:17:53.287 "method": "bdev_nvme_attach_controller", 00:17:53.287 "req_id": 1 00:17:53.287 } 00:17:53.287 Got JSON-RPC error response 00:17:53.287 response: 00:17:53.287 { 00:17:53.287 "code": -5, 00:17:53.287 "message": "Input/output error" 00:17:53.287 } 00:17:53.287 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:53.287 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:53.287 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:53.287 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:53.287 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:53.287 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.287 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.287 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.287 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 58995 00:17:53.287 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 58995 ']' 00:17:53.287 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 58995 00:17:53.287 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:53.287 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:53.287 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58995 00:17:53.287 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:53.287 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:53.287 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58995' 00:17:53.287 killing process with pid 58995 00:17:53.287 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 58995 00:17:53.287 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 58995 00:17:53.546 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:53.546 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:53.546 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:53.546 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:17:53.546 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=81124 00:17:53.546 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 81124 00:17:53.546 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:53.546 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 81124 ']' 00:17:53.546 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:53.546 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:53.546 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:53.546 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:53.546 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.805 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:53.805 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:53.805 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:53.805 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:53.805 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.805 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:53.805 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:53.805 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 81124 00:17:53.805 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 81124 ']' 00:17:53.805 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:53.805 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:53.805 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:53.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
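With the original target killed and a fresh one started under --wait-for-rpc and -L nvmf_auth, the test next loads the generated secrets into the keyring so that later RPCs can reference them by symbolic name (key0..key3, ckey0..ckey2) instead of by file path. The pattern, sketched with the key files named in the log below:

    # Register each DH-HMAC-CHAP secret file under a symbolic key name
    scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.RzP
    scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.AUN
    scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-sha256.NRn
    scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4lM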
00:17:53.805 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:53.805 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.805 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:53.805 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:53.805 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:17:53.805 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.805 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.064 null0 00:17:54.064 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.064 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:54.065 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.RzP 00:17:54.065 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.065 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.065 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.065 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.AUN ]] 00:17:54.065 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.AUN 00:17:54.065 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.065 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.065 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.065 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:54.065 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.NRn 00:17:54.065 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.065 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.065 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.065 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.4lM ]] 00:17:54.065 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4lM 00:17:54.065 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.065 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.065 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.065 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:54.065 09:56:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.OIp 00:17:54.065 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.065 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.065 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.065 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.f3V ]] 00:17:54.065 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.f3V 00:17:54.065 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.065 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.065 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.065 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:54.065 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.mMe 00:17:54.065 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.065 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.065 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.065 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:17:54.065 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:17:54.065 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:54.065 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:54.065 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:54.065 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:54.065 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.065 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:17:54.065 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.065 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.065 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.065 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:54.065 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
00:17:54.065 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:55.002 nvme0n1 00:17:55.002 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:55.002 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:55.002 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.002 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.002 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.002 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.002 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.002 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.002 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:55.002 { 00:17:55.002 "cntlid": 1, 00:17:55.002 "qid": 0, 00:17:55.002 "state": "enabled", 00:17:55.002 "thread": "nvmf_tgt_poll_group_000", 00:17:55.002 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:55.002 "listen_address": { 00:17:55.002 "trtype": "TCP", 00:17:55.002 "adrfam": "IPv4", 00:17:55.002 "traddr": "10.0.0.2", 00:17:55.002 "trsvcid": "4420" 00:17:55.002 }, 00:17:55.002 "peer_address": { 00:17:55.002 "trtype": "TCP", 00:17:55.002 "adrfam": "IPv4", 00:17:55.002 "traddr": "10.0.0.1", 00:17:55.002 "trsvcid": "45092" 00:17:55.002 }, 00:17:55.002 "auth": { 00:17:55.002 "state": "completed", 00:17:55.002 "digest": "sha512", 00:17:55.002 "dhgroup": "ffdhe8192" 00:17:55.002 } 00:17:55.002 } 00:17:55.002 ]' 00:17:55.002 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:55.002 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:55.002 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:55.261 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:55.261 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:55.261 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.261 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.261 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.519 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZWY3YjM2MzU0NmMxY2ZmYzUwMTI3MTg3OTYwODEzZDk1MDNlMWQ1ZGViNDdiMDU3MzgzMzExYzE4MDA3ODgwMEnG/WY=: 00:17:55.519 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZWY3YjM2MzU0NmMxY2ZmYzUwMTI3MTg3OTYwODEzZDk1MDNlMWQ1ZGViNDdiMDU3MzgzMzExYzE4MDA3ODgwMEnG/WY=: 00:17:56.087 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.087 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.087 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:56.087 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.087 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.087 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.087 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:17:56.087 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.087 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.087 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.087 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:56.087 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:56.087 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:56.087 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:56.087 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:56.087 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:56.087 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:56.087 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:56.087 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:56.087 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:56.087 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:56.087 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:56.346 request: 00:17:56.346 { 00:17:56.346 "name": "nvme0", 00:17:56.346 "trtype": "tcp", 00:17:56.346 "traddr": "10.0.0.2", 00:17:56.346 "adrfam": "ipv4", 00:17:56.346 "trsvcid": "4420", 00:17:56.346 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:56.346 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:56.346 "prchk_reftag": false, 00:17:56.346 "prchk_guard": false, 00:17:56.346 "hdgst": false, 00:17:56.346 "ddgst": false, 00:17:56.346 "dhchap_key": "key3", 00:17:56.346 "allow_unrecognized_csi": false, 00:17:56.346 "method": "bdev_nvme_attach_controller", 00:17:56.346 "req_id": 1 00:17:56.346 } 00:17:56.346 Got JSON-RPC error response 00:17:56.346 response: 00:17:56.346 { 00:17:56.346 "code": -5, 00:17:56.346 "message": "Input/output error" 00:17:56.346 } 00:17:56.346 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:56.346 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:56.346 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:56.346 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:56.346 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:17:56.346 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:17:56.346 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:56.346 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:56.605 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:56.605 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:56.605 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:56.605 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:56.605 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:56.605 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:56.605 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:56.605 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:56.605 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:56.605 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:56.864 request: 00:17:56.864 { 00:17:56.864 "name": "nvme0", 00:17:56.864 "trtype": "tcp", 00:17:56.864 "traddr": "10.0.0.2", 00:17:56.864 "adrfam": "ipv4", 00:17:56.864 "trsvcid": "4420", 00:17:56.864 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:56.864 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:56.864 "prchk_reftag": false, 00:17:56.864 "prchk_guard": false, 00:17:56.864 "hdgst": false, 00:17:56.864 "ddgst": false, 00:17:56.864 "dhchap_key": "key3", 00:17:56.864 "allow_unrecognized_csi": false, 00:17:56.864 "method": "bdev_nvme_attach_controller", 00:17:56.864 "req_id": 1 00:17:56.864 } 00:17:56.864 Got JSON-RPC error response 00:17:56.864 response: 00:17:56.864 { 00:17:56.864 "code": -5, 00:17:56.864 "message": "Input/output error" 00:17:56.864 } 00:17:56.864 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:56.864 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:56.864 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:56.864 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:56.864 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:56.864 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:17:56.864 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:56.864 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:56.864 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:56.864 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:57.123 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:57.123 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.123 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.123 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.123 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:57.123 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.123 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.123 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.123 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:57.123 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:57.123 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:57.123 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:57.123 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:57.123 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:57.123 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:57.123 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:57.123 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:57.123 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:57.382 request: 00:17:57.382 { 00:17:57.382 "name": "nvme0", 00:17:57.382 "trtype": "tcp", 00:17:57.382 "traddr": "10.0.0.2", 00:17:57.382 "adrfam": "ipv4", 00:17:57.382 "trsvcid": "4420", 00:17:57.382 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:57.382 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:57.382 "prchk_reftag": false, 00:17:57.382 "prchk_guard": false, 00:17:57.382 "hdgst": false, 00:17:57.382 "ddgst": false, 00:17:57.382 "dhchap_key": "key0", 00:17:57.382 "dhchap_ctrlr_key": "key1", 00:17:57.382 "allow_unrecognized_csi": false, 00:17:57.382 "method": "bdev_nvme_attach_controller", 00:17:57.382 "req_id": 1 00:17:57.382 } 00:17:57.382 Got JSON-RPC error response 00:17:57.382 response: 00:17:57.382 { 00:17:57.382 "code": -5, 00:17:57.382 "message": "Input/output error" 00:17:57.382 } 00:17:57.382 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:57.382 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:57.382 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:57.382 09:56:06 
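Here the host entry was re-added with no DH-HMAC-CHAP keys at all, so the target no longer requests authentication. Passing --dhchap-ctrlr-key makes the host insist on authenticating the controller as well, which such a target cannot satisfy, hence the -5 error above; a one-way --dhchap-key alone appears to be tolerated, as the successful attach that follows shows. Sketched contrast (placeholder NQNs as before):

    # Host demands bidirectional auth from a target with no keys: expected to fail
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
        -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
    # One-way key only: the target does not request auth, attach proceeds
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
        -b nvme0 --dhchap-key key0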
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:57.382 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:17:57.382 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:57.382 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:57.641 nvme0n1 00:17:57.641 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:17:57.641 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:17:57.641 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.899 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.900 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.900 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.158 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 00:17:58.158 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.158 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.158 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.158 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:58.158 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:58.158 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:58.725 nvme0n1 00:17:58.726 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:17:58.726 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:17:58.726 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:17:58.985 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.985 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:58.985 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.985 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.985 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.985 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:17:58.985 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:17:58.985 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.243 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.244 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YzNjYjUyNzNjYWQ5MGM3MzE4YzhiODAwMjFmOWY5YzJkNjNlYzhhNmYzNGU1Y2Y4xgl1vA==: --dhchap-ctrl-secret DHHC-1:03:ZWY3YjM2MzU0NmMxY2ZmYzUwMTI3MTg3OTYwODEzZDk1MDNlMWQ1ZGViNDdiMDU3MzgzMzExYzE4MDA3ODgwMEnG/WY=: 00:17:59.244 09:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzNjYjUyNzNjYWQ5MGM3MzE4YzhiODAwMjFmOWY5YzJkNjNlYzhhNmYzNGU1Y2Y4xgl1vA==: --dhchap-ctrl-secret DHHC-1:03:ZWY3YjM2MzU0NmMxY2ZmYzUwMTI3MTg3OTYwODEzZDk1MDNlMWQ1ZGViNDdiMDU3MzgzMzExYzE4MDA3ODgwMEnG/WY=: 00:17:59.811 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:17:59.811 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:17:59.811 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:17:59.811 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:17:59.811 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:17:59.811 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:17:59.811 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:17:59.811 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.811 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.070 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT 
bdev_connect -b nvme0 --dhchap-key key1 00:18:00.070 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:00.070 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:18:00.070 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:00.070 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:00.070 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:00.070 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:00.070 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:00.070 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:00.070 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:00.329 request: 00:18:00.329 { 00:18:00.329 "name": "nvme0", 00:18:00.329 "trtype": "tcp", 00:18:00.329 "traddr": "10.0.0.2", 00:18:00.329 "adrfam": "ipv4", 00:18:00.329 "trsvcid": "4420", 00:18:00.329 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:00.329 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:18:00.329 "prchk_reftag": false, 00:18:00.329 "prchk_guard": false, 00:18:00.329 "hdgst": false, 00:18:00.329 "ddgst": false, 00:18:00.329 "dhchap_key": "key1", 00:18:00.329 "allow_unrecognized_csi": false, 00:18:00.329 "method": "bdev_nvme_attach_controller", 00:18:00.329 "req_id": 1 00:18:00.329 } 00:18:00.329 Got JSON-RPC error response 00:18:00.329 response: 00:18:00.329 { 00:18:00.329 "code": -5, 00:18:00.329 "message": "Input/output error" 00:18:00.329 } 00:18:00.329 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:00.329 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:00.329 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:00.329 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:00.329 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:00.329 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:00.329 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:01.266 nvme0n1 00:18:01.266 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:18:01.266 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:18:01.266 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.266 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.266 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.266 09:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.524 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:01.524 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.524 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.524 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.525 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:18:01.525 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:01.525 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:01.783 nvme0n1 00:18:01.783 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:18:01.783 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:18:01.784 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.041 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.041 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.041 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.300 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:02.300 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.300 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.300 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.300 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZTU4ODE1ODYxYTViZTJhMWY0NGFhMzVhN2VlNThiZDJviFy3: '' 2s 00:18:02.300 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:02.300 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:02.300 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZTU4ODE1ODYxYTViZTJhMWY0NGFhMzVhN2VlNThiZDJviFy3: 00:18:02.300 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:18:02.300 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:02.300 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:02.300 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZTU4ODE1ODYxYTViZTJhMWY0NGFhMzVhN2VlNThiZDJviFy3: ]] 00:18:02.300 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZTU4ODE1ODYxYTViZTJhMWY0NGFhMzVhN2VlNThiZDJviFy3: 00:18:02.300 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:18:02.300 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:02.300 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:04.205 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:18:04.205 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:18:04.205 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:04.205 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:04.205 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:04.205 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:04.205 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:18:04.205 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:18:04.205 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.205 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.205 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.205 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # 
nvme_set_keys nvme0 '' DHHC-1:02:YzNjYjUyNzNjYWQ5MGM3MzE4YzhiODAwMjFmOWY5YzJkNjNlYzhhNmYzNGU1Y2Y4xgl1vA==: 2s 00:18:04.205 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:04.205 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:04.205 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:18:04.205 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YzNjYjUyNzNjYWQ5MGM3MzE4YzhiODAwMjFmOWY5YzJkNjNlYzhhNmYzNGU1Y2Y4xgl1vA==: 00:18:04.205 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:04.205 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:04.205 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:18:04.205 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YzNjYjUyNzNjYWQ5MGM3MzE4YzhiODAwMjFmOWY5YzJkNjNlYzhhNmYzNGU1Y2Y4xgl1vA==: ]] 00:18:04.205 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YzNjYjUyNzNjYWQ5MGM3MzE4YzhiODAwMjFmOWY5YzJkNjNlYzhhNmYzNGU1Y2Y4xgl1vA==: 00:18:04.205 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:04.205 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:06.738 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:18:06.738 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:18:06.738 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:06.738 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:06.738 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:06.738 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:06.738 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:18:06.738 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.738 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:06.738 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.738 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.738 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.738 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:06.738 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:06.738 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:06.996 nvme0n1 00:18:06.996 09:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:06.996 09:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.996 09:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.996 09:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.996 09:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:06.996 09:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:07.564 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:18:07.564 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:18:07.564 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.823 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.823 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:07.823 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.823 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.823 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.823 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:18:07.823 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:18:08.082 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:18:08.082 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:18:08.082 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.082 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.082 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:08.082 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.082 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.082 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.082 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:08.082 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:08.082 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:08.082 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:18:08.082 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:08.082 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:18:08.082 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:08.082 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:08.082 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:08.650 request: 00:18:08.650 { 00:18:08.650 "name": "nvme0", 00:18:08.650 "dhchap_key": "key1", 00:18:08.650 "dhchap_ctrlr_key": "key3", 00:18:08.650 "method": "bdev_nvme_set_keys", 00:18:08.650 "req_id": 1 00:18:08.650 } 00:18:08.650 Got JSON-RPC error response 00:18:08.650 response: 00:18:08.650 { 00:18:08.650 "code": -13, 00:18:08.650 "message": "Permission denied" 00:18:08.650 } 00:18:08.650 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:08.650 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:08.650 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:08.650 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:08.650 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:08.650 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:08.650 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.908 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:18:08.908 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:18:09.845 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:09.845 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.845 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:10.104 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:18:10.104 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:10.104 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.104 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.104 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.104 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:10.104 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:10.104 09:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:10.672 nvme0n1 00:18:10.672 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:10.672 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.672 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.931 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.931 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:10.931 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:10.931 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:10.931 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
00:18:10.931 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:10.931 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:18:10.931 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:10.931 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:10.931 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:11.189 request: 00:18:11.190 { 00:18:11.190 "name": "nvme0", 00:18:11.190 "dhchap_key": "key2", 00:18:11.190 "dhchap_ctrlr_key": "key0", 00:18:11.190 "method": "bdev_nvme_set_keys", 00:18:11.190 "req_id": 1 00:18:11.190 } 00:18:11.190 Got JSON-RPC error response 00:18:11.190 response: 00:18:11.190 { 00:18:11.190 "code": -13, 00:18:11.190 "message": "Permission denied" 00:18:11.190 } 00:18:11.190 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:11.190 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:11.190 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:11.190 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:11.190 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:11.190 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.190 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:11.451 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:18:11.451 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:18:12.523 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:12.523 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:12.523 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.782 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:18:12.782 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:18:12.782 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:18:12.782 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 59199 00:18:12.782 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 59199 ']' 00:18:12.782 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 59199 00:18:12.782 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:12.782 09:56:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:12.782 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59199 00:18:12.782 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:12.782 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:12.782 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59199' 00:18:12.782 killing process with pid 59199 00:18:12.782 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 59199 00:18:12.782 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 59199 00:18:13.041 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:13.041 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:13.041 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:18:13.041 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:13.041 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:18:13.041 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:13.041 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:13.041 rmmod nvme_tcp 00:18:13.041 rmmod nvme_fabrics 00:18:13.041 rmmod nvme_keyring 00:18:13.041 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:13.041 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:18:13.041 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:18:13.041 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 81124 ']' 00:18:13.041 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 81124 00:18:13.041 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 81124 ']' 00:18:13.041 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 81124 00:18:13.041 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:13.041 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:13.041 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81124 00:18:13.041 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:13.041 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:13.041 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81124' 00:18:13.041 killing process with pid 81124 00:18:13.041 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 81124 00:18:13.041 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@978 -- # wait 81124 00:18:13.301 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:13.301 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:13.301 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:13.301 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:18:13.301 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:18:13.301 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:13.301 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:18:13.301 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:13.301 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:13.301 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.301 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:13.301 09:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.834 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:15.834 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.RzP /tmp/spdk.key-sha256.NRn /tmp/spdk.key-sha384.OIp /tmp/spdk.key-sha512.mMe /tmp/spdk.key-sha512.AUN /tmp/spdk.key-sha384.4lM /tmp/spdk.key-sha256.f3V '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:15.834 00:18:15.834 real 2m33.563s 00:18:15.834 user 5m52.079s 00:18:15.834 sys 0m25.028s 00:18:15.834 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:15.834 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.834 ************************************ 00:18:15.834 END TEST nvmf_auth_target 00:18:15.834 ************************************ 00:18:15.834 09:56:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:15.834 09:56:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:15.834 09:56:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:15.834 09:56:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:15.834 09:56:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:15.834 ************************************ 00:18:15.834 START TEST nvmf_bdevio_no_huge 00:18:15.834 ************************************ 00:18:15.834 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:15.834 * Looking for test storage... 
00:18:15.834 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:15.834 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:15.834 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:18:15.834 09:56:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:15.834 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:15.834 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:15.834 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:15.834 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:15.834 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:18:15.834 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:18:15.834 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:18:15.834 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:18:15.834 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:18:15.834 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:18:15.834 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:18:15.834 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:15.834 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:18:15.834 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:18:15.834 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:15.834 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:15.834 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:18:15.834 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:18:15.834 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:15.834 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:18:15.834 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:18:15.834 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:18:15.834 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:18:15.834 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:15.834 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:18:15.834 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:18:15.834 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:15.834 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:15.834 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:18:15.834 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:15.834 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:15.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:15.834 --rc genhtml_branch_coverage=1 00:18:15.834 --rc genhtml_function_coverage=1 00:18:15.834 --rc genhtml_legend=1 00:18:15.834 --rc geninfo_all_blocks=1 00:18:15.834 --rc geninfo_unexecuted_blocks=1 00:18:15.834 00:18:15.834 ' 00:18:15.834 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:15.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:15.834 --rc genhtml_branch_coverage=1 00:18:15.834 --rc genhtml_function_coverage=1 00:18:15.834 --rc genhtml_legend=1 00:18:15.834 --rc geninfo_all_blocks=1 00:18:15.834 --rc geninfo_unexecuted_blocks=1 00:18:15.834 00:18:15.834 ' 00:18:15.834 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:15.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:15.834 --rc genhtml_branch_coverage=1 00:18:15.834 --rc genhtml_function_coverage=1 00:18:15.834 --rc genhtml_legend=1 00:18:15.834 --rc geninfo_all_blocks=1 00:18:15.834 --rc geninfo_unexecuted_blocks=1 00:18:15.834 00:18:15.834 ' 00:18:15.834 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:15.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:15.834 --rc genhtml_branch_coverage=1 00:18:15.834 --rc genhtml_function_coverage=1 00:18:15.834 --rc genhtml_legend=1 00:18:15.834 --rc geninfo_all_blocks=1 00:18:15.834 --rc geninfo_unexecuted_blocks=1 00:18:15.834 00:18:15.834 ' 00:18:15.834 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:15.834 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:15.834 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:15.834 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:15.834 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:15.834 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:15.834 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:15.834 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:15.834 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:15.834 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:15.834 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:15.834 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:15.834 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:15.834 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:18:15.834 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:15.834 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:15.834 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:15.834 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:15.834 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:15.835 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:18:15.835 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:15.835 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:15.835 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:15.835 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.835 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.835 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.835 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:15.835 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.835 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:18:15.835 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:15.835 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:15.835 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:15.835 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:15.835 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:15.835 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:15.835 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:15.835 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:15.835 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:15.835 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:15.835 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:15.835 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:15.835 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:15.835 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:15.835 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:15.835 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:15.835 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:15.835 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:15.835 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:15.835 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:15.835 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.835 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:15.835 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:15.835 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:18:15.835 09:56:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:18:22.405 
09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:22.405 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:22.405 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:22.405 Found net devices under 0000:af:00.0: cvl_0_0 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:22.405 Found net devices under 0000:af:00.1: cvl_0_1 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:22.405 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:22.406 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:22.406 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:22.406 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.348 ms 00:18:22.406 00:18:22.406 --- 10.0.0.2 ping statistics --- 00:18:22.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:22.406 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:18:22.406 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:22.406 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:22.406 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:18:22.406 00:18:22.406 --- 10.0.0.1 ping statistics --- 00:18:22.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:22.406 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:18:22.406 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:22.406 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:18:22.406 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:22.406 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:22.406 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:22.406 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:22.406 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:22.406 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:22.406 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:22.406 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:22.406 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:22.406 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:22.406 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:22.406 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=88283 00:18:22.406 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 88283 00:18:22.406 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:22.406 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 88283 ']' 00:18:22.406 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.406 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:18:22.406 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:22.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:22.406 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:22.406 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:22.406 [2024-12-11 09:56:31.913852] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:18:22.406 [2024-12-11 09:56:31.913898] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:22.664 [2024-12-11 09:56:32.001994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:22.664 [2024-12-11 09:56:32.047950] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:22.664 [2024-12-11 09:56:32.047983] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:22.664 [2024-12-11 09:56:32.047990] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:22.664 [2024-12-11 09:56:32.047997] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:22.664 [2024-12-11 09:56:32.048003] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
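The nvmf_tcp_init sequence traced above is the whole test topology: the two E810 ports are cabled back to back, cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 (target side), while cvl_0_1 stays in the root namespace as 10.0.0.1 (initiator side), so every NVMe/TCP packet crosses the physical link. A minimal sketch of the same setup, using placeholder interface names port0/port1:

    # Sketch only: split two back-to-back NIC ports across namespaces so
    # target and initiator on one host talk over real hardware.
    ip netns add tgt_ns
    ip link set port0 netns tgt_ns                         # target port
    ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev port0
    ip addr add 10.0.0.1/24 dev port1                      # initiator port
    ip netns exec tgt_ns ip link set port0 up
    ip link set port1 up
    ip netns exec tgt_ns ip link set lo up
    iptables -I INPUT 1 -i port1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # initiator -> target

The two pings in the trace are exactly this sanity check, run in both directions before nvmf_tgt is started inside the namespace.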
00:18:22.664 [2024-12-11 09:56:32.049195] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:18:22.664 [2024-12-11 09:56:32.049324] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:18:22.664 [2024-12-11 09:56:32.049430] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:18:22.664 [2024-12-11 09:56:32.049430] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:18:23.228 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:23.228 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:18:23.228 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:23.228 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:23.228 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:23.487 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:23.487 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:23.487 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.487 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:23.487 [2024-12-11 09:56:32.814220] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:23.487 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.487 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:23.487 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.487 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:23.487 Malloc0 00:18:23.487 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.487 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:23.487 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.487 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:23.487 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.487 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:23.487 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.487 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:23.487 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.487 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:18:23.487 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.487 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:23.487 [2024-12-11 09:56:32.858474] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:23.487 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.487 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:23.487 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:23.487 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:18:23.487 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:18:23.487 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:23.487 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:23.487 { 00:18:23.487 "params": { 00:18:23.487 "name": "Nvme$subsystem", 00:18:23.487 "trtype": "$TEST_TRANSPORT", 00:18:23.487 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:23.487 "adrfam": "ipv4", 00:18:23.487 "trsvcid": "$NVMF_PORT", 00:18:23.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:23.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:23.487 "hdgst": ${hdgst:-false}, 00:18:23.487 "ddgst": ${ddgst:-false} 00:18:23.487 }, 00:18:23.487 "method": "bdev_nvme_attach_controller" 00:18:23.487 } 00:18:23.487 EOF 00:18:23.487 )") 00:18:23.487 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:18:23.487 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:18:23.487 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:18:23.487 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:23.487 "params": { 00:18:23.487 "name": "Nvme1", 00:18:23.487 "trtype": "tcp", 00:18:23.487 "traddr": "10.0.0.2", 00:18:23.487 "adrfam": "ipv4", 00:18:23.487 "trsvcid": "4420", 00:18:23.487 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:23.487 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:23.487 "hdgst": false, 00:18:23.487 "ddgst": false 00:18:23.487 }, 00:18:23.487 "method": "bdev_nvme_attach_controller" 00:18:23.487 }' 00:18:23.487 [2024-12-11 09:56:32.912121] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
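gen_nvmf_target_json, traced above, expands one heredoc per subsystem into a bdev_nvme_attach_controller config entry, joins the entries with IFS=, and pretty-prints the result through jq; bdevio then receives it as --json /dev/fd/62, which is just the descriptor bash allocated for a process substitution (the full config also carries a surrounding bdev-subsystem wrapper that the trace does not show). A minimal sketch of the /dev/fd mechanism:

    # Sketch: hand generated JSON to a consumer with no temp file.
    show_json() { echo "reading $1"; cat "$1"; }
    show_json <(printf '%s\n' '{"method": "bdev_nvme_attach_controller"}')
    # bash substitutes a path such as /dev/fd/63 for <(...), which is
    # why the bdevio command line above shows --json /dev/fd/62.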
00:18:23.487 [2024-12-11 09:56:32.912168] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid88527 ] 00:18:23.487 [2024-12-11 09:56:32.998283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:23.487 [2024-12-11 09:56:33.046123] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:23.487 [2024-12-11 09:56:33.046243] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.487 [2024-12-11 09:56:33.046244] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:23.745 I/O targets: 00:18:23.745 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:23.745 00:18:23.745 00:18:23.745 CUnit - A unit testing framework for C - Version 2.1-3 00:18:23.745 http://cunit.sourceforge.net/ 00:18:23.745 00:18:23.745 00:18:23.745 Suite: bdevio tests on: Nvme1n1 00:18:23.745 Test: blockdev write read block ...passed 00:18:23.745 Test: blockdev write zeroes read block ...passed 00:18:23.745 Test: blockdev write zeroes read no split ...passed 00:18:23.745 Test: blockdev write zeroes read split ...passed 00:18:24.002 Test: blockdev write zeroes read split partial ...passed 00:18:24.002 Test: blockdev reset ...[2024-12-11 09:56:33.330908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:24.002 [2024-12-11 09:56:33.330981] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a70f00 (9): Bad file descriptor 00:18:24.002 [2024-12-11 09:56:33.346338] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:18:24.002 passed 00:18:24.002 Test: blockdev write read 8 blocks ...passed 00:18:24.002 Test: blockdev write read size > 128k ...passed 00:18:24.002 Test: blockdev write read invalid size ...passed 00:18:24.002 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:24.002 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:24.002 Test: blockdev write read max offset ...passed 00:18:24.002 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:24.002 Test: blockdev writev readv 8 blocks ...passed 00:18:24.002 Test: blockdev writev readv 30 x 1block ...passed 00:18:24.002 Test: blockdev writev readv block ...passed 00:18:24.002 Test: blockdev writev readv size > 128k ...passed 00:18:24.002 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:24.002 Test: blockdev comparev and writev ...[2024-12-11 09:56:33.555929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:24.002 [2024-12-11 09:56:33.555956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.002 [2024-12-11 09:56:33.555973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:24.002 [2024-12-11 09:56:33.555981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:24.002 [2024-12-11 09:56:33.556210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:24.002 [2024-12-11 09:56:33.556223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:24.002 [2024-12-11 09:56:33.556235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:24.003 [2024-12-11 09:56:33.556242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:24.003 [2024-12-11 09:56:33.556467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:24.003 [2024-12-11 09:56:33.556477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:24.003 [2024-12-11 09:56:33.556488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:24.003 [2024-12-11 09:56:33.556495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:24.003 [2024-12-11 09:56:33.556709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:24.003 [2024-12-11 09:56:33.556719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:24.003 [2024-12-11 09:56:33.556729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:24.003 [2024-12-11 09:56:33.556737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:24.260 passed 00:18:24.260 Test: blockdev nvme passthru rw ...passed 00:18:24.260 Test: blockdev nvme passthru vendor specific ...[2024-12-11 09:56:33.638554] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:24.260 [2024-12-11 09:56:33.638569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:24.260 [2024-12-11 09:56:33.638674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:24.260 [2024-12-11 09:56:33.638683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:24.260 [2024-12-11 09:56:33.638783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:24.260 [2024-12-11 09:56:33.638792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:24.260 [2024-12-11 09:56:33.638896] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:24.260 [2024-12-11 09:56:33.638906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:24.260 passed 00:18:24.260 Test: blockdev nvme admin passthru ...passed 00:18:24.260 Test: blockdev copy ...passed 00:18:24.260 00:18:24.260 Run Summary: Type Total Ran Passed Failed Inactive 00:18:24.260 suites 1 1 n/a 0 0 00:18:24.260 tests 23 23 23 0 0 00:18:24.260 asserts 152 152 152 0 n/a 00:18:24.260 00:18:24.260 Elapsed time = 0.979 seconds 00:18:24.518 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:24.518 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.518 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:24.518 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.518 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:24.518 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:24.518 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:24.518 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:18:24.518 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:24.518 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:18:24.518 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:24.518 09:56:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:24.518 rmmod nvme_tcp 00:18:24.518 rmmod nvme_fabrics 00:18:24.518 rmmod nvme_keyring 00:18:24.518 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:24.518 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:18:24.518 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:18:24.518 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 88283 ']' 00:18:24.518 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 88283 00:18:24.518 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 88283 ']' 00:18:24.518 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 88283 00:18:24.518 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:18:24.518 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:24.518 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88283 00:18:24.518 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:18:24.518 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:18:24.518 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88283' 00:18:24.518 killing process with pid 88283 00:18:24.518 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 88283 00:18:24.518 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 88283 00:18:25.085 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:25.085 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:25.085 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:25.085 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:18:25.085 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:18:25.085 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:18:25.085 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:25.085 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:25.085 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:25.085 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:25.085 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:25.085 09:56:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:26.992 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:26.992 00:18:26.992 real 0m11.530s 00:18:26.992 user 0m12.912s 00:18:26.992 sys 0m5.980s 00:18:26.992 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:26.992 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:18:26.992 ************************************ 00:18:26.992 END TEST nvmf_bdevio_no_huge 00:18:26.992 ************************************ 00:18:26.992 09:56:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:26.992 09:56:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:26.992 09:56:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:26.992 09:56:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:26.992 ************************************ 00:18:26.992 START TEST nvmf_tls 00:18:26.992 ************************************ 00:18:26.992 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:27.251 * Looking for test storage... 00:18:27.251 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:27.251 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:27.251 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:18:27.251 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:27.251 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:27.251 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:27.251 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:27.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:27.252 --rc genhtml_branch_coverage=1 00:18:27.252 --rc genhtml_function_coverage=1 00:18:27.252 --rc genhtml_legend=1 00:18:27.252 --rc geninfo_all_blocks=1 00:18:27.252 --rc geninfo_unexecuted_blocks=1 00:18:27.252 00:18:27.252 ' 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:27.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:27.252 --rc genhtml_branch_coverage=1 00:18:27.252 --rc genhtml_function_coverage=1 00:18:27.252 --rc genhtml_legend=1 00:18:27.252 --rc geninfo_all_blocks=1 00:18:27.252 --rc geninfo_unexecuted_blocks=1 00:18:27.252 00:18:27.252 ' 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:27.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:27.252 --rc genhtml_branch_coverage=1 00:18:27.252 --rc genhtml_function_coverage=1 00:18:27.252 --rc genhtml_legend=1 00:18:27.252 --rc geninfo_all_blocks=1 00:18:27.252 --rc geninfo_unexecuted_blocks=1 00:18:27.252 00:18:27.252 ' 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:27.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:27.252 --rc genhtml_branch_coverage=1 00:18:27.252 --rc genhtml_function_coverage=1 00:18:27.252 --rc genhtml_legend=1 00:18:27.252 --rc geninfo_all_blocks=1 00:18:27.252 --rc geninfo_unexecuted_blocks=1 00:18:27.252 00:18:27.252 ' 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
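The lt 1.15 2 / cmp_versions trace above is a field-wise numeric comparison: both version strings are split on the IFS=.-: set, then compared component by component, with the first differing field deciding (here 1 < 2 settles it immediately). A condensed sketch of the less-than case only (the real scripts/common.sh helper also implements the other operators):

    # Sketch: strictly-less-than over dotted version strings.
    lt_version() {
        local -a a b; local i
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first difference wins
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                                        # equal is not less
    }
    lt_version 1.15 2 && echo "1.15 predates 2"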
00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:27.252 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:27.252 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:18:27.253 09:56:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:33.821 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:33.821 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:18:33.821 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:33.821 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:33.821 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:33.821 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:33.821 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:33.821 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:18:33.821 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:33.821 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:18:33.821 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:18:33.821 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:18:33.821 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:18:33.821 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:18:33.821 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:18:33.821 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:33.821 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:33.821 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:33.821 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:33.821 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:33.821 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
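gather_supported_nvmf_pci_devs, starting above and continuing through the remaining Mellanox IDs below, whitelists NIC PCI IDs per vendor (0x8086:0x159b is the E810 that the trace then matches) and resolves each surviving PCI function to its kernel netdev through a sysfs glob, the same lookup that produced the 'Found net devices under 0000:af:00.0: cvl_0_0' lines earlier. The mapping itself is two lines of bash:

    # Sketch: PCI function -> network interface name(s), via sysfs.
    pci=0000:af:00.0                            # one E810 port from the trace
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")     # strip path, keep ifnames
    echo "Found net devices under $pci: ${pci_net_devs[*]}"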
00:18:33.821 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:33.821 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:33.821 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:33.821 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:33.821 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:33.821 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:33.821 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:33.821 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:33.821 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:33.821 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:33.821 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:33.821 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:33.821 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:33.821 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:33.821 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:33.822 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:33.822 Found net devices under 0000:af:00.0: cvl_0_0 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:33.822 Found net devices under 0000:af:00.1: cvl_0_1 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:33.822 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:33.822 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.374 ms 00:18:33.822 00:18:33.822 --- 10.0.0.2 ping statistics --- 00:18:33.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:33.822 rtt min/avg/max/mdev = 0.374/0.374/0.374/0.000 ms 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:33.822 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:33.822 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:18:33.822 00:18:33.822 --- 10.0.0.1 ping statistics --- 00:18:33.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:33.822 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:33.822 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:34.096 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:34.096 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:34.096 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:34.096 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.096 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=92545 00:18:34.096 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 92545 00:18:34.096 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:34.096 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 92545 ']' 00:18:34.096 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.096 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:34.096 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:34.096 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:34.096 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.096 [2024-12-11 09:56:43.482543] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
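Because the target above was launched with --wait-for-rpc, tls.sh gets a window to reconfigure the socket layer before subsystem initialization resumes: the trace that follows selects the ssl implementation, then round-trips tls_version (13, then 7) and enable_ktls through sock_impl_set_options/sock_impl_get_options, verifying each write with jq. Condensed, the exchange is:

    # The RPC round-trips traced below (rpc.py path abbreviated).
    rpc.py sock_set_default_impl -i ssl
    rpc.py sock_impl_set_options -i ssl --tls-version 13
    rpc.py sock_impl_get_options -i ssl | jq -r .tls_version    # expect 13
    rpc.py sock_impl_set_options -i ssl --enable-ktls
    rpc.py sock_impl_get_options -i ssl | jq -r .enable_ktls    # expect true
    rpc.py sock_impl_set_options -i ssl --disable-ktls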
00:18:34.096 [2024-12-11 09:56:43.482595] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:34.096 [2024-12-11 09:56:43.567055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.096 [2024-12-11 09:56:43.605080] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:34.096 [2024-12-11 09:56:43.605114] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:34.096 [2024-12-11 09:56:43.605122] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:34.096 [2024-12-11 09:56:43.605128] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:34.097 [2024-12-11 09:56:43.605134] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:34.097 [2024-12-11 09:56:43.605657] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:35.030 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:35.030 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:35.030 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:35.030 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:35.030 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:35.030 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:35.030 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:18:35.030 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:35.030 true 00:18:35.030 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:35.030 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:18:35.288 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:18:35.288 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:18:35.288 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:35.546 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:35.546 09:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:18:35.804 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:18:35.804 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:18:35.804 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:35.804 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:35.804 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:18:36.061 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:18:36.062 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:18:36.062 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:36.062 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:18:36.319 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:18:36.319 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:18:36.320 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:36.577 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:36.577 09:56:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:18:36.577 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:18:36.577 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:18:36.577 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:36.835 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:18:36.835 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:37.093 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:18:37.093 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:18:37.093 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:37.093 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:37.093 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:37.093 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:37.093 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:18:37.093 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:18:37.093 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:37.093 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:37.093 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:37.093 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:37.093 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:18:37.093 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:37.093 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:18:37.093 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:18:37.093 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:37.093 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:37.093 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:37.093 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.rwTZwzuK5Z 00:18:37.093 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:18:37.093 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.dKZyChZVxZ 00:18:37.093 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:37.093 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:37.093 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.rwTZwzuK5Z 00:18:37.093 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.dKZyChZVxZ 00:18:37.093 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:37.351 09:56:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:37.609 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.rwTZwzuK5Z 00:18:37.609 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.rwTZwzuK5Z 00:18:37.609 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:37.609 [2024-12-11 09:56:47.182965] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:37.867 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:37.867 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:38.125 [2024-12-11 09:56:47.559926] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:38.125 [2024-12-11 09:56:47.560161] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:38.125 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:38.383 malloc0 00:18:38.383 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:38.383 09:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.rwTZwzuK5Z 00:18:38.641 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:38.899 09:56:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.rwTZwzuK5Z 00:18:51.093 Initializing NVMe Controllers 00:18:51.093 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:51.093 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:51.093 Initialization complete. Launching workers. 00:18:51.093 ======================================================== 00:18:51.093 Latency(us) 00:18:51.093 Device Information : IOPS MiB/s Average min max 00:18:51.093 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16857.65 65.85 3796.63 863.99 6173.16 00:18:51.093 ======================================================== 00:18:51.093 Total : 16857.65 65.85 3796.63 863.99 6173.16 00:18:51.093 00:18:51.093 09:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rwTZwzuK5Z 00:18:51.093 09:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:51.093 09:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:51.093 09:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:51.093 09:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.rwTZwzuK5Z 00:18:51.093 09:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:51.093 09:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=95087 00:18:51.093 09:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:51.093 09:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:51.093 09:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 95087 /var/tmp/bdevperf.sock 00:18:51.093 09:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 95087 ']' 00:18:51.093 09:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:51.093 09:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:51.093 09:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
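While bdevperf comes up, it is worth collecting what the trace above just did on the target side. Stripped of the xtrace noise, the TLS target setup is the following RPC sequence (a condensed sketch of tls.sh's traced steps, not a verbatim excerpt; the key path is this run's mktemp value):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # force the ssl socket implementation and pin it to TLS 1.3 before
    # framework init (the earlier set/get round-trips with jq were probing
    # the same options, including the enable_ktls toggle)
    $rpc sock_set_default_impl -i ssl
    $rpc sock_impl_set_options -i ssl --tls-version 13
    $rpc framework_start_init

    # TCP transport, a subsystem with one malloc namespace, and a
    # TLS-enabled listener (-k is what triggers the "TLS support is
    # considered experimental" notice)
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

    # register the 0600 PSK file in the keyring and let host1 use it
    $rpc keyring_file_add_key key0 /tmp/tmp.rwTZwzuK5Z
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0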
00:18:51.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:51.093 09:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:51.093 09:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:51.093 [2024-12-11 09:56:58.500124] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:18:51.093 [2024-12-11 09:56:58.500172] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95087 ] 00:18:51.093 [2024-12-11 09:56:58.577857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.093 [2024-12-11 09:56:58.618068] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:51.093 09:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:51.093 09:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:51.093 09:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rwTZwzuK5Z 00:18:51.093 09:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:51.093 [2024-12-11 09:56:59.064988] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:51.093 TLSTESTn1 00:18:51.093 09:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:51.093 Running I/O for 10 seconds... 
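The ten-second run that starts here was reached through the initiator-side half of the flow, which mirrors the target setup: bdevperf is started idle (-z) on its own RPC socket, the same PSK file is registered in its keyring, and the controller is attached with --psk, which creates the TLSTESTn1 bdev. Condensed from the traced commands (same hedging as above, a sketch rather than a verbatim excerpt):

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc="$spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"

    $spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 &

    $rpc keyring_file_add_key key0 /tmp/tmp.rwTZwzuK5Z
    $rpc bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk key0

    # queued jobs only start once perform_tests is issued
    $spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests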
00:18:52.028 5272.00 IOPS, 20.59 MiB/s [2024-12-11T08:57:02.536Z] 5388.00 IOPS, 21.05 MiB/s [2024-12-11T08:57:03.470Z] 5430.00 IOPS, 21.21 MiB/s [2024-12-11T08:57:04.421Z] 5506.00 IOPS, 21.51 MiB/s [2024-12-11T08:57:05.448Z] 5523.20 IOPS, 21.57 MiB/s [2024-12-11T08:57:06.383Z] 5535.00 IOPS, 21.62 MiB/s [2024-12-11T08:57:07.325Z] 5506.00 IOPS, 21.51 MiB/s [2024-12-11T08:57:08.698Z] 5480.88 IOPS, 21.41 MiB/s [2024-12-11T08:57:09.264Z] 5453.11 IOPS, 21.30 MiB/s [2024-12-11T08:57:09.523Z] 5458.60 IOPS, 21.32 MiB/s 00:18:59.948 Latency(us) 00:18:59.948 [2024-12-11T08:57:09.523Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:59.948 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:59.948 Verification LBA range: start 0x0 length 0x2000 00:18:59.948 TLSTESTn1 : 10.02 5462.04 21.34 0.00 0.00 23399.11 6210.32 29584.82 00:18:59.948 [2024-12-11T08:57:09.523Z] =================================================================================================================== 00:18:59.948 [2024-12-11T08:57:09.523Z] Total : 5462.04 21.34 0.00 0.00 23399.11 6210.32 29584.82 00:18:59.948 { 00:18:59.948 "results": [ 00:18:59.948 { 00:18:59.948 "job": "TLSTESTn1", 00:18:59.948 "core_mask": "0x4", 00:18:59.948 "workload": "verify", 00:18:59.948 "status": "finished", 00:18:59.948 "verify_range": { 00:18:59.948 "start": 0, 00:18:59.948 "length": 8192 00:18:59.948 }, 00:18:59.948 "queue_depth": 128, 00:18:59.948 "io_size": 4096, 00:18:59.948 "runtime": 10.017129, 00:18:59.948 "iops": 5462.044064721538, 00:18:59.948 "mibps": 21.33610962781851, 00:18:59.948 "io_failed": 0, 00:18:59.948 "io_timeout": 0, 00:18:59.948 "avg_latency_us": 23399.108176787693, 00:18:59.948 "min_latency_us": 6210.31619047619, 00:18:59.948 "max_latency_us": 29584.822857142855 00:18:59.948 } 00:18:59.948 ], 00:18:59.948 "core_count": 1 00:18:59.948 } 00:18:59.948 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:59.948 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 95087 00:18:59.948 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 95087 ']' 00:18:59.948 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 95087 00:18:59.948 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:59.948 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:59.948 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95087 00:18:59.948 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:59.948 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:59.948 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95087' 00:18:59.948 killing process with pid 95087 00:18:59.948 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 95087 00:18:59.948 Received shutdown signal, test time was about 10.000000 seconds 00:18:59.948 00:18:59.948 Latency(us) 00:18:59.948 [2024-12-11T08:57:09.523Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:59.948 [2024-12-11T08:57:09.523Z] 
=================================================================================================================== 00:18:59.948 [2024-12-11T08:57:09.523Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:59.948 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 95087 00:18:59.948 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dKZyChZVxZ 00:18:59.948 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:59.948 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dKZyChZVxZ 00:18:59.948 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:59.948 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:59.948 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:59.948 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:59.948 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dKZyChZVxZ 00:18:59.948 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:59.948 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:59.948 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:59.948 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.dKZyChZVxZ 00:18:59.948 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:00.207 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=97335 00:19:00.207 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:00.207 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:00.207 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 97335 /var/tmp/bdevperf.sock 00:19:00.207 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 97335 ']' 00:19:00.207 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:00.207 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:00.207 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:00.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
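From this point on the test flips to negative cases, and the NOT/valid_exec_arg machinery being traced here inverts the usual assertion: the wrapped run_bdevperf must fail, because the initiator is about to register the second key file (/tmp/tmp.dKZyChZVxZ) while the target only accepts key0. The helper itself lives in autotest_common.sh and is not shown in this excerpt; a minimal sketch of the idea, omitting the real helper's extra bookkeeping (such as the es > 128 signal check visible in the trace):

    NOT() {
        local es=0
        "$@" || es=$?
        # succeed only if the wrapped command failed
        (( es != 0 ))
    }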
00:19:00.207 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:00.207 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:00.207 [2024-12-11 09:57:09.570593] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:19:00.207 [2024-12-11 09:57:09.570640] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97335 ] 00:19:00.207 [2024-12-11 09:57:09.649948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.207 [2024-12-11 09:57:09.690544] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:00.207 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:00.207 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:00.207 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.dKZyChZVxZ 00:19:00.466 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:00.724 [2024-12-11 09:57:10.150733] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:00.724 [2024-12-11 09:57:10.157650] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:00.724 [2024-12-11 09:57:10.158014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e7770 (107): Transport endpoint is not connected 00:19:00.724 [2024-12-11 09:57:10.159008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e7770 (9): Bad file descriptor 00:19:00.724 [2024-12-11 09:57:10.160010] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:00.724 [2024-12-11 09:57:10.160018] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:00.724 [2024-12-11 09:57:10.160025] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:00.724 [2024-12-11 09:57:10.160032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:19:00.724 request: 00:19:00.724 { 00:19:00.724 "name": "TLSTEST", 00:19:00.724 "trtype": "tcp", 00:19:00.724 "traddr": "10.0.0.2", 00:19:00.724 "adrfam": "ipv4", 00:19:00.724 "trsvcid": "4420", 00:19:00.724 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:00.724 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:00.724 "prchk_reftag": false, 00:19:00.724 "prchk_guard": false, 00:19:00.724 "hdgst": false, 00:19:00.724 "ddgst": false, 00:19:00.724 "psk": "key0", 00:19:00.724 "allow_unrecognized_csi": false, 00:19:00.724 "method": "bdev_nvme_attach_controller", 00:19:00.724 "req_id": 1 00:19:00.724 } 00:19:00.724 Got JSON-RPC error response 00:19:00.724 response: 00:19:00.724 { 00:19:00.724 "code": -5, 00:19:00.724 "message": "Input/output error" 00:19:00.724 } 00:19:00.724 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 97335 00:19:00.724 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 97335 ']' 00:19:00.724 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 97335 00:19:00.724 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:00.724 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:00.724 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97335 00:19:00.724 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:00.724 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:00.724 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97335' 00:19:00.724 killing process with pid 97335 00:19:00.724 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 97335 00:19:00.724 Received shutdown signal, test time was about 10.000000 seconds 00:19:00.724 00:19:00.724 Latency(us) 00:19:00.724 [2024-12-11T08:57:10.299Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:00.724 [2024-12-11T08:57:10.299Z] =================================================================================================================== 00:19:00.724 [2024-12-11T08:57:10.299Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:00.724 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 97335 00:19:00.982 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:00.982 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:00.982 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:00.982 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:00.982 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:00.982 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.rwTZwzuK5Z 00:19:00.982 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:00.982 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.rwTZwzuK5Z 
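Both key files in play hold PSKs in the NVMe TLS interchange format produced earlier by format_interchange_psk/format_key. The inline python those helpers feed through `python -` is not visible in this trace, so the reconstruction below is an assumption; it is, however, consistent with the traced shell variables and produces strings of the observed shape: prefix, two-digit hash field, then base64 of the key characters with a CRC32 appended (here assumed little-endian).

    format_key() {
        local prefix=$1 key=$2 digest=$3
        # assumption: the key material is taken as its literal ASCII
        # characters, matching the base64 payloads seen in the trace
        python3 -c 'import sys,base64,zlib; k=sys.argv[2].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("%s:%02x:%s:" % (sys.argv[1], int(sys.argv[3]), base64.b64encode(k+crc).decode()), end="")' "$prefix" "$key" "$digest"
    }

    format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1
    # expected, per the trace:
    # NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: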
00:19:00.982 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:00.982 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:00.982 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:00.982 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:00.982 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.rwTZwzuK5Z 00:19:00.982 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:00.982 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:00.982 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:00.982 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.rwTZwzuK5Z 00:19:00.982 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:00.982 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=97432 00:19:00.982 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:00.982 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:00.982 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 97432 /var/tmp/bdevperf.sock 00:19:00.982 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 97432 ']' 00:19:00.982 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:00.982 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:00.982 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:00.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:00.982 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:00.982 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:00.982 [2024-12-11 09:57:10.429965] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
00:19:00.982 [2024-12-11 09:57:10.430013] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97432 ] 00:19:00.982 [2024-12-11 09:57:10.506425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.982 [2024-12-11 09:57:10.547106] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:01.240 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:01.240 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:01.240 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rwTZwzuK5Z 00:19:01.497 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:19:01.497 [2024-12-11 09:57:10.990626] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:01.497 [2024-12-11 09:57:10.999698] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:01.497 [2024-12-11 09:57:10.999721] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:01.497 [2024-12-11 09:57:10.999745] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:01.497 [2024-12-11 09:57:10.999848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x216d770 (107): Transport endpoint is not connected 00:19:01.497 [2024-12-11 09:57:11.000841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x216d770 (9): Bad file descriptor 00:19:01.497 [2024-12-11 09:57:11.001843] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:01.497 [2024-12-11 09:57:11.001851] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:01.497 [2024-12-11 09:57:11.001858] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:01.497 [2024-12-11 09:57:11.001866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:19:01.497 request: 00:19:01.497 { 00:19:01.497 "name": "TLSTEST", 00:19:01.497 "trtype": "tcp", 00:19:01.497 "traddr": "10.0.0.2", 00:19:01.497 "adrfam": "ipv4", 00:19:01.497 "trsvcid": "4420", 00:19:01.497 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:01.497 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:01.497 "prchk_reftag": false, 00:19:01.497 "prchk_guard": false, 00:19:01.497 "hdgst": false, 00:19:01.497 "ddgst": false, 00:19:01.497 "psk": "key0", 00:19:01.497 "allow_unrecognized_csi": false, 00:19:01.497 "method": "bdev_nvme_attach_controller", 00:19:01.497 "req_id": 1 00:19:01.497 } 00:19:01.497 Got JSON-RPC error response 00:19:01.497 response: 00:19:01.497 { 00:19:01.497 "code": -5, 00:19:01.497 "message": "Input/output error" 00:19:01.497 } 00:19:01.497 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 97432 00:19:01.497 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 97432 ']' 00:19:01.497 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 97432 00:19:01.497 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:01.497 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:01.497 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97432 00:19:01.755 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:01.755 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:01.755 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97432' 00:19:01.755 killing process with pid 97432 00:19:01.755 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 97432 00:19:01.755 Received shutdown signal, test time was about 10.000000 seconds 00:19:01.756 00:19:01.756 Latency(us) 00:19:01.756 [2024-12-11T08:57:11.331Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.756 [2024-12-11T08:57:11.331Z] =================================================================================================================== 00:19:01.756 [2024-12-11T08:57:11.331Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:01.756 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 97432 00:19:01.756 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:01.756 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:01.756 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:01.756 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:01.756 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:01.756 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.rwTZwzuK5Z 00:19:01.756 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:01.756 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.rwTZwzuK5Z 
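The two identity-mismatch cases (host2 with cnode1 above, and cnode2 with host1 next) fail for the same underlying reason, spelled out by the target-side errors: the TLS PSK identity the server resolves is built from both NQNs, so a key registered for the (host1, cnode1) pair cannot be found once either NQN changes. Reconstructing the logged identity by hand (the reading of the leading fields, a version digit, 'R' for a retained hash, and a hash indicator matching the key's 01 digest field, is an inference, not something this trace states):

    hostnqn=nqn.2016-06.io.spdk:host2
    subnqn=nqn.2016-06.io.spdk:cnode1
    printf 'NVMe0R01 %s %s\n' "$hostnqn" "$subnqn"
    # -> NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
    #    exactly the identity the target reported no PSK for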
00:19:01.756 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:01.756 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:01.756 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:01.756 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:01.756 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.rwTZwzuK5Z 00:19:01.756 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:01.756 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:01.756 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:01.756 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.rwTZwzuK5Z 00:19:01.756 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:01.756 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=97660 00:19:01.756 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:01.756 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:01.756 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 97660 /var/tmp/bdevperf.sock 00:19:01.756 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 97660 ']' 00:19:01.756 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:01.756 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:01.756 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:01.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:01.756 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:01.756 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:01.756 [2024-12-11 09:57:11.280263] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
00:19:01.756 [2024-12-11 09:57:11.280311] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97660 ] 00:19:02.013 [2024-12-11 09:57:11.358297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.013 [2024-12-11 09:57:11.396719] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:02.013 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:02.013 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:02.013 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rwTZwzuK5Z 00:19:02.271 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:02.528 [2024-12-11 09:57:11.852381] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:02.528 [2024-12-11 09:57:11.861965] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:02.528 [2024-12-11 09:57:11.861989] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:02.528 [2024-12-11 09:57:11.862029] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:02.528 [2024-12-11 09:57:11.862677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2344770 (107): Transport endpoint is not connected 00:19:02.528 [2024-12-11 09:57:11.863670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2344770 (9): Bad file descriptor 00:19:02.528 [2024-12-11 09:57:11.864671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:19:02.528 [2024-12-11 09:57:11.864682] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:02.528 [2024-12-11 09:57:11.864690] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:19:02.528 [2024-12-11 09:57:11.864698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:19:02.528 request: 00:19:02.528 { 00:19:02.528 "name": "TLSTEST", 00:19:02.528 "trtype": "tcp", 00:19:02.528 "traddr": "10.0.0.2", 00:19:02.528 "adrfam": "ipv4", 00:19:02.528 "trsvcid": "4420", 00:19:02.528 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:02.528 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:02.528 "prchk_reftag": false, 00:19:02.528 "prchk_guard": false, 00:19:02.528 "hdgst": false, 00:19:02.528 "ddgst": false, 00:19:02.528 "psk": "key0", 00:19:02.528 "allow_unrecognized_csi": false, 00:19:02.528 "method": "bdev_nvme_attach_controller", 00:19:02.528 "req_id": 1 00:19:02.528 } 00:19:02.528 Got JSON-RPC error response 00:19:02.528 response: 00:19:02.528 { 00:19:02.528 "code": -5, 00:19:02.528 "message": "Input/output error" 00:19:02.528 } 00:19:02.528 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 97660 00:19:02.528 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 97660 ']' 00:19:02.528 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 97660 00:19:02.528 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:02.528 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:02.528 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97660 00:19:02.528 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:02.528 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:02.528 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97660' 00:19:02.528 killing process with pid 97660 00:19:02.528 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 97660 00:19:02.528 Received shutdown signal, test time was about 10.000000 seconds 00:19:02.528 00:19:02.528 Latency(us) 00:19:02.528 [2024-12-11T08:57:12.103Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:02.528 [2024-12-11T08:57:12.103Z] =================================================================================================================== 00:19:02.528 [2024-12-11T08:57:12.103Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:02.528 09:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 97660 00:19:02.528 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:02.528 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:02.528 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:02.528 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:02.528 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:02.528 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:02.528 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:02.528 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:02.528 09:57:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:02.528 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:02.528 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:02.528 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:02.528 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:02.528 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:02.528 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:02.528 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:02.528 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:02.528 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:02.528 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=97847 00:19:02.528 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:02.528 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:02.528 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 97847 /var/tmp/bdevperf.sock 00:19:02.528 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 97847 ']' 00:19:02.528 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:02.528 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:02.528 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:02.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:02.528 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:02.528 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:02.785 [2024-12-11 09:57:12.127014] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
00:19:02.785 [2024-12-11 09:57:12.127062] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97847 ] 00:19:02.785 [2024-12-11 09:57:12.206495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.785 [2024-12-11 09:57:12.246725] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:02.785 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:02.785 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:02.785 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:19:03.042 [2024-12-11 09:57:12.509148] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:19:03.042 [2024-12-11 09:57:12.509174] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:03.042 request: 00:19:03.042 { 00:19:03.042 "name": "key0", 00:19:03.042 "path": "", 00:19:03.042 "method": "keyring_file_add_key", 00:19:03.042 "req_id": 1 00:19:03.042 } 00:19:03.042 Got JSON-RPC error response 00:19:03.042 response: 00:19:03.042 { 00:19:03.042 "code": -1, 00:19:03.042 "message": "Operation not permitted" 00:19:03.043 } 00:19:03.043 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:03.300 [2024-12-11 09:57:12.689703] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:03.300 [2024-12-11 09:57:12.689734] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:03.300 request: 00:19:03.300 { 00:19:03.300 "name": "TLSTEST", 00:19:03.300 "trtype": "tcp", 00:19:03.300 "traddr": "10.0.0.2", 00:19:03.300 "adrfam": "ipv4", 00:19:03.300 "trsvcid": "4420", 00:19:03.300 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:03.300 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:03.300 "prchk_reftag": false, 00:19:03.300 "prchk_guard": false, 00:19:03.300 "hdgst": false, 00:19:03.300 "ddgst": false, 00:19:03.300 "psk": "key0", 00:19:03.300 "allow_unrecognized_csi": false, 00:19:03.300 "method": "bdev_nvme_attach_controller", 00:19:03.300 "req_id": 1 00:19:03.300 } 00:19:03.300 Got JSON-RPC error response 00:19:03.300 response: 00:19:03.300 { 00:19:03.300 "code": -126, 00:19:03.300 "message": "Required key not available" 00:19:03.300 } 00:19:03.300 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 97847 00:19:03.300 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 97847 ']' 00:19:03.300 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 97847 00:19:03.300 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:03.300 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:03.300 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97847 
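This empty-path case fails twice, at two different layers, and both error codes appear above. Replayed in isolation (commands copied from the trace, responses summarized in comments):

    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"

    # keyring_file_add_key rejects the empty (non-absolute) path outright:
    $rpc keyring_file_add_key key0 ''
    # -> code -1, "Operation not permitted"

    # so key0 never exists, and the attach fails later with a key-specific
    # error instead of a TLS handshake failure:
    $rpc bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk key0
    # -> code -126, "Required key not available"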
00:19:03.300 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:03.300 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:03.300 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97847' 00:19:03.300 killing process with pid 97847 00:19:03.300 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 97847 00:19:03.300 Received shutdown signal, test time was about 10.000000 seconds 00:19:03.300 00:19:03.300 Latency(us) 00:19:03.300 [2024-12-11T08:57:12.875Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:03.300 [2024-12-11T08:57:12.875Z] =================================================================================================================== 00:19:03.300 [2024-12-11T08:57:12.875Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:03.300 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 97847 00:19:03.558 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:03.558 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:03.558 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:03.558 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:03.558 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:03.558 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 92545 00:19:03.558 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 92545 ']' 00:19:03.558 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 92545 00:19:03.558 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:03.558 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:03.558 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92545 00:19:03.558 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:03.558 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:03.558 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92545' 00:19:03.558 killing process with pid 92545 00:19:03.558 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 92545 00:19:03.558 09:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 92545 00:19:03.816 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:03.817 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:03.817 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:03.817 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:03.817 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:03.817 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:19:03.817 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:03.817 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:03.817 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:19:03.817 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.UcyKdYLWBg 00:19:03.817 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:03.817 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.UcyKdYLWBg 00:19:03.817 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:19:03.817 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:03.817 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:03.817 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:03.817 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:03.817 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=97931 00:19:03.817 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 97931 00:19:03.817 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 97931 ']' 00:19:03.817 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:03.817 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:03.817 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:03.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:03.817 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:03.817 09:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:03.817 [2024-12-11 09:57:13.249989] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:19:03.817 [2024-12-11 09:57:13.250039] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:03.817 [2024-12-11 09:57:13.335941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.817 [2024-12-11 09:57:13.376566] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:03.817 [2024-12-11 09:57:13.376599] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:03.817 [2024-12-11 09:57:13.376607] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:03.817 [2024-12-11 09:57:13.376614] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:03.817 [2024-12-11 09:57:13.376619] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:03.817 [2024-12-11 09:57:13.377139] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:04.749 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:04.749 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:04.749 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:04.749 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:04.749 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.749 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:04.750 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.UcyKdYLWBg 00:19:04.750 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.UcyKdYLWBg 00:19:04.750 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:04.750 [2024-12-11 09:57:14.283003] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:04.750 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:05.007 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:05.265 [2024-12-11 09:57:14.684028] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:05.265 [2024-12-11 09:57:14.684249] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:05.265 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:05.522 malloc0 00:19:05.522 09:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:05.780 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.UcyKdYLWBg 00:19:05.780 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:06.037 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UcyKdYLWBg 00:19:06.037 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:19:06.037 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:06.037 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:06.037 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.UcyKdYLWBg 00:19:06.037 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:06.037 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:06.037 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=98400 00:19:06.037 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:06.037 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 98400 /var/tmp/bdevperf.sock 00:19:06.037 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 98400 ']' 00:19:06.037 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:06.037 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:06.037 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:06.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:06.037 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:06.038 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:06.038 [2024-12-11 09:57:15.539768] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
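A note on the key material generated at tls.sh@160 above: format_interchange_psk wraps the configured secret in the NVMe/TCP TLS PSK interchange framing NVMeTLSkey-1:<hh>:<base64>:, where <hh> selects the PSK hash (01 for SHA-256, 02 for SHA-384) and the base64 payload carries the key bytes with a CRC-32 appended. A minimal re-creation under that assumption (little-endian CRC per the interchange format; the python body is a sketch, not the verbatim nvmf/common.sh@733 script):

key=00112233445566778899aabbccddeeff0011223344556677
digest=2
python3 - "$key" "$digest" <<'PY'
import base64, sys, zlib
# Append a CRC-32 of the key bytes (little endian), then base64 key+crc.
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")
print("NVMeTLSkey-1:{:02}:{}:".format(int(sys.argv[2]), base64.b64encode(key + crc).decode()))
PY

Run against the inputs traced above, this should reproduce the key_long the test writes to /tmp/tmp.UcyKdYLWBg, i.e. NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: as shown in the log.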
00:19:06.038 [2024-12-11 09:57:15.539818] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98400 ] 00:19:06.295 [2024-12-11 09:57:15.617097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.295 [2024-12-11 09:57:15.656263] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:06.295 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:06.295 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:06.295 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UcyKdYLWBg 00:19:06.552 09:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:06.552 [2024-12-11 09:57:16.119411] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:06.810 TLSTESTn1 00:19:06.810 09:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:06.810 Running I/O for 10 seconds... 00:19:09.114 5478.00 IOPS, 21.40 MiB/s [2024-12-11T08:57:19.626Z] 5579.50 IOPS, 21.79 MiB/s [2024-12-11T08:57:20.558Z] 5546.00 IOPS, 21.66 MiB/s [2024-12-11T08:57:21.491Z] 5594.00 IOPS, 21.85 MiB/s [2024-12-11T08:57:22.423Z] 5608.60 IOPS, 21.91 MiB/s [2024-12-11T08:57:23.355Z] 5625.33 IOPS, 21.97 MiB/s [2024-12-11T08:57:24.726Z] 5618.57 IOPS, 21.95 MiB/s [2024-12-11T08:57:25.663Z] 5620.25 IOPS, 21.95 MiB/s [2024-12-11T08:57:26.594Z] 5602.44 IOPS, 21.88 MiB/s [2024-12-11T08:57:26.594Z] 5619.50 IOPS, 21.95 MiB/s 00:19:17.020 Latency(us) 00:19:17.020 [2024-12-11T08:57:26.595Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:17.020 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:17.020 Verification LBA range: start 0x0 length 0x2000 00:19:17.020 TLSTESTn1 : 10.01 5625.16 21.97 0.00 0.00 22721.84 4618.73 22219.82 00:19:17.020 [2024-12-11T08:57:26.595Z] =================================================================================================================== 00:19:17.020 [2024-12-11T08:57:26.595Z] Total : 5625.16 21.97 0.00 0.00 22721.84 4618.73 22219.82 00:19:17.020 { 00:19:17.020 "results": [ 00:19:17.020 { 00:19:17.020 "job": "TLSTESTn1", 00:19:17.020 "core_mask": "0x4", 00:19:17.020 "workload": "verify", 00:19:17.020 "status": "finished", 00:19:17.020 "verify_range": { 00:19:17.020 "start": 0, 00:19:17.020 "length": 8192 00:19:17.020 }, 00:19:17.020 "queue_depth": 128, 00:19:17.020 "io_size": 4096, 00:19:17.020 "runtime": 10.012521, 00:19:17.020 "iops": 5625.156741244288, 00:19:17.020 "mibps": 21.9732685204855, 00:19:17.020 "io_failed": 0, 00:19:17.020 "io_timeout": 0, 00:19:17.020 "avg_latency_us": 22721.844821933748, 00:19:17.020 "min_latency_us": 4618.727619047619, 00:19:17.020 "max_latency_us": 22219.82476190476 00:19:17.020 } 00:19:17.020 ], 00:19:17.020 
"core_count": 1 00:19:17.020 } 00:19:17.020 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:17.020 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 98400 00:19:17.020 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 98400 ']' 00:19:17.020 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 98400 00:19:17.020 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:17.020 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:17.020 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98400 00:19:17.020 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:17.020 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:17.020 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98400' 00:19:17.020 killing process with pid 98400 00:19:17.020 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 98400 00:19:17.020 Received shutdown signal, test time was about 10.000000 seconds 00:19:17.020 00:19:17.020 Latency(us) 00:19:17.020 [2024-12-11T08:57:26.595Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:17.020 [2024-12-11T08:57:26.595Z] =================================================================================================================== 00:19:17.020 [2024-12-11T08:57:26.595Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:17.020 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 98400 00:19:17.020 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.UcyKdYLWBg 00:19:17.020 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UcyKdYLWBg 00:19:17.020 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:17.020 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UcyKdYLWBg 00:19:17.020 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:17.020 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:17.020 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:17.020 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:17.020 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UcyKdYLWBg 00:19:17.020 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:17.020 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:17.020 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:17.020 09:57:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.UcyKdYLWBg 00:19:17.020 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:17.020 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=100198 00:19:17.020 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:17.020 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:17.020 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 100198 /var/tmp/bdevperf.sock 00:19:17.020 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 100198 ']' 00:19:17.020 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:17.020 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:17.020 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:17.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:17.020 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:17.020 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:17.277 [2024-12-11 09:57:26.629741] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
00:19:17.277 [2024-12-11 09:57:26.629789] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100198 ] 00:19:17.277 [2024-12-11 09:57:26.703599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.277 [2024-12-11 09:57:26.740132] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:17.277 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:17.277 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:17.277 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UcyKdYLWBg 00:19:17.534 [2024-12-11 09:57:27.007328] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.UcyKdYLWBg': 0100666 00:19:17.534 [2024-12-11 09:57:27.007359] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:17.534 request: 00:19:17.534 { 00:19:17.534 "name": "key0", 00:19:17.534 "path": "/tmp/tmp.UcyKdYLWBg", 00:19:17.534 "method": "keyring_file_add_key", 00:19:17.534 "req_id": 1 00:19:17.534 } 00:19:17.534 Got JSON-RPC error response 00:19:17.534 response: 00:19:17.534 { 00:19:17.534 "code": -1, 00:19:17.534 "message": "Operation not permitted" 00:19:17.534 } 00:19:17.534 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:17.792 [2024-12-11 09:57:27.195893] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:17.792 [2024-12-11 09:57:27.195924] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:17.792 request: 00:19:17.792 { 00:19:17.792 "name": "TLSTEST", 00:19:17.792 "trtype": "tcp", 00:19:17.792 "traddr": "10.0.0.2", 00:19:17.792 "adrfam": "ipv4", 00:19:17.792 "trsvcid": "4420", 00:19:17.792 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:17.792 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:17.792 "prchk_reftag": false, 00:19:17.792 "prchk_guard": false, 00:19:17.792 "hdgst": false, 00:19:17.792 "ddgst": false, 00:19:17.792 "psk": "key0", 00:19:17.792 "allow_unrecognized_csi": false, 00:19:17.792 "method": "bdev_nvme_attach_controller", 00:19:17.792 "req_id": 1 00:19:17.792 } 00:19:17.792 Got JSON-RPC error response 00:19:17.792 response: 00:19:17.792 { 00:19:17.792 "code": -126, 00:19:17.792 "message": "Required key not available" 00:19:17.792 } 00:19:17.792 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 100198 00:19:17.792 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 100198 ']' 00:19:17.792 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 100198 00:19:17.792 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:17.792 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:17.792 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100198 00:19:17.792 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:17.792 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:17.792 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100198' 00:19:17.792 killing process with pid 100198 00:19:17.792 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 100198 00:19:17.792 Received shutdown signal, test time was about 10.000000 seconds 00:19:17.792 00:19:17.792 Latency(us) 00:19:17.792 [2024-12-11T08:57:27.367Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:17.792 [2024-12-11T08:57:27.367Z] =================================================================================================================== 00:19:17.792 [2024-12-11T08:57:27.367Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:17.792 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 100198 00:19:18.050 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:18.050 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:18.050 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:18.050 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:18.050 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:18.050 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 97931 00:19:18.050 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 97931 ']' 00:19:18.050 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 97931 00:19:18.050 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:18.050 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:18.050 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97931 00:19:18.050 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:18.050 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:18.050 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97931' 00:19:18.050 killing process with pid 97931 00:19:18.050 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 97931 00:19:18.050 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 97931 00:19:18.308 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:19:18.308 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:18.308 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:18.308 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:18.308 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=100442 
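The failure sequence just traced is the point of tls.sh@171-172: after chmod 0666, keyring_file refuses the key file (keyring.c logs "Invalid permissions for key file '/tmp/tmp.UcyKdYLWBg': 0100666" and keyring_file_add_key returns -1 / Operation not permitted), so the subsequent bdev_nvme_attach_controller on the bdevperf socket cannot resolve psk key0 and fails with -126 / Required key not available, which the NOT wrapper converts into a pass. The invariant being enforced, illustrated in shell (an illustration of the rule, not the keyring.c source): any group or other permission bit disqualifies the key file.

key=/tmp/tmp.UcyKdYLWBg
mode=$(stat -c '%a' "$key")
# Reject the key if group/other have any access, mirroring the 0600-only rule;
# the 0100 prefix mimics the log's st_mode rendering for a regular file.
if (( 8#$mode & 8#077 )); then
    echo "Invalid permissions for key file '$key': 0100$mode" >&2
    exit 1
fi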
00:19:18.308 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 100442 00:19:18.308 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:18.308 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 100442 ']' 00:19:18.308 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:18.308 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:18.308 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:18.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:18.308 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:18.308 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:18.308 [2024-12-11 09:57:27.705412] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:19:18.308 [2024-12-11 09:57:27.705461] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:18.308 [2024-12-11 09:57:27.789914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.308 [2024-12-11 09:57:27.828658] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:18.308 [2024-12-11 09:57:27.828696] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:18.308 [2024-12-11 09:57:27.828703] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:18.308 [2024-12-11 09:57:27.828709] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:18.308 [2024-12-11 09:57:27.828714] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
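For reference, the setup_nvmf_tgt helper first exercised at tls.sh@166 above, and about to be re-run under NOT at tls.sh@178, reduces to the following RPC sequence (commands copied from the tls.sh@50-59 trace, with the long rpc.py path abbreviated to $RPC):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
# -k marks the listener as TLS-capable, hence the "TLS support is considered
# experimental" notice in the trace.
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC keyring_file_add_key key0 /tmp/tmp.UcyKdYLWBg
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0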
00:19:18.308 [2024-12-11 09:57:27.829266] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:19.242 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:19.242 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:19.242 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:19.242 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:19.242 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:19.242 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:19.242 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.UcyKdYLWBg 00:19:19.242 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:19.242 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.UcyKdYLWBg 00:19:19.242 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:19:19.242 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:19.242 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:19:19.242 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:19.242 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.UcyKdYLWBg 00:19:19.242 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.UcyKdYLWBg 00:19:19.242 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:19.242 [2024-12-11 09:57:28.747674] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:19.242 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:19.500 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:19.758 [2024-12-11 09:57:29.128649] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:19.758 [2024-12-11 09:57:29.128865] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:19.758 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:19.758 malloc0 00:19:20.015 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:20.015 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.UcyKdYLWBg 00:19:20.272 [2024-12-11 
09:57:29.710106] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.UcyKdYLWBg': 0100666 00:19:20.272 [2024-12-11 09:57:29.710129] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:20.272 request: 00:19:20.272 { 00:19:20.272 "name": "key0", 00:19:20.272 "path": "/tmp/tmp.UcyKdYLWBg", 00:19:20.272 "method": "keyring_file_add_key", 00:19:20.272 "req_id": 1 00:19:20.272 } 00:19:20.272 Got JSON-RPC error response 00:19:20.272 response: 00:19:20.272 { 00:19:20.272 "code": -1, 00:19:20.272 "message": "Operation not permitted" 00:19:20.272 } 00:19:20.272 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:20.529 [2024-12-11 09:57:29.898613] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:19:20.529 [2024-12-11 09:57:29.898652] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:20.529 request: 00:19:20.529 { 00:19:20.529 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:20.529 "host": "nqn.2016-06.io.spdk:host1", 00:19:20.529 "psk": "key0", 00:19:20.529 "method": "nvmf_subsystem_add_host", 00:19:20.529 "req_id": 1 00:19:20.529 } 00:19:20.529 Got JSON-RPC error response 00:19:20.529 response: 00:19:20.529 { 00:19:20.529 "code": -32603, 00:19:20.529 "message": "Internal error" 00:19:20.529 } 00:19:20.529 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:20.529 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:20.529 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:20.529 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:20.529 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 100442 00:19:20.529 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 100442 ']' 00:19:20.529 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 100442 00:19:20.529 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:20.529 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:20.529 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100442 00:19:20.529 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:20.529 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:20.529 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100442' 00:19:20.529 killing process with pid 100442 00:19:20.529 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 100442 00:19:20.529 09:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 100442 00:19:20.787 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.UcyKdYLWBg 00:19:20.787 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:19:20.787 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:20.787 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:20.787 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:20.787 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=100750 00:19:20.787 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:20.787 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 100750 00:19:20.787 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 100750 ']' 00:19:20.787 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:20.787 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:20.787 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:20.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:20.787 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:20.787 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:20.787 [2024-12-11 09:57:30.195412] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:19:20.787 [2024-12-11 09:57:30.195459] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:20.787 [2024-12-11 09:57:30.279607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.787 [2024-12-11 09:57:30.316628] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:20.787 [2024-12-11 09:57:30.316664] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:20.787 [2024-12-11 09:57:30.316671] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:20.787 [2024-12-11 09:57:30.316677] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:20.787 [2024-12-11 09:57:30.316683] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
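The failed run above (tls.sh@178) is the target-side counterpart of the earlier bdevperf test: with the key file still 0666, keyring_file_add_key fails inside setup_nvmf_tgt, nvmf_subsystem_add_host then reports -32603 / Internal error because key0 never entered the keyring, and the NOT wrapper turns the non-zero exit into a pass; tls.sh@182's chmod 0600 restores owner-only access so the setup at tls.sh@186 below succeeds. A simplified equivalent of the NOT helper (the version traced from autotest_common.sh also inspects exit codes above 128 to account for signal deaths; that handling is omitted here):

# Succeed only when the wrapped command fails.
NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))
}

chmod 0666 /tmp/tmp.UcyKdYLWBg
NOT setup_nvmf_tgt /tmp/tmp.UcyKdYLWBg   # passes: add_host fails without key0
chmod 0600 /tmp/tmp.UcyKdYLWBg           # owner-only again; the next setup succeeds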
00:19:20.787 [2024-12-11 09:57:30.317240] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:21.045 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:21.045 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:21.045 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:21.045 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:21.045 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:21.045 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:21.045 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.UcyKdYLWBg 00:19:21.045 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.UcyKdYLWBg 00:19:21.045 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:21.303 [2024-12-11 09:57:30.634159] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:21.303 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:21.303 09:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:21.560 [2024-12-11 09:57:31.031183] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:21.560 [2024-12-11 09:57:31.031400] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:21.560 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:21.817 malloc0 00:19:21.818 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:22.076 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.UcyKdYLWBg 00:19:22.076 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:22.334 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=101173 00:19:22.334 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:22.334 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:22.334 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 101173 /var/tmp/bdevperf.sock 00:19:22.334 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 101173 ']' 00:19:22.334 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:22.334 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:22.334 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:22.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:22.334 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:22.334 09:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:22.334 [2024-12-11 09:57:31.898823] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:19:22.334 [2024-12-11 09:57:31.898873] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101173 ] 00:19:22.592 [2024-12-11 09:57:31.977344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.592 [2024-12-11 09:57:32.016422] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:22.592 09:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:22.592 09:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:22.592 09:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UcyKdYLWBg 00:19:22.849 09:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:23.107 [2024-12-11 09:57:32.464083] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:23.107 TLSTESTn1 00:19:23.107 09:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:23.366 09:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:19:23.366 "subsystems": [ 00:19:23.366 { 00:19:23.366 "subsystem": "keyring", 00:19:23.366 "config": [ 00:19:23.366 { 00:19:23.366 "method": "keyring_file_add_key", 00:19:23.366 "params": { 00:19:23.366 "name": "key0", 00:19:23.366 "path": "/tmp/tmp.UcyKdYLWBg" 00:19:23.366 } 00:19:23.366 } 00:19:23.366 ] 00:19:23.366 }, 00:19:23.366 { 00:19:23.366 "subsystem": "iobuf", 00:19:23.366 "config": [ 00:19:23.366 { 00:19:23.366 "method": "iobuf_set_options", 00:19:23.366 "params": { 00:19:23.366 "small_pool_count": 8192, 00:19:23.366 "large_pool_count": 1024, 00:19:23.366 "small_bufsize": 8192, 00:19:23.366 "large_bufsize": 135168, 00:19:23.366 "enable_numa": false 00:19:23.366 } 00:19:23.366 } 00:19:23.366 ] 00:19:23.366 }, 00:19:23.366 { 00:19:23.366 "subsystem": "sock", 00:19:23.366 "config": [ 00:19:23.366 { 00:19:23.366 "method": "sock_set_default_impl", 00:19:23.366 "params": { 00:19:23.366 "impl_name": "posix" 
00:19:23.366 } 00:19:23.366 }, 00:19:23.366 { 00:19:23.366 "method": "sock_impl_set_options", 00:19:23.366 "params": { 00:19:23.366 "impl_name": "ssl", 00:19:23.366 "recv_buf_size": 4096, 00:19:23.366 "send_buf_size": 4096, 00:19:23.366 "enable_recv_pipe": true, 00:19:23.366 "enable_quickack": false, 00:19:23.366 "enable_placement_id": 0, 00:19:23.366 "enable_zerocopy_send_server": true, 00:19:23.366 "enable_zerocopy_send_client": false, 00:19:23.366 "zerocopy_threshold": 0, 00:19:23.366 "tls_version": 0, 00:19:23.366 "enable_ktls": false 00:19:23.366 } 00:19:23.366 }, 00:19:23.366 { 00:19:23.366 "method": "sock_impl_set_options", 00:19:23.366 "params": { 00:19:23.366 "impl_name": "posix", 00:19:23.366 "recv_buf_size": 2097152, 00:19:23.366 "send_buf_size": 2097152, 00:19:23.366 "enable_recv_pipe": true, 00:19:23.366 "enable_quickack": false, 00:19:23.366 "enable_placement_id": 0, 00:19:23.366 "enable_zerocopy_send_server": true, 00:19:23.366 "enable_zerocopy_send_client": false, 00:19:23.366 "zerocopy_threshold": 0, 00:19:23.366 "tls_version": 0, 00:19:23.366 "enable_ktls": false 00:19:23.366 } 00:19:23.366 } 00:19:23.366 ] 00:19:23.366 }, 00:19:23.366 { 00:19:23.366 "subsystem": "vmd", 00:19:23.366 "config": [] 00:19:23.366 }, 00:19:23.366 { 00:19:23.366 "subsystem": "accel", 00:19:23.366 "config": [ 00:19:23.366 { 00:19:23.366 "method": "accel_set_options", 00:19:23.366 "params": { 00:19:23.366 "small_cache_size": 128, 00:19:23.366 "large_cache_size": 16, 00:19:23.366 "task_count": 2048, 00:19:23.366 "sequence_count": 2048, 00:19:23.366 "buf_count": 2048 00:19:23.366 } 00:19:23.366 } 00:19:23.366 ] 00:19:23.366 }, 00:19:23.366 { 00:19:23.366 "subsystem": "bdev", 00:19:23.366 "config": [ 00:19:23.366 { 00:19:23.366 "method": "bdev_set_options", 00:19:23.366 "params": { 00:19:23.366 "bdev_io_pool_size": 65535, 00:19:23.366 "bdev_io_cache_size": 256, 00:19:23.366 "bdev_auto_examine": true, 00:19:23.366 "iobuf_small_cache_size": 128, 00:19:23.366 "iobuf_large_cache_size": 16 00:19:23.366 } 00:19:23.366 }, 00:19:23.366 { 00:19:23.366 "method": "bdev_raid_set_options", 00:19:23.366 "params": { 00:19:23.366 "process_window_size_kb": 1024, 00:19:23.366 "process_max_bandwidth_mb_sec": 0 00:19:23.366 } 00:19:23.366 }, 00:19:23.366 { 00:19:23.366 "method": "bdev_iscsi_set_options", 00:19:23.366 "params": { 00:19:23.366 "timeout_sec": 30 00:19:23.366 } 00:19:23.366 }, 00:19:23.366 { 00:19:23.366 "method": "bdev_nvme_set_options", 00:19:23.366 "params": { 00:19:23.366 "action_on_timeout": "none", 00:19:23.366 "timeout_us": 0, 00:19:23.366 "timeout_admin_us": 0, 00:19:23.366 "keep_alive_timeout_ms": 10000, 00:19:23.366 "arbitration_burst": 0, 00:19:23.366 "low_priority_weight": 0, 00:19:23.366 "medium_priority_weight": 0, 00:19:23.366 "high_priority_weight": 0, 00:19:23.367 "nvme_adminq_poll_period_us": 10000, 00:19:23.367 "nvme_ioq_poll_period_us": 0, 00:19:23.367 "io_queue_requests": 0, 00:19:23.367 "delay_cmd_submit": true, 00:19:23.367 "transport_retry_count": 4, 00:19:23.367 "bdev_retry_count": 3, 00:19:23.367 "transport_ack_timeout": 0, 00:19:23.367 "ctrlr_loss_timeout_sec": 0, 00:19:23.367 "reconnect_delay_sec": 0, 00:19:23.367 "fast_io_fail_timeout_sec": 0, 00:19:23.367 "disable_auto_failback": false, 00:19:23.367 "generate_uuids": false, 00:19:23.367 "transport_tos": 0, 00:19:23.367 "nvme_error_stat": false, 00:19:23.367 "rdma_srq_size": 0, 00:19:23.367 "io_path_stat": false, 00:19:23.367 "allow_accel_sequence": false, 00:19:23.367 "rdma_max_cq_size": 0, 00:19:23.367 
"rdma_cm_event_timeout_ms": 0, 00:19:23.367 "dhchap_digests": [ 00:19:23.367 "sha256", 00:19:23.367 "sha384", 00:19:23.367 "sha512" 00:19:23.367 ], 00:19:23.367 "dhchap_dhgroups": [ 00:19:23.367 "null", 00:19:23.367 "ffdhe2048", 00:19:23.367 "ffdhe3072", 00:19:23.367 "ffdhe4096", 00:19:23.367 "ffdhe6144", 00:19:23.367 "ffdhe8192" 00:19:23.367 ], 00:19:23.367 "rdma_umr_per_io": false 00:19:23.367 } 00:19:23.367 }, 00:19:23.367 { 00:19:23.367 "method": "bdev_nvme_set_hotplug", 00:19:23.367 "params": { 00:19:23.367 "period_us": 100000, 00:19:23.367 "enable": false 00:19:23.367 } 00:19:23.367 }, 00:19:23.367 { 00:19:23.367 "method": "bdev_malloc_create", 00:19:23.367 "params": { 00:19:23.367 "name": "malloc0", 00:19:23.367 "num_blocks": 8192, 00:19:23.367 "block_size": 4096, 00:19:23.367 "physical_block_size": 4096, 00:19:23.367 "uuid": "9b05a54d-78a3-4ec8-9d2c-0d194ca4f4a0", 00:19:23.367 "optimal_io_boundary": 0, 00:19:23.367 "md_size": 0, 00:19:23.367 "dif_type": 0, 00:19:23.367 "dif_is_head_of_md": false, 00:19:23.367 "dif_pi_format": 0 00:19:23.367 } 00:19:23.367 }, 00:19:23.367 { 00:19:23.367 "method": "bdev_wait_for_examine" 00:19:23.367 } 00:19:23.367 ] 00:19:23.367 }, 00:19:23.367 { 00:19:23.367 "subsystem": "nbd", 00:19:23.367 "config": [] 00:19:23.367 }, 00:19:23.367 { 00:19:23.367 "subsystem": "scheduler", 00:19:23.367 "config": [ 00:19:23.367 { 00:19:23.367 "method": "framework_set_scheduler", 00:19:23.367 "params": { 00:19:23.367 "name": "static" 00:19:23.367 } 00:19:23.367 } 00:19:23.367 ] 00:19:23.367 }, 00:19:23.367 { 00:19:23.367 "subsystem": "nvmf", 00:19:23.367 "config": [ 00:19:23.367 { 00:19:23.367 "method": "nvmf_set_config", 00:19:23.367 "params": { 00:19:23.367 "discovery_filter": "match_any", 00:19:23.367 "admin_cmd_passthru": { 00:19:23.367 "identify_ctrlr": false 00:19:23.367 }, 00:19:23.367 "dhchap_digests": [ 00:19:23.367 "sha256", 00:19:23.367 "sha384", 00:19:23.367 "sha512" 00:19:23.367 ], 00:19:23.367 "dhchap_dhgroups": [ 00:19:23.367 "null", 00:19:23.367 "ffdhe2048", 00:19:23.367 "ffdhe3072", 00:19:23.367 "ffdhe4096", 00:19:23.367 "ffdhe6144", 00:19:23.367 "ffdhe8192" 00:19:23.367 ] 00:19:23.367 } 00:19:23.367 }, 00:19:23.367 { 00:19:23.367 "method": "nvmf_set_max_subsystems", 00:19:23.367 "params": { 00:19:23.367 "max_subsystems": 1024 00:19:23.367 } 00:19:23.367 }, 00:19:23.367 { 00:19:23.367 "method": "nvmf_set_crdt", 00:19:23.367 "params": { 00:19:23.367 "crdt1": 0, 00:19:23.367 "crdt2": 0, 00:19:23.367 "crdt3": 0 00:19:23.367 } 00:19:23.367 }, 00:19:23.367 { 00:19:23.367 "method": "nvmf_create_transport", 00:19:23.367 "params": { 00:19:23.367 "trtype": "TCP", 00:19:23.367 "max_queue_depth": 128, 00:19:23.367 "max_io_qpairs_per_ctrlr": 127, 00:19:23.367 "in_capsule_data_size": 4096, 00:19:23.367 "max_io_size": 131072, 00:19:23.367 "io_unit_size": 131072, 00:19:23.367 "max_aq_depth": 128, 00:19:23.367 "num_shared_buffers": 511, 00:19:23.367 "buf_cache_size": 4294967295, 00:19:23.367 "dif_insert_or_strip": false, 00:19:23.367 "zcopy": false, 00:19:23.367 "c2h_success": false, 00:19:23.367 "sock_priority": 0, 00:19:23.367 "abort_timeout_sec": 1, 00:19:23.367 "ack_timeout": 0, 00:19:23.367 "data_wr_pool_size": 0 00:19:23.367 } 00:19:23.367 }, 00:19:23.367 { 00:19:23.367 "method": "nvmf_create_subsystem", 00:19:23.367 "params": { 00:19:23.367 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:23.367 "allow_any_host": false, 00:19:23.367 "serial_number": "SPDK00000000000001", 00:19:23.367 "model_number": "SPDK bdev Controller", 00:19:23.367 "max_namespaces": 10, 
00:19:23.367 "min_cntlid": 1, 00:19:23.367 "max_cntlid": 65519, 00:19:23.367 "ana_reporting": false 00:19:23.367 } 00:19:23.367 }, 00:19:23.367 { 00:19:23.367 "method": "nvmf_subsystem_add_host", 00:19:23.367 "params": { 00:19:23.367 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:23.367 "host": "nqn.2016-06.io.spdk:host1", 00:19:23.367 "psk": "key0" 00:19:23.367 } 00:19:23.367 }, 00:19:23.367 { 00:19:23.367 "method": "nvmf_subsystem_add_ns", 00:19:23.367 "params": { 00:19:23.367 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:23.367 "namespace": { 00:19:23.367 "nsid": 1, 00:19:23.367 "bdev_name": "malloc0", 00:19:23.367 "nguid": "9B05A54D78A34EC89D2C0D194CA4F4A0", 00:19:23.367 "uuid": "9b05a54d-78a3-4ec8-9d2c-0d194ca4f4a0", 00:19:23.367 "no_auto_visible": false 00:19:23.367 } 00:19:23.367 } 00:19:23.367 }, 00:19:23.367 { 00:19:23.367 "method": "nvmf_subsystem_add_listener", 00:19:23.367 "params": { 00:19:23.367 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:23.367 "listen_address": { 00:19:23.367 "trtype": "TCP", 00:19:23.367 "adrfam": "IPv4", 00:19:23.367 "traddr": "10.0.0.2", 00:19:23.367 "trsvcid": "4420" 00:19:23.367 }, 00:19:23.367 "secure_channel": true 00:19:23.367 } 00:19:23.367 } 00:19:23.367 ] 00:19:23.367 } 00:19:23.367 ] 00:19:23.367 }' 00:19:23.367 09:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:23.626 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:19:23.626 "subsystems": [ 00:19:23.626 { 00:19:23.626 "subsystem": "keyring", 00:19:23.626 "config": [ 00:19:23.626 { 00:19:23.626 "method": "keyring_file_add_key", 00:19:23.626 "params": { 00:19:23.626 "name": "key0", 00:19:23.626 "path": "/tmp/tmp.UcyKdYLWBg" 00:19:23.626 } 00:19:23.626 } 00:19:23.626 ] 00:19:23.626 }, 00:19:23.626 { 00:19:23.626 "subsystem": "iobuf", 00:19:23.626 "config": [ 00:19:23.626 { 00:19:23.626 "method": "iobuf_set_options", 00:19:23.626 "params": { 00:19:23.626 "small_pool_count": 8192, 00:19:23.626 "large_pool_count": 1024, 00:19:23.626 "small_bufsize": 8192, 00:19:23.626 "large_bufsize": 135168, 00:19:23.626 "enable_numa": false 00:19:23.626 } 00:19:23.626 } 00:19:23.626 ] 00:19:23.626 }, 00:19:23.626 { 00:19:23.626 "subsystem": "sock", 00:19:23.626 "config": [ 00:19:23.626 { 00:19:23.626 "method": "sock_set_default_impl", 00:19:23.626 "params": { 00:19:23.626 "impl_name": "posix" 00:19:23.626 } 00:19:23.626 }, 00:19:23.626 { 00:19:23.626 "method": "sock_impl_set_options", 00:19:23.626 "params": { 00:19:23.626 "impl_name": "ssl", 00:19:23.626 "recv_buf_size": 4096, 00:19:23.626 "send_buf_size": 4096, 00:19:23.626 "enable_recv_pipe": true, 00:19:23.626 "enable_quickack": false, 00:19:23.626 "enable_placement_id": 0, 00:19:23.626 "enable_zerocopy_send_server": true, 00:19:23.626 "enable_zerocopy_send_client": false, 00:19:23.626 "zerocopy_threshold": 0, 00:19:23.626 "tls_version": 0, 00:19:23.626 "enable_ktls": false 00:19:23.626 } 00:19:23.626 }, 00:19:23.626 { 00:19:23.626 "method": "sock_impl_set_options", 00:19:23.626 "params": { 00:19:23.626 "impl_name": "posix", 00:19:23.626 "recv_buf_size": 2097152, 00:19:23.626 "send_buf_size": 2097152, 00:19:23.626 "enable_recv_pipe": true, 00:19:23.626 "enable_quickack": false, 00:19:23.626 "enable_placement_id": 0, 00:19:23.626 "enable_zerocopy_send_server": true, 00:19:23.626 "enable_zerocopy_send_client": false, 00:19:23.626 "zerocopy_threshold": 0, 00:19:23.626 "tls_version": 0, 00:19:23.626 
"enable_ktls": false 00:19:23.626 } 00:19:23.626 } 00:19:23.626 ] 00:19:23.626 }, 00:19:23.626 { 00:19:23.626 "subsystem": "vmd", 00:19:23.626 "config": [] 00:19:23.626 }, 00:19:23.626 { 00:19:23.626 "subsystem": "accel", 00:19:23.626 "config": [ 00:19:23.626 { 00:19:23.626 "method": "accel_set_options", 00:19:23.626 "params": { 00:19:23.626 "small_cache_size": 128, 00:19:23.626 "large_cache_size": 16, 00:19:23.626 "task_count": 2048, 00:19:23.626 "sequence_count": 2048, 00:19:23.626 "buf_count": 2048 00:19:23.626 } 00:19:23.626 } 00:19:23.626 ] 00:19:23.626 }, 00:19:23.626 { 00:19:23.626 "subsystem": "bdev", 00:19:23.626 "config": [ 00:19:23.626 { 00:19:23.626 "method": "bdev_set_options", 00:19:23.626 "params": { 00:19:23.626 "bdev_io_pool_size": 65535, 00:19:23.626 "bdev_io_cache_size": 256, 00:19:23.626 "bdev_auto_examine": true, 00:19:23.626 "iobuf_small_cache_size": 128, 00:19:23.626 "iobuf_large_cache_size": 16 00:19:23.626 } 00:19:23.626 }, 00:19:23.626 { 00:19:23.626 "method": "bdev_raid_set_options", 00:19:23.626 "params": { 00:19:23.626 "process_window_size_kb": 1024, 00:19:23.626 "process_max_bandwidth_mb_sec": 0 00:19:23.626 } 00:19:23.626 }, 00:19:23.626 { 00:19:23.626 "method": "bdev_iscsi_set_options", 00:19:23.626 "params": { 00:19:23.626 "timeout_sec": 30 00:19:23.626 } 00:19:23.626 }, 00:19:23.626 { 00:19:23.626 "method": "bdev_nvme_set_options", 00:19:23.626 "params": { 00:19:23.626 "action_on_timeout": "none", 00:19:23.626 "timeout_us": 0, 00:19:23.626 "timeout_admin_us": 0, 00:19:23.626 "keep_alive_timeout_ms": 10000, 00:19:23.626 "arbitration_burst": 0, 00:19:23.626 "low_priority_weight": 0, 00:19:23.626 "medium_priority_weight": 0, 00:19:23.626 "high_priority_weight": 0, 00:19:23.626 "nvme_adminq_poll_period_us": 10000, 00:19:23.626 "nvme_ioq_poll_period_us": 0, 00:19:23.626 "io_queue_requests": 512, 00:19:23.626 "delay_cmd_submit": true, 00:19:23.626 "transport_retry_count": 4, 00:19:23.626 "bdev_retry_count": 3, 00:19:23.626 "transport_ack_timeout": 0, 00:19:23.626 "ctrlr_loss_timeout_sec": 0, 00:19:23.627 "reconnect_delay_sec": 0, 00:19:23.627 "fast_io_fail_timeout_sec": 0, 00:19:23.627 "disable_auto_failback": false, 00:19:23.627 "generate_uuids": false, 00:19:23.627 "transport_tos": 0, 00:19:23.627 "nvme_error_stat": false, 00:19:23.627 "rdma_srq_size": 0, 00:19:23.627 "io_path_stat": false, 00:19:23.627 "allow_accel_sequence": false, 00:19:23.627 "rdma_max_cq_size": 0, 00:19:23.627 "rdma_cm_event_timeout_ms": 0, 00:19:23.627 "dhchap_digests": [ 00:19:23.627 "sha256", 00:19:23.627 "sha384", 00:19:23.627 "sha512" 00:19:23.627 ], 00:19:23.627 "dhchap_dhgroups": [ 00:19:23.627 "null", 00:19:23.627 "ffdhe2048", 00:19:23.627 "ffdhe3072", 00:19:23.627 "ffdhe4096", 00:19:23.627 "ffdhe6144", 00:19:23.627 "ffdhe8192" 00:19:23.627 ], 00:19:23.627 "rdma_umr_per_io": false 00:19:23.627 } 00:19:23.627 }, 00:19:23.627 { 00:19:23.627 "method": "bdev_nvme_attach_controller", 00:19:23.627 "params": { 00:19:23.627 "name": "TLSTEST", 00:19:23.627 "trtype": "TCP", 00:19:23.627 "adrfam": "IPv4", 00:19:23.627 "traddr": "10.0.0.2", 00:19:23.627 "trsvcid": "4420", 00:19:23.627 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:23.627 "prchk_reftag": false, 00:19:23.627 "prchk_guard": false, 00:19:23.627 "ctrlr_loss_timeout_sec": 0, 00:19:23.627 "reconnect_delay_sec": 0, 00:19:23.627 "fast_io_fail_timeout_sec": 0, 00:19:23.627 "psk": "key0", 00:19:23.627 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:23.627 "hdgst": false, 00:19:23.627 "ddgst": false, 00:19:23.627 "multipath": "multipath" 
00:19:23.627 } 00:19:23.627 }, 00:19:23.627 { 00:19:23.627 "method": "bdev_nvme_set_hotplug", 00:19:23.627 "params": { 00:19:23.627 "period_us": 100000, 00:19:23.627 "enable": false 00:19:23.627 } 00:19:23.627 }, 00:19:23.627 { 00:19:23.627 "method": "bdev_wait_for_examine" 00:19:23.627 } 00:19:23.627 ] 00:19:23.627 }, 00:19:23.627 { 00:19:23.627 "subsystem": "nbd", 00:19:23.627 "config": [] 00:19:23.627 } 00:19:23.627 ] 00:19:23.627 }' 00:19:23.627 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 101173 00:19:23.627 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 101173 ']' 00:19:23.627 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 101173 00:19:23.627 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:23.627 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:23.627 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101173 00:19:23.627 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:23.627 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:23.627 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101173' 00:19:23.627 killing process with pid 101173 00:19:23.627 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 101173 00:19:23.627 Received shutdown signal, test time was about 10.000000 seconds 00:19:23.627 00:19:23.627 Latency(us) 00:19:23.627 [2024-12-11T08:57:33.202Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:23.627 [2024-12-11T08:57:33.202Z] =================================================================================================================== 00:19:23.627 [2024-12-11T08:57:33.202Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:23.627 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 101173 00:19:23.885 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 100750 00:19:23.885 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 100750 ']' 00:19:23.885 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 100750 00:19:23.885 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:23.886 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:23.886 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100750 00:19:23.886 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:23.886 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:23.886 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100750' 00:19:23.886 killing process with pid 100750 00:19:23.886 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 100750 00:19:23.886 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 100750 
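The two JSON documents above are the heart of tls.sh@198-199: save_config serializes the live configuration of both the target ($tgtconf) and bdevperf ($bdevperfconf), including the keyring entry for /tmp/tmp.UcyKdYLWBg and the TLS-secured listener and controller. The step at tls.sh@205 below then boots a fresh target directly from the captured JSON instead of replaying RPCs; the /dev/fd/62 argument visible there is consistent with the config being fed in through a process substitution. A condensed sketch, assuming nvmfappstart forwards its arguments to the nvmf_tgt command line:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
tgtconf=$($RPC save_config)                                 # target dump above
bdevperfconf=$($RPC -s /var/tmp/bdevperf.sock save_config)  # bdevperf dump above
# Relaunch from the saved JSON; <(...) shows up as /dev/fd/62 in the trace.
nvmfappstart -m 0x2 -c <(echo "$tgtconf")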
00:19:24.144 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:24.144 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:24.144 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:24.144 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:19:24.144 "subsystems": [ 00:19:24.144 { 00:19:24.144 "subsystem": "keyring", 00:19:24.144 "config": [ 00:19:24.144 { 00:19:24.144 "method": "keyring_file_add_key", 00:19:24.144 "params": { 00:19:24.144 "name": "key0", 00:19:24.144 "path": "/tmp/tmp.UcyKdYLWBg" 00:19:24.144 } 00:19:24.144 } 00:19:24.144 ] 00:19:24.144 }, 00:19:24.144 { 00:19:24.144 "subsystem": "iobuf", 00:19:24.144 "config": [ 00:19:24.144 { 00:19:24.144 "method": "iobuf_set_options", 00:19:24.144 "params": { 00:19:24.144 "small_pool_count": 8192, 00:19:24.144 "large_pool_count": 1024, 00:19:24.144 "small_bufsize": 8192, 00:19:24.144 "large_bufsize": 135168, 00:19:24.144 "enable_numa": false 00:19:24.144 } 00:19:24.144 } 00:19:24.144 ] 00:19:24.144 }, 00:19:24.144 { 00:19:24.144 "subsystem": "sock", 00:19:24.144 "config": [ 00:19:24.144 { 00:19:24.144 "method": "sock_set_default_impl", 00:19:24.144 "params": { 00:19:24.144 "impl_name": "posix" 00:19:24.144 } 00:19:24.144 }, 00:19:24.144 { 00:19:24.144 "method": "sock_impl_set_options", 00:19:24.144 "params": { 00:19:24.144 "impl_name": "ssl", 00:19:24.144 "recv_buf_size": 4096, 00:19:24.144 "send_buf_size": 4096, 00:19:24.144 "enable_recv_pipe": true, 00:19:24.144 "enable_quickack": false, 00:19:24.144 "enable_placement_id": 0, 00:19:24.144 "enable_zerocopy_send_server": true, 00:19:24.144 "enable_zerocopy_send_client": false, 00:19:24.144 "zerocopy_threshold": 0, 00:19:24.144 "tls_version": 0, 00:19:24.144 "enable_ktls": false 00:19:24.144 } 00:19:24.144 }, 00:19:24.144 { 00:19:24.144 "method": "sock_impl_set_options", 00:19:24.144 "params": { 00:19:24.144 "impl_name": "posix", 00:19:24.144 "recv_buf_size": 2097152, 00:19:24.144 "send_buf_size": 2097152, 00:19:24.144 "enable_recv_pipe": true, 00:19:24.144 "enable_quickack": false, 00:19:24.144 "enable_placement_id": 0, 00:19:24.144 "enable_zerocopy_send_server": true, 00:19:24.144 "enable_zerocopy_send_client": false, 00:19:24.145 "zerocopy_threshold": 0, 00:19:24.145 "tls_version": 0, 00:19:24.145 "enable_ktls": false 00:19:24.145 } 00:19:24.145 } 00:19:24.145 ] 00:19:24.145 }, 00:19:24.145 { 00:19:24.145 "subsystem": "vmd", 00:19:24.145 "config": [] 00:19:24.145 }, 00:19:24.145 { 00:19:24.145 "subsystem": "accel", 00:19:24.145 "config": [ 00:19:24.145 { 00:19:24.145 "method": "accel_set_options", 00:19:24.145 "params": { 00:19:24.145 "small_cache_size": 128, 00:19:24.145 "large_cache_size": 16, 00:19:24.145 "task_count": 2048, 00:19:24.145 "sequence_count": 2048, 00:19:24.145 "buf_count": 2048 00:19:24.145 } 00:19:24.145 } 00:19:24.145 ] 00:19:24.145 }, 00:19:24.145 { 00:19:24.145 "subsystem": "bdev", 00:19:24.145 "config": [ 00:19:24.145 { 00:19:24.145 "method": "bdev_set_options", 00:19:24.145 "params": { 00:19:24.145 "bdev_io_pool_size": 65535, 00:19:24.145 "bdev_io_cache_size": 256, 00:19:24.145 "bdev_auto_examine": true, 00:19:24.145 "iobuf_small_cache_size": 128, 00:19:24.145 "iobuf_large_cache_size": 16 00:19:24.145 } 00:19:24.145 }, 00:19:24.145 { 00:19:24.145 "method": "bdev_raid_set_options", 00:19:24.145 "params": { 00:19:24.145 "process_window_size_kb": 1024, 00:19:24.145 
"process_max_bandwidth_mb_sec": 0 00:19:24.145 } 00:19:24.145 }, 00:19:24.145 { 00:19:24.145 "method": "bdev_iscsi_set_options", 00:19:24.145 "params": { 00:19:24.145 "timeout_sec": 30 00:19:24.145 } 00:19:24.145 }, 00:19:24.145 { 00:19:24.145 "method": "bdev_nvme_set_options", 00:19:24.145 "params": { 00:19:24.145 "action_on_timeout": "none", 00:19:24.145 "timeout_us": 0, 00:19:24.145 "timeout_admin_us": 0, 00:19:24.145 "keep_alive_timeout_ms": 10000, 00:19:24.145 "arbitration_burst": 0, 00:19:24.145 "low_priority_weight": 0, 00:19:24.145 "medium_priority_weight": 0, 00:19:24.145 "high_priority_weight": 0, 00:19:24.145 "nvme_adminq_poll_period_us": 10000, 00:19:24.145 "nvme_ioq_poll_period_us": 0, 00:19:24.145 "io_queue_requests": 0, 00:19:24.145 "delay_cmd_submit": true, 00:19:24.145 "transport_retry_count": 4, 00:19:24.145 "bdev_retry_count": 3, 00:19:24.145 "transport_ack_timeout": 0, 00:19:24.145 "ctrlr_loss_timeout_sec": 0, 00:19:24.145 "reconnect_delay_sec": 0, 00:19:24.145 "fast_io_fail_timeout_sec": 0, 00:19:24.145 "disable_auto_failback": false, 00:19:24.145 "generate_uuids": false, 00:19:24.145 "transport_tos": 0, 00:19:24.145 "nvme_error_stat": false, 00:19:24.145 "rdma_srq_size": 0, 00:19:24.145 "io_path_stat": false, 00:19:24.145 "allow_accel_sequence": false, 00:19:24.145 "rdma_max_cq_size": 0, 00:19:24.145 "rdma_cm_event_timeout_ms": 0, 00:19:24.145 "dhchap_digests": [ 00:19:24.145 "sha256", 00:19:24.145 "sha384", 00:19:24.145 "sha512" 00:19:24.145 ], 00:19:24.145 "dhchap_dhgroups": [ 00:19:24.145 "null", 00:19:24.145 "ffdhe2048", 00:19:24.145 "ffdhe3072", 00:19:24.145 "ffdhe4096", 00:19:24.145 "ffdhe6144", 00:19:24.145 "ffdhe8192" 00:19:24.145 ], 00:19:24.145 "rdma_umr_per_io": false 00:19:24.145 } 00:19:24.145 }, 00:19:24.145 { 00:19:24.145 "method": "bdev_nvme_set_hotplug", 00:19:24.145 "params": { 00:19:24.145 "period_us": 100000, 00:19:24.145 "enable": false 00:19:24.145 } 00:19:24.145 }, 00:19:24.145 { 00:19:24.145 "method": "bdev_malloc_create", 00:19:24.145 "params": { 00:19:24.145 "name": "malloc0", 00:19:24.145 "num_blocks": 8192, 00:19:24.145 "block_size": 4096, 00:19:24.145 "physical_block_size": 4096, 00:19:24.145 "uuid": "9b05a54d-78a3-4ec8-9d2c-0d194ca4f4a0", 00:19:24.145 "optimal_io_boundary": 0, 00:19:24.145 "md_size": 0, 00:19:24.145 "dif_type": 0, 00:19:24.145 "dif_is_head_of_md": false, 00:19:24.145 "dif_pi_format": 0 00:19:24.145 } 00:19:24.145 }, 00:19:24.145 { 00:19:24.145 "method": "bdev_wait_for_examine" 00:19:24.145 } 00:19:24.145 ] 00:19:24.145 }, 00:19:24.145 { 00:19:24.145 "subsystem": "nbd", 00:19:24.145 "config": [] 00:19:24.145 }, 00:19:24.145 { 00:19:24.145 "subsystem": "scheduler", 00:19:24.145 "config": [ 00:19:24.145 { 00:19:24.145 "method": "framework_set_scheduler", 00:19:24.145 "params": { 00:19:24.145 "name": "static" 00:19:24.145 } 00:19:24.145 } 00:19:24.145 ] 00:19:24.145 }, 00:19:24.145 { 00:19:24.145 "subsystem": "nvmf", 00:19:24.145 "config": [ 00:19:24.145 { 00:19:24.145 "method": "nvmf_set_config", 00:19:24.145 "params": { 00:19:24.145 "discovery_filter": "match_any", 00:19:24.145 "admin_cmd_passthru": { 00:19:24.145 "identify_ctrlr": false 00:19:24.145 }, 00:19:24.145 "dhchap_digests": [ 00:19:24.145 "sha256", 00:19:24.145 "sha384", 00:19:24.145 "sha512" 00:19:24.145 ], 00:19:24.145 "dhchap_dhgroups": [ 00:19:24.145 "null", 00:19:24.145 "ffdhe2048", 00:19:24.145 "ffdhe3072", 00:19:24.145 "ffdhe4096", 00:19:24.145 "ffdhe6144", 00:19:24.145 "ffdhe8192" 00:19:24.145 ] 00:19:24.145 } 00:19:24.145 }, 00:19:24.145 { 00:19:24.145 
"method": "nvmf_set_max_subsystems", 00:19:24.145 "params": { 00:19:24.145 "max_subsystems": 1024 00:19:24.145 } 00:19:24.145 }, 00:19:24.145 { 00:19:24.145 "method": "nvmf_set_crdt", 00:19:24.145 "params": { 00:19:24.145 "crdt1": 0, 00:19:24.145 "crdt2": 0, 00:19:24.145 "crdt3": 0 00:19:24.145 } 00:19:24.145 }, 00:19:24.145 { 00:19:24.145 "method": "nvmf_create_transport", 00:19:24.145 "params": { 00:19:24.145 "trtype": "TCP", 00:19:24.145 "max_queue_depth": 128, 00:19:24.145 "max_io_qpairs_per_ctrlr": 127, 00:19:24.145 "in_capsule_data_size": 4096, 00:19:24.145 "max_io_size": 131072, 00:19:24.145 "io_unit_size": 131072, 00:19:24.145 "max_aq_depth": 128, 00:19:24.145 "num_shared_buffers": 511, 00:19:24.145 "buf_cache_size": 4294967295, 00:19:24.145 "dif_insert_or_strip": false, 00:19:24.145 "zcopy": false, 00:19:24.145 "c2h_success": false, 00:19:24.145 "sock_priority": 0, 00:19:24.145 "abort_timeout_sec": 1, 00:19:24.145 "ack_timeout": 0, 00:19:24.145 "data_wr_pool_size": 0 00:19:24.145 } 00:19:24.145 }, 00:19:24.145 { 00:19:24.145 "method": "nvmf_create_subsystem", 00:19:24.145 "params": { 00:19:24.145 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:24.145 "allow_any_host": false, 00:19:24.145 "serial_number": "SPDK00000000000001", 00:19:24.145 "model_number": "SPDK bdev Controller", 00:19:24.145 "max_namespaces": 10, 00:19:24.145 "min_cntlid": 1, 00:19:24.145 "max_cntlid": 65519, 00:19:24.145 "ana_reporting": false 00:19:24.145 } 00:19:24.145 }, 00:19:24.145 { 00:19:24.145 "method": "nvmf_subsystem_add_host", 00:19:24.145 "params": { 00:19:24.145 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:24.145 "host": "nqn.2016-06.io.spdk:host1", 00:19:24.145 "psk": "key0" 00:19:24.145 } 00:19:24.145 }, 00:19:24.145 { 00:19:24.145 "method": "nvmf_subsystem_add_ns", 00:19:24.145 "params": { 00:19:24.145 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:24.145 "namespace": { 00:19:24.145 "nsid": 1, 00:19:24.145 "bdev_name": "malloc0", 00:19:24.145 "nguid": "9B05A54D78A34EC89D2C0D194CA4F4A0", 00:19:24.145 "uuid": "9b05a54d-78a3-4ec8-9d2c-0d194ca4f4a0", 00:19:24.145 "no_auto_visible": false 00:19:24.145 } 00:19:24.145 } 00:19:24.145 }, 00:19:24.145 { 00:19:24.145 "method": "nvmf_subsystem_add_listener", 00:19:24.145 "params": { 00:19:24.145 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:24.145 "listen_address": { 00:19:24.145 "trtype": "TCP", 00:19:24.145 "adrfam": "IPv4", 00:19:24.145 "traddr": "10.0.0.2", 00:19:24.145 "trsvcid": "4420" 00:19:24.145 }, 00:19:24.145 "secure_channel": true 00:19:24.145 } 00:19:24.145 } 00:19:24.145 ] 00:19:24.145 } 00:19:24.145 ] 00:19:24.145 }' 00:19:24.145 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.145 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=101422 00:19:24.146 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:24.146 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 101422 00:19:24.146 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 101422 ']' 00:19:24.146 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:24.146 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:24.146 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:24.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:24.146 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:24.146 09:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.146 [2024-12-11 09:57:33.586354] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:19:24.146 [2024-12-11 09:57:33.586402] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:24.146 [2024-12-11 09:57:33.664348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.146 [2024-12-11 09:57:33.703185] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:24.146 [2024-12-11 09:57:33.703224] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:24.146 [2024-12-11 09:57:33.703232] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:24.146 [2024-12-11 09:57:33.703238] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:24.146 [2024-12-11 09:57:33.703243] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:24.146 [2024-12-11 09:57:33.703821] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:24.404 [2024-12-11 09:57:33.916631] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:24.404 [2024-12-11 09:57:33.948660] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:24.404 [2024-12-11 09:57:33.948856] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:24.970 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:24.970 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:24.970 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:24.970 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:24.970 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.970 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:24.970 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=101542 00:19:24.970 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 101542 /var/tmp/bdevperf.sock 00:19:24.970 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 101542 ']' 00:19:24.970 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:24.970 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:24.970 09:57:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:24.970 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:19:24.970 "subsystems": [ 00:19:24.970 { 00:19:24.970 "subsystem": "keyring", 00:19:24.970 "config": [ 00:19:24.970 { 00:19:24.970 "method": "keyring_file_add_key", 00:19:24.970 "params": { 00:19:24.970 "name": "key0", 00:19:24.970 "path": "/tmp/tmp.UcyKdYLWBg" 00:19:24.970 } 00:19:24.970 } 00:19:24.970 ] 00:19:24.970 }, 00:19:24.970 { 00:19:24.970 "subsystem": "iobuf", 00:19:24.970 "config": [ 00:19:24.970 { 00:19:24.970 "method": "iobuf_set_options", 00:19:24.970 "params": { 00:19:24.970 "small_pool_count": 8192, 00:19:24.970 "large_pool_count": 1024, 00:19:24.970 "small_bufsize": 8192, 00:19:24.970 "large_bufsize": 135168, 00:19:24.970 "enable_numa": false 00:19:24.970 } 00:19:24.970 } 00:19:24.970 ] 00:19:24.970 }, 00:19:24.970 { 00:19:24.970 "subsystem": "sock", 00:19:24.970 "config": [ 00:19:24.970 { 00:19:24.970 "method": "sock_set_default_impl", 00:19:24.970 "params": { 00:19:24.970 "impl_name": "posix" 00:19:24.970 } 00:19:24.970 }, 00:19:24.970 { 00:19:24.970 "method": "sock_impl_set_options", 00:19:24.970 "params": { 00:19:24.970 "impl_name": "ssl", 00:19:24.970 "recv_buf_size": 4096, 00:19:24.970 "send_buf_size": 4096, 00:19:24.970 "enable_recv_pipe": true, 00:19:24.971 "enable_quickack": false, 00:19:24.971 "enable_placement_id": 0, 00:19:24.971 "enable_zerocopy_send_server": true, 00:19:24.971 "enable_zerocopy_send_client": false, 00:19:24.971 "zerocopy_threshold": 0, 00:19:24.971 "tls_version": 0, 00:19:24.971 "enable_ktls": false 00:19:24.971 } 00:19:24.971 }, 00:19:24.971 { 00:19:24.971 "method": "sock_impl_set_options", 00:19:24.971 "params": { 00:19:24.971 "impl_name": "posix", 00:19:24.971 "recv_buf_size": 2097152, 00:19:24.971 "send_buf_size": 2097152, 00:19:24.971 "enable_recv_pipe": true, 00:19:24.971 "enable_quickack": false, 00:19:24.971 "enable_placement_id": 0, 00:19:24.971 "enable_zerocopy_send_server": true, 00:19:24.971 "enable_zerocopy_send_client": false, 00:19:24.971 "zerocopy_threshold": 0, 00:19:24.971 "tls_version": 0, 00:19:24.971 "enable_ktls": false 00:19:24.971 } 00:19:24.971 } 00:19:24.971 ] 00:19:24.971 }, 00:19:24.971 { 00:19:24.971 "subsystem": "vmd", 00:19:24.971 "config": [] 00:19:24.971 }, 00:19:24.971 { 00:19:24.971 "subsystem": "accel", 00:19:24.971 "config": [ 00:19:24.971 { 00:19:24.971 "method": "accel_set_options", 00:19:24.971 "params": { 00:19:24.971 "small_cache_size": 128, 00:19:24.971 "large_cache_size": 16, 00:19:24.971 "task_count": 2048, 00:19:24.971 "sequence_count": 2048, 00:19:24.971 "buf_count": 2048 00:19:24.971 } 00:19:24.971 } 00:19:24.971 ] 00:19:24.971 }, 00:19:24.971 { 00:19:24.971 "subsystem": "bdev", 00:19:24.971 "config": [ 00:19:24.971 { 00:19:24.971 "method": "bdev_set_options", 00:19:24.971 "params": { 00:19:24.971 "bdev_io_pool_size": 65535, 00:19:24.971 "bdev_io_cache_size": 256, 00:19:24.971 "bdev_auto_examine": true, 00:19:24.971 "iobuf_small_cache_size": 128, 00:19:24.971 "iobuf_large_cache_size": 16 00:19:24.971 } 00:19:24.971 }, 00:19:24.971 { 00:19:24.971 "method": "bdev_raid_set_options", 00:19:24.971 "params": { 00:19:24.971 "process_window_size_kb": 1024, 00:19:24.971 "process_max_bandwidth_mb_sec": 0 00:19:24.971 } 00:19:24.971 }, 00:19:24.971 { 00:19:24.971 "method": "bdev_iscsi_set_options", 00:19:24.971 "params": { 00:19:24.971 "timeout_sec": 30 00:19:24.971 } 00:19:24.971 }, 00:19:24.971 { 00:19:24.971 
"method": "bdev_nvme_set_options", 00:19:24.971 "params": { 00:19:24.971 "action_on_timeout": "none", 00:19:24.971 "timeout_us": 0, 00:19:24.971 "timeout_admin_us": 0, 00:19:24.971 "keep_alive_timeout_ms": 10000, 00:19:24.971 "arbitration_burst": 0, 00:19:24.971 "low_priority_weight": 0, 00:19:24.971 "medium_priority_weight": 0, 00:19:24.971 "high_priority_weight": 0, 00:19:24.971 "nvme_adminq_poll_period_us": 10000, 00:19:24.971 "nvme_ioq_poll_period_us": 0, 00:19:24.971 "io_queue_requests": 512, 00:19:24.971 "delay_cmd_submit": true, 00:19:24.971 "transport_retry_count": 4, 00:19:24.971 "bdev_retry_count": 3, 00:19:24.971 "transport_ack_timeout": 0, 00:19:24.971 "ctrlr_loss_timeout_sec": 0, 00:19:24.971 "reconnect_delay_sec": 0, 00:19:24.971 "fast_io_fail_timeout_sec": 0, 00:19:24.971 "disable_auto_failback": false, 00:19:24.971 "generate_uuids": false, 00:19:24.971 "transport_tos": 0, 00:19:24.971 "nvme_error_stat": false, 00:19:24.971 "rdma_srq_size": 0, 00:19:24.971 "io_path_stat": false, 00:19:24.971 "allow_accel_sequence": false, 00:19:24.971 "rdma_max_cq_size": 0, 00:19:24.971 "rdma_cm_event_timeout_ms": 0, 00:19:24.971 "dhchap_digests": [ 00:19:24.971 "sha256", 00:19:24.971 "sha384", 00:19:24.971 "sha512" 00:19:24.971 ], 00:19:24.971 "dhchap_dhgroups": [ 00:19:24.971 "null", 00:19:24.971 "ffdhe2048", 00:19:24.971 "ffdhe3072", 00:19:24.971 "ffdhe4096", 00:19:24.971 "ffdhe6144", 00:19:24.971 "ffdhe8192" 00:19:24.971 ], 00:19:24.971 "rdma_umr_per_io": false 00:19:24.971 } 00:19:24.971 }, 00:19:24.971 { 00:19:24.971 "method": "bdev_nvme_attach_controller", 00:19:24.971 "params": { 00:19:24.971 "name": "TLSTEST", 00:19:24.971 "trtype": "TCP", 00:19:24.971 "adrfam": "IPv4", 00:19:24.971 "traddr": "10.0.0.2", 00:19:24.971 "trsvcid": "4420", 00:19:24.971 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:24.971 "prchk_reftag": false, 00:19:24.971 "prchk_guard": false, 00:19:24.971 "ctrlr_loss_timeout_sec": 0, 00:19:24.971 "reconnect_delay_sec": 0, 00:19:24.971 "fast_io_fail_timeout_sec": 0, 00:19:24.971 "psk": "key0", 00:19:24.971 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:24.971 "hdgst": false, 00:19:24.971 "ddgst": false, 00:19:24.971 "multipath": "multipath" 00:19:24.971 } 00:19:24.971 }, 00:19:24.971 { 00:19:24.971 "method": "bdev_nvme_set_hotplug", 00:19:24.971 "params": { 00:19:24.971 "period_us": 100000, 00:19:24.971 "enable": false 00:19:24.971 } 00:19:24.971 }, 00:19:24.971 { 00:19:24.971 "method": "bdev_wait_for_examine" 00:19:24.971 } 00:19:24.971 ] 00:19:24.971 }, 00:19:24.971 { 00:19:24.971 "subsystem": "nbd", 00:19:24.971 "config": [] 00:19:24.971 } 00:19:24.971 ] 00:19:24.971 }' 00:19:24.971 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:24.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:24.971 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:24.971 09:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.971 [2024-12-11 09:57:34.496695] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
00:19:24.971 [2024-12-11 09:57:34.496743] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101542 ] 00:19:25.311 [2024-12-11 09:57:34.574819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.311 [2024-12-11 09:57:34.615211] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:25.311 [2024-12-11 09:57:34.767100] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:25.982 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:25.982 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:25.982 09:57:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:25.982 Running I/O for 10 seconds... 00:19:28.288 5531.00 IOPS, 21.61 MiB/s [2024-12-11T08:57:38.797Z] 5609.00 IOPS, 21.91 MiB/s [2024-12-11T08:57:39.731Z] 5588.67 IOPS, 21.83 MiB/s [2024-12-11T08:57:40.664Z] 5536.25 IOPS, 21.63 MiB/s [2024-12-11T08:57:41.599Z] 5544.40 IOPS, 21.66 MiB/s [2024-12-11T08:57:42.532Z] 5551.83 IOPS, 21.69 MiB/s [2024-12-11T08:57:43.466Z] 5560.29 IOPS, 21.72 MiB/s [2024-12-11T08:57:44.841Z] 5506.88 IOPS, 21.51 MiB/s [2024-12-11T08:57:45.774Z] 5441.78 IOPS, 21.26 MiB/s [2024-12-11T08:57:45.774Z] 5400.00 IOPS, 21.09 MiB/s 00:19:36.199 Latency(us) 00:19:36.199 [2024-12-11T08:57:45.774Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:36.199 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:36.199 Verification LBA range: start 0x0 length 0x2000 00:19:36.199 TLSTESTn1 : 10.02 5403.72 21.11 0.00 0.00 23652.37 6959.30 29584.82 00:19:36.199 [2024-12-11T08:57:45.774Z] =================================================================================================================== 00:19:36.199 [2024-12-11T08:57:45.774Z] Total : 5403.72 21.11 0.00 0.00 23652.37 6959.30 29584.82 00:19:36.199 { 00:19:36.199 "results": [ 00:19:36.199 { 00:19:36.199 "job": "TLSTESTn1", 00:19:36.199 "core_mask": "0x4", 00:19:36.199 "workload": "verify", 00:19:36.199 "status": "finished", 00:19:36.199 "verify_range": { 00:19:36.199 "start": 0, 00:19:36.199 "length": 8192 00:19:36.199 }, 00:19:36.199 "queue_depth": 128, 00:19:36.199 "io_size": 4096, 00:19:36.199 "runtime": 10.016625, 00:19:36.199 "iops": 5403.716321615315, 00:19:36.199 "mibps": 21.108266881309824, 00:19:36.199 "io_failed": 0, 00:19:36.199 "io_timeout": 0, 00:19:36.199 "avg_latency_us": 23652.369615797765, 00:19:36.199 "min_latency_us": 6959.299047619048, 00:19:36.199 "max_latency_us": 29584.822857142855 00:19:36.199 } 00:19:36.199 ], 00:19:36.199 "core_count": 1 00:19:36.199 } 00:19:36.199 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:36.199 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 101542 00:19:36.199 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 101542 ']' 00:19:36.199 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 101542 00:19:36.199 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:19:36.199 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:36.199 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101542 00:19:36.199 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:36.199 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:36.199 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101542' 00:19:36.199 killing process with pid 101542 00:19:36.199 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 101542 00:19:36.199 Received shutdown signal, test time was about 10.000000 seconds 00:19:36.199 00:19:36.199 Latency(us) 00:19:36.199 [2024-12-11T08:57:45.774Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:36.199 [2024-12-11T08:57:45.774Z] =================================================================================================================== 00:19:36.199 [2024-12-11T08:57:45.774Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:36.199 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 101542 00:19:36.199 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 101422 00:19:36.199 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 101422 ']' 00:19:36.199 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 101422 00:19:36.199 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:36.199 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:36.199 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101422 00:19:36.199 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:36.199 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:36.199 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101422' 00:19:36.199 killing process with pid 101422 00:19:36.199 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 101422 00:19:36.199 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 101422 00:19:36.458 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:19:36.458 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:36.458 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:36.458 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:36.458 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=103462 00:19:36.458 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:36.458 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 103462 00:19:36.458 09:57:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 103462 ']' 00:19:36.458 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:36.458 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:36.458 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:36.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:36.458 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:36.458 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:36.458 [2024-12-11 09:57:45.968570] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:19:36.458 [2024-12-11 09:57:45.968617] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:36.717 [2024-12-11 09:57:46.050144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.717 [2024-12-11 09:57:46.088705] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:36.717 [2024-12-11 09:57:46.088741] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:36.717 [2024-12-11 09:57:46.088748] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:36.717 [2024-12-11 09:57:46.088754] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:36.717 [2024-12-11 09:57:46.088758] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:36.717 [2024-12-11 09:57:46.089322] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:36.717 09:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:36.717 09:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:36.717 09:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:36.717 09:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:36.717 09:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:36.717 09:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:36.717 09:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.UcyKdYLWBg 00:19:36.717 09:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.UcyKdYLWBg 00:19:36.717 09:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:36.975 [2024-12-11 09:57:46.381015] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:36.975 09:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:37.233 09:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:37.233 [2024-12-11 09:57:46.761986] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:37.233 [2024-12-11 09:57:46.762174] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:37.233 09:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:37.492 malloc0 00:19:37.492 09:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:37.750 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.UcyKdYLWBg 00:19:38.009 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:38.267 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:38.267 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=103742 00:19:38.267 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:38.267 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 103742 /var/tmp/bdevperf.sock 00:19:38.267 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 103742 ']' 00:19:38.267 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:38.267 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:38.267 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:38.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:38.267 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:38.267 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:38.267 [2024-12-11 09:57:47.625165] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:19:38.267 [2024-12-11 09:57:47.625235] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103742 ] 00:19:38.267 [2024-12-11 09:57:47.706961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.267 [2024-12-11 09:57:47.746193] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:38.525 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:38.525 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:38.525 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UcyKdYLWBg 00:19:38.525 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:38.783 [2024-12-11 09:57:48.198455] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:38.783 nvme0n1 00:19:38.783 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:39.041 Running I/O for 1 seconds... 
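Each bdevperf instance in this test is launched with -z, which brings the app up idle instead of running its workload immediately; the keyring and attach-controller RPCs above are applied to that idle process, and I/O only starts once bdevperf.py issues perform_tests, which is what the 'Running I/O for 1 seconds...' line acknowledges. Reduced to a sketch, with the socket path, workload flags, and key file taken from this run:

    # Start bdevperf idle (-z): the workload parameters are preloaded,
    # but no I/O runs until perform_tests arrives over the RPC socket.
    ./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
    # Configure the idle instance: register the PSK, then do the
    # TLS-protected controller attach that produces bdev nvme0n1.
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UcyKdYLWBg
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    # Kick off the preloaded verify workload.
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests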
00:19:39.975 5497.00 IOPS, 21.47 MiB/s 00:19:39.975 Latency(us) 00:19:39.975 [2024-12-11T08:57:49.550Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:39.975 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:39.975 Verification LBA range: start 0x0 length 0x2000 00:19:39.975 nvme0n1 : 1.01 5553.71 21.69 0.00 0.00 22896.02 5398.92 23218.47 00:19:39.975 [2024-12-11T08:57:49.550Z] =================================================================================================================== 00:19:39.975 [2024-12-11T08:57:49.550Z] Total : 5553.71 21.69 0.00 0.00 22896.02 5398.92 23218.47 00:19:39.975 { 00:19:39.975 "results": [ 00:19:39.975 { 00:19:39.975 "job": "nvme0n1", 00:19:39.975 "core_mask": "0x2", 00:19:39.975 "workload": "verify", 00:19:39.975 "status": "finished", 00:19:39.975 "verify_range": { 00:19:39.975 "start": 0, 00:19:39.975 "length": 8192 00:19:39.975 }, 00:19:39.975 "queue_depth": 128, 00:19:39.975 "io_size": 4096, 00:19:39.975 "runtime": 1.012836, 00:19:39.975 "iops": 5553.712545762592, 00:19:39.975 "mibps": 21.694189631885124, 00:19:39.975 "io_failed": 0, 00:19:39.975 "io_timeout": 0, 00:19:39.975 "avg_latency_us": 22896.023129396828, 00:19:39.975 "min_latency_us": 5398.918095238095, 00:19:39.975 "max_latency_us": 23218.46857142857 00:19:39.975 } 00:19:39.975 ], 00:19:39.975 "core_count": 1 00:19:39.975 } 00:19:39.975 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 103742 00:19:39.975 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 103742 ']' 00:19:39.975 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 103742 00:19:39.975 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:39.975 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:39.975 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103742 00:19:39.975 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:39.975 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:39.975 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103742' 00:19:39.975 killing process with pid 103742 00:19:39.975 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 103742 00:19:39.975 Received shutdown signal, test time was about 1.000000 seconds 00:19:39.975 00:19:39.975 Latency(us) 00:19:39.975 [2024-12-11T08:57:49.550Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:39.975 [2024-12-11T08:57:49.550Z] =================================================================================================================== 00:19:39.975 [2024-12-11T08:57:49.550Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:39.975 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 103742 00:19:40.233 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 103462 00:19:40.233 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 103462 ']' 00:19:40.233 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 103462 00:19:40.233 09:57:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:40.233 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:40.233 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103462 00:19:40.233 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:40.233 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:40.233 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103462' 00:19:40.233 killing process with pid 103462 00:19:40.233 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 103462 00:19:40.233 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 103462 00:19:40.492 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:19:40.492 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:40.492 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:40.492 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:40.492 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=104010 00:19:40.492 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:40.492 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 104010 00:19:40.492 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 104010 ']' 00:19:40.492 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.492 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:40.492 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:40.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:40.492 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:40.492 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:40.492 [2024-12-11 09:57:49.903319] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:19:40.492 [2024-12-11 09:57:49.903367] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:40.492 [2024-12-11 09:57:49.988382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.492 [2024-12-11 09:57:50.032094] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:40.492 [2024-12-11 09:57:50.032131] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:40.492 [2024-12-11 09:57:50.032138] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:40.492 [2024-12-11 09:57:50.032145] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:40.492 [2024-12-11 09:57:50.032150] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:40.492 [2024-12-11 09:57:50.032581] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.751 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:40.751 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:40.751 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:40.751 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:40.751 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:40.751 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:40.751 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:19:40.751 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.751 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:40.751 [2024-12-11 09:57:50.182349] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:40.751 malloc0 00:19:40.751 [2024-12-11 09:57:50.210474] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:40.751 [2024-12-11 09:57:50.210678] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:40.751 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.751 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=104229 00:19:40.751 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:40.751 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 104229 /var/tmp/bdevperf.sock 00:19:40.751 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 104229 ']' 00:19:40.751 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:40.751 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:40.751 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:40.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:40.751 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:40.751 09:57:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:40.751 [2024-12-11 09:57:50.288139] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
00:19:40.751 [2024-12-11 09:57:50.288179] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104229 ] 00:19:41.009 [2024-12-11 09:57:50.367538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.009 [2024-12-11 09:57:50.406966] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:41.574 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:41.574 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:41.574 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UcyKdYLWBg 00:19:41.832 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:42.090 [2024-12-11 09:57:51.465067] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:42.090 nvme0n1 00:19:42.090 09:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:42.090 Running I/O for 1 seconds... 00:19:43.464 5302.00 IOPS, 20.71 MiB/s 00:19:43.464 Latency(us) 00:19:43.464 [2024-12-11T08:57:53.039Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:43.464 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:43.464 Verification LBA range: start 0x0 length 0x2000 00:19:43.464 nvme0n1 : 1.01 5361.26 20.94 0.00 0.00 23717.53 5336.50 33204.91 00:19:43.464 [2024-12-11T08:57:53.039Z] =================================================================================================================== 00:19:43.464 [2024-12-11T08:57:53.039Z] Total : 5361.26 20.94 0.00 0.00 23717.53 5336.50 33204.91 00:19:43.464 { 00:19:43.464 "results": [ 00:19:43.464 { 00:19:43.464 "job": "nvme0n1", 00:19:43.464 "core_mask": "0x2", 00:19:43.464 "workload": "verify", 00:19:43.464 "status": "finished", 00:19:43.464 "verify_range": { 00:19:43.464 "start": 0, 00:19:43.464 "length": 8192 00:19:43.464 }, 00:19:43.464 "queue_depth": 128, 00:19:43.464 "io_size": 4096, 00:19:43.464 "runtime": 1.013008, 00:19:43.464 "iops": 5361.260720547123, 00:19:43.464 "mibps": 20.9424246896372, 00:19:43.464 "io_failed": 0, 00:19:43.464 "io_timeout": 0, 00:19:43.464 "avg_latency_us": 23717.526150932477, 00:19:43.464 "min_latency_us": 5336.5028571428575, 00:19:43.464 "max_latency_us": 33204.90666666667 00:19:43.464 } 00:19:43.464 ], 00:19:43.464 "core_count": 1 00:19:43.464 } 00:19:43.464 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:19:43.464 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.464 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:43.464 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.464 09:57:52 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:19:43.464 "subsystems": [ 00:19:43.464 { 00:19:43.464 "subsystem": "keyring", 00:19:43.464 "config": [ 00:19:43.464 { 00:19:43.465 "method": "keyring_file_add_key", 00:19:43.465 "params": { 00:19:43.465 "name": "key0", 00:19:43.465 "path": "/tmp/tmp.UcyKdYLWBg" 00:19:43.465 } 00:19:43.465 } 00:19:43.465 ] 00:19:43.465 }, 00:19:43.465 { 00:19:43.465 "subsystem": "iobuf", 00:19:43.465 "config": [ 00:19:43.465 { 00:19:43.465 "method": "iobuf_set_options", 00:19:43.465 "params": { 00:19:43.465 "small_pool_count": 8192, 00:19:43.465 "large_pool_count": 1024, 00:19:43.465 "small_bufsize": 8192, 00:19:43.465 "large_bufsize": 135168, 00:19:43.465 "enable_numa": false 00:19:43.465 } 00:19:43.465 } 00:19:43.465 ] 00:19:43.465 }, 00:19:43.465 { 00:19:43.465 "subsystem": "sock", 00:19:43.465 "config": [ 00:19:43.465 { 00:19:43.465 "method": "sock_set_default_impl", 00:19:43.465 "params": { 00:19:43.465 "impl_name": "posix" 00:19:43.465 } 00:19:43.465 }, 00:19:43.465 { 00:19:43.465 "method": "sock_impl_set_options", 00:19:43.465 "params": { 00:19:43.465 "impl_name": "ssl", 00:19:43.465 "recv_buf_size": 4096, 00:19:43.465 "send_buf_size": 4096, 00:19:43.465 "enable_recv_pipe": true, 00:19:43.465 "enable_quickack": false, 00:19:43.465 "enable_placement_id": 0, 00:19:43.465 "enable_zerocopy_send_server": true, 00:19:43.465 "enable_zerocopy_send_client": false, 00:19:43.465 "zerocopy_threshold": 0, 00:19:43.465 "tls_version": 0, 00:19:43.465 "enable_ktls": false 00:19:43.465 } 00:19:43.465 }, 00:19:43.465 { 00:19:43.465 "method": "sock_impl_set_options", 00:19:43.465 "params": { 00:19:43.465 "impl_name": "posix", 00:19:43.465 "recv_buf_size": 2097152, 00:19:43.465 "send_buf_size": 2097152, 00:19:43.465 "enable_recv_pipe": true, 00:19:43.465 "enable_quickack": false, 00:19:43.465 "enable_placement_id": 0, 00:19:43.465 "enable_zerocopy_send_server": true, 00:19:43.465 "enable_zerocopy_send_client": false, 00:19:43.465 "zerocopy_threshold": 0, 00:19:43.465 "tls_version": 0, 00:19:43.465 "enable_ktls": false 00:19:43.465 } 00:19:43.465 } 00:19:43.465 ] 00:19:43.465 }, 00:19:43.465 { 00:19:43.465 "subsystem": "vmd", 00:19:43.465 "config": [] 00:19:43.465 }, 00:19:43.465 { 00:19:43.465 "subsystem": "accel", 00:19:43.465 "config": [ 00:19:43.465 { 00:19:43.465 "method": "accel_set_options", 00:19:43.465 "params": { 00:19:43.465 "small_cache_size": 128, 00:19:43.465 "large_cache_size": 16, 00:19:43.465 "task_count": 2048, 00:19:43.465 "sequence_count": 2048, 00:19:43.465 "buf_count": 2048 00:19:43.465 } 00:19:43.465 } 00:19:43.465 ] 00:19:43.465 }, 00:19:43.465 { 00:19:43.465 "subsystem": "bdev", 00:19:43.465 "config": [ 00:19:43.465 { 00:19:43.465 "method": "bdev_set_options", 00:19:43.465 "params": { 00:19:43.465 "bdev_io_pool_size": 65535, 00:19:43.465 "bdev_io_cache_size": 256, 00:19:43.465 "bdev_auto_examine": true, 00:19:43.465 "iobuf_small_cache_size": 128, 00:19:43.465 "iobuf_large_cache_size": 16 00:19:43.465 } 00:19:43.465 }, 00:19:43.465 { 00:19:43.465 "method": "bdev_raid_set_options", 00:19:43.465 "params": { 00:19:43.465 "process_window_size_kb": 1024, 00:19:43.465 "process_max_bandwidth_mb_sec": 0 00:19:43.465 } 00:19:43.465 }, 00:19:43.465 { 00:19:43.465 "method": "bdev_iscsi_set_options", 00:19:43.465 "params": { 00:19:43.465 "timeout_sec": 30 00:19:43.465 } 00:19:43.465 }, 00:19:43.465 { 00:19:43.465 "method": "bdev_nvme_set_options", 00:19:43.465 "params": { 00:19:43.465 "action_on_timeout": "none", 00:19:43.465 
"timeout_us": 0, 00:19:43.465 "timeout_admin_us": 0, 00:19:43.465 "keep_alive_timeout_ms": 10000, 00:19:43.465 "arbitration_burst": 0, 00:19:43.465 "low_priority_weight": 0, 00:19:43.465 "medium_priority_weight": 0, 00:19:43.465 "high_priority_weight": 0, 00:19:43.465 "nvme_adminq_poll_period_us": 10000, 00:19:43.465 "nvme_ioq_poll_period_us": 0, 00:19:43.465 "io_queue_requests": 0, 00:19:43.465 "delay_cmd_submit": true, 00:19:43.465 "transport_retry_count": 4, 00:19:43.465 "bdev_retry_count": 3, 00:19:43.465 "transport_ack_timeout": 0, 00:19:43.465 "ctrlr_loss_timeout_sec": 0, 00:19:43.465 "reconnect_delay_sec": 0, 00:19:43.465 "fast_io_fail_timeout_sec": 0, 00:19:43.465 "disable_auto_failback": false, 00:19:43.465 "generate_uuids": false, 00:19:43.465 "transport_tos": 0, 00:19:43.465 "nvme_error_stat": false, 00:19:43.465 "rdma_srq_size": 0, 00:19:43.465 "io_path_stat": false, 00:19:43.465 "allow_accel_sequence": false, 00:19:43.465 "rdma_max_cq_size": 0, 00:19:43.465 "rdma_cm_event_timeout_ms": 0, 00:19:43.465 "dhchap_digests": [ 00:19:43.465 "sha256", 00:19:43.465 "sha384", 00:19:43.465 "sha512" 00:19:43.465 ], 00:19:43.465 "dhchap_dhgroups": [ 00:19:43.465 "null", 00:19:43.465 "ffdhe2048", 00:19:43.465 "ffdhe3072", 00:19:43.465 "ffdhe4096", 00:19:43.465 "ffdhe6144", 00:19:43.465 "ffdhe8192" 00:19:43.465 ], 00:19:43.465 "rdma_umr_per_io": false 00:19:43.465 } 00:19:43.465 }, 00:19:43.465 { 00:19:43.465 "method": "bdev_nvme_set_hotplug", 00:19:43.465 "params": { 00:19:43.465 "period_us": 100000, 00:19:43.465 "enable": false 00:19:43.465 } 00:19:43.465 }, 00:19:43.465 { 00:19:43.465 "method": "bdev_malloc_create", 00:19:43.465 "params": { 00:19:43.465 "name": "malloc0", 00:19:43.465 "num_blocks": 8192, 00:19:43.465 "block_size": 4096, 00:19:43.465 "physical_block_size": 4096, 00:19:43.465 "uuid": "d161351e-a82a-43e0-a4c0-c05ead807e81", 00:19:43.465 "optimal_io_boundary": 0, 00:19:43.465 "md_size": 0, 00:19:43.465 "dif_type": 0, 00:19:43.465 "dif_is_head_of_md": false, 00:19:43.465 "dif_pi_format": 0 00:19:43.465 } 00:19:43.465 }, 00:19:43.465 { 00:19:43.465 "method": "bdev_wait_for_examine" 00:19:43.465 } 00:19:43.465 ] 00:19:43.465 }, 00:19:43.465 { 00:19:43.465 "subsystem": "nbd", 00:19:43.465 "config": [] 00:19:43.465 }, 00:19:43.465 { 00:19:43.465 "subsystem": "scheduler", 00:19:43.465 "config": [ 00:19:43.465 { 00:19:43.465 "method": "framework_set_scheduler", 00:19:43.465 "params": { 00:19:43.465 "name": "static" 00:19:43.465 } 00:19:43.465 } 00:19:43.465 ] 00:19:43.465 }, 00:19:43.465 { 00:19:43.465 "subsystem": "nvmf", 00:19:43.465 "config": [ 00:19:43.465 { 00:19:43.465 "method": "nvmf_set_config", 00:19:43.465 "params": { 00:19:43.465 "discovery_filter": "match_any", 00:19:43.465 "admin_cmd_passthru": { 00:19:43.465 "identify_ctrlr": false 00:19:43.465 }, 00:19:43.465 "dhchap_digests": [ 00:19:43.465 "sha256", 00:19:43.465 "sha384", 00:19:43.465 "sha512" 00:19:43.465 ], 00:19:43.465 "dhchap_dhgroups": [ 00:19:43.465 "null", 00:19:43.465 "ffdhe2048", 00:19:43.465 "ffdhe3072", 00:19:43.465 "ffdhe4096", 00:19:43.465 "ffdhe6144", 00:19:43.465 "ffdhe8192" 00:19:43.465 ] 00:19:43.465 } 00:19:43.465 }, 00:19:43.465 { 00:19:43.465 "method": "nvmf_set_max_subsystems", 00:19:43.465 "params": { 00:19:43.465 "max_subsystems": 1024 00:19:43.465 } 00:19:43.465 }, 00:19:43.465 { 00:19:43.465 "method": "nvmf_set_crdt", 00:19:43.465 "params": { 00:19:43.465 "crdt1": 0, 00:19:43.465 "crdt2": 0, 00:19:43.465 "crdt3": 0 00:19:43.465 } 00:19:43.465 }, 00:19:43.465 { 00:19:43.465 "method": 
"nvmf_create_transport", 00:19:43.465 "params": { 00:19:43.465 "trtype": "TCP", 00:19:43.465 "max_queue_depth": 128, 00:19:43.465 "max_io_qpairs_per_ctrlr": 127, 00:19:43.465 "in_capsule_data_size": 4096, 00:19:43.465 "max_io_size": 131072, 00:19:43.465 "io_unit_size": 131072, 00:19:43.465 "max_aq_depth": 128, 00:19:43.465 "num_shared_buffers": 511, 00:19:43.465 "buf_cache_size": 4294967295, 00:19:43.465 "dif_insert_or_strip": false, 00:19:43.465 "zcopy": false, 00:19:43.465 "c2h_success": false, 00:19:43.465 "sock_priority": 0, 00:19:43.465 "abort_timeout_sec": 1, 00:19:43.465 "ack_timeout": 0, 00:19:43.465 "data_wr_pool_size": 0 00:19:43.465 } 00:19:43.465 }, 00:19:43.465 { 00:19:43.465 "method": "nvmf_create_subsystem", 00:19:43.465 "params": { 00:19:43.465 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:43.465 "allow_any_host": false, 00:19:43.465 "serial_number": "00000000000000000000", 00:19:43.466 "model_number": "SPDK bdev Controller", 00:19:43.466 "max_namespaces": 32, 00:19:43.466 "min_cntlid": 1, 00:19:43.466 "max_cntlid": 65519, 00:19:43.466 "ana_reporting": false 00:19:43.466 } 00:19:43.466 }, 00:19:43.466 { 00:19:43.466 "method": "nvmf_subsystem_add_host", 00:19:43.466 "params": { 00:19:43.466 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:43.466 "host": "nqn.2016-06.io.spdk:host1", 00:19:43.466 "psk": "key0" 00:19:43.466 } 00:19:43.466 }, 00:19:43.466 { 00:19:43.466 "method": "nvmf_subsystem_add_ns", 00:19:43.466 "params": { 00:19:43.466 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:43.466 "namespace": { 00:19:43.466 "nsid": 1, 00:19:43.466 "bdev_name": "malloc0", 00:19:43.466 "nguid": "D161351EA82A43E0A4C0C05EAD807E81", 00:19:43.466 "uuid": "d161351e-a82a-43e0-a4c0-c05ead807e81", 00:19:43.466 "no_auto_visible": false 00:19:43.466 } 00:19:43.466 } 00:19:43.466 }, 00:19:43.466 { 00:19:43.466 "method": "nvmf_subsystem_add_listener", 00:19:43.466 "params": { 00:19:43.466 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:43.466 "listen_address": { 00:19:43.466 "trtype": "TCP", 00:19:43.466 "adrfam": "IPv4", 00:19:43.466 "traddr": "10.0.0.2", 00:19:43.466 "trsvcid": "4420" 00:19:43.466 }, 00:19:43.466 "secure_channel": false, 00:19:43.466 "sock_impl": "ssl" 00:19:43.466 } 00:19:43.466 } 00:19:43.466 ] 00:19:43.466 } 00:19:43.466 ] 00:19:43.466 }' 00:19:43.466 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:43.725 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:19:43.725 "subsystems": [ 00:19:43.725 { 00:19:43.725 "subsystem": "keyring", 00:19:43.725 "config": [ 00:19:43.725 { 00:19:43.725 "method": "keyring_file_add_key", 00:19:43.725 "params": { 00:19:43.725 "name": "key0", 00:19:43.725 "path": "/tmp/tmp.UcyKdYLWBg" 00:19:43.725 } 00:19:43.725 } 00:19:43.725 ] 00:19:43.725 }, 00:19:43.725 { 00:19:43.725 "subsystem": "iobuf", 00:19:43.725 "config": [ 00:19:43.725 { 00:19:43.725 "method": "iobuf_set_options", 00:19:43.725 "params": { 00:19:43.725 "small_pool_count": 8192, 00:19:43.725 "large_pool_count": 1024, 00:19:43.725 "small_bufsize": 8192, 00:19:43.725 "large_bufsize": 135168, 00:19:43.725 "enable_numa": false 00:19:43.725 } 00:19:43.725 } 00:19:43.725 ] 00:19:43.725 }, 00:19:43.725 { 00:19:43.725 "subsystem": "sock", 00:19:43.725 "config": [ 00:19:43.725 { 00:19:43.725 "method": "sock_set_default_impl", 00:19:43.725 "params": { 00:19:43.725 "impl_name": "posix" 00:19:43.725 } 00:19:43.725 }, 00:19:43.725 { 00:19:43.725 
"method": "sock_impl_set_options", 00:19:43.725 "params": { 00:19:43.725 "impl_name": "ssl", 00:19:43.725 "recv_buf_size": 4096, 00:19:43.725 "send_buf_size": 4096, 00:19:43.725 "enable_recv_pipe": true, 00:19:43.725 "enable_quickack": false, 00:19:43.725 "enable_placement_id": 0, 00:19:43.725 "enable_zerocopy_send_server": true, 00:19:43.725 "enable_zerocopy_send_client": false, 00:19:43.725 "zerocopy_threshold": 0, 00:19:43.725 "tls_version": 0, 00:19:43.725 "enable_ktls": false 00:19:43.725 } 00:19:43.725 }, 00:19:43.725 { 00:19:43.725 "method": "sock_impl_set_options", 00:19:43.725 "params": { 00:19:43.725 "impl_name": "posix", 00:19:43.725 "recv_buf_size": 2097152, 00:19:43.725 "send_buf_size": 2097152, 00:19:43.725 "enable_recv_pipe": true, 00:19:43.725 "enable_quickack": false, 00:19:43.725 "enable_placement_id": 0, 00:19:43.725 "enable_zerocopy_send_server": true, 00:19:43.725 "enable_zerocopy_send_client": false, 00:19:43.725 "zerocopy_threshold": 0, 00:19:43.725 "tls_version": 0, 00:19:43.725 "enable_ktls": false 00:19:43.725 } 00:19:43.725 } 00:19:43.725 ] 00:19:43.725 }, 00:19:43.725 { 00:19:43.725 "subsystem": "vmd", 00:19:43.725 "config": [] 00:19:43.725 }, 00:19:43.725 { 00:19:43.725 "subsystem": "accel", 00:19:43.725 "config": [ 00:19:43.725 { 00:19:43.725 "method": "accel_set_options", 00:19:43.725 "params": { 00:19:43.725 "small_cache_size": 128, 00:19:43.725 "large_cache_size": 16, 00:19:43.725 "task_count": 2048, 00:19:43.725 "sequence_count": 2048, 00:19:43.725 "buf_count": 2048 00:19:43.725 } 00:19:43.725 } 00:19:43.725 ] 00:19:43.725 }, 00:19:43.725 { 00:19:43.725 "subsystem": "bdev", 00:19:43.725 "config": [ 00:19:43.725 { 00:19:43.725 "method": "bdev_set_options", 00:19:43.725 "params": { 00:19:43.725 "bdev_io_pool_size": 65535, 00:19:43.725 "bdev_io_cache_size": 256, 00:19:43.725 "bdev_auto_examine": true, 00:19:43.725 "iobuf_small_cache_size": 128, 00:19:43.725 "iobuf_large_cache_size": 16 00:19:43.725 } 00:19:43.725 }, 00:19:43.725 { 00:19:43.725 "method": "bdev_raid_set_options", 00:19:43.725 "params": { 00:19:43.725 "process_window_size_kb": 1024, 00:19:43.725 "process_max_bandwidth_mb_sec": 0 00:19:43.725 } 00:19:43.725 }, 00:19:43.725 { 00:19:43.725 "method": "bdev_iscsi_set_options", 00:19:43.725 "params": { 00:19:43.725 "timeout_sec": 30 00:19:43.725 } 00:19:43.725 }, 00:19:43.725 { 00:19:43.725 "method": "bdev_nvme_set_options", 00:19:43.725 "params": { 00:19:43.725 "action_on_timeout": "none", 00:19:43.725 "timeout_us": 0, 00:19:43.725 "timeout_admin_us": 0, 00:19:43.725 "keep_alive_timeout_ms": 10000, 00:19:43.725 "arbitration_burst": 0, 00:19:43.725 "low_priority_weight": 0, 00:19:43.725 "medium_priority_weight": 0, 00:19:43.725 "high_priority_weight": 0, 00:19:43.725 "nvme_adminq_poll_period_us": 10000, 00:19:43.725 "nvme_ioq_poll_period_us": 0, 00:19:43.725 "io_queue_requests": 512, 00:19:43.725 "delay_cmd_submit": true, 00:19:43.725 "transport_retry_count": 4, 00:19:43.725 "bdev_retry_count": 3, 00:19:43.725 "transport_ack_timeout": 0, 00:19:43.725 "ctrlr_loss_timeout_sec": 0, 00:19:43.725 "reconnect_delay_sec": 0, 00:19:43.725 "fast_io_fail_timeout_sec": 0, 00:19:43.725 "disable_auto_failback": false, 00:19:43.725 "generate_uuids": false, 00:19:43.725 "transport_tos": 0, 00:19:43.725 "nvme_error_stat": false, 00:19:43.725 "rdma_srq_size": 0, 00:19:43.725 "io_path_stat": false, 00:19:43.725 "allow_accel_sequence": false, 00:19:43.725 "rdma_max_cq_size": 0, 00:19:43.725 "rdma_cm_event_timeout_ms": 0, 00:19:43.725 "dhchap_digests": [ 00:19:43.725 
"sha256", 00:19:43.725 "sha384", 00:19:43.725 "sha512" 00:19:43.725 ], 00:19:43.725 "dhchap_dhgroups": [ 00:19:43.725 "null", 00:19:43.725 "ffdhe2048", 00:19:43.725 "ffdhe3072", 00:19:43.725 "ffdhe4096", 00:19:43.725 "ffdhe6144", 00:19:43.725 "ffdhe8192" 00:19:43.725 ], 00:19:43.725 "rdma_umr_per_io": false 00:19:43.725 } 00:19:43.725 }, 00:19:43.725 { 00:19:43.725 "method": "bdev_nvme_attach_controller", 00:19:43.725 "params": { 00:19:43.725 "name": "nvme0", 00:19:43.725 "trtype": "TCP", 00:19:43.725 "adrfam": "IPv4", 00:19:43.725 "traddr": "10.0.0.2", 00:19:43.725 "trsvcid": "4420", 00:19:43.725 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:43.725 "prchk_reftag": false, 00:19:43.725 "prchk_guard": false, 00:19:43.725 "ctrlr_loss_timeout_sec": 0, 00:19:43.725 "reconnect_delay_sec": 0, 00:19:43.725 "fast_io_fail_timeout_sec": 0, 00:19:43.725 "psk": "key0", 00:19:43.725 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:43.725 "hdgst": false, 00:19:43.725 "ddgst": false, 00:19:43.725 "multipath": "multipath" 00:19:43.725 } 00:19:43.725 }, 00:19:43.725 { 00:19:43.725 "method": "bdev_nvme_set_hotplug", 00:19:43.725 "params": { 00:19:43.725 "period_us": 100000, 00:19:43.725 "enable": false 00:19:43.725 } 00:19:43.725 }, 00:19:43.726 { 00:19:43.726 "method": "bdev_enable_histogram", 00:19:43.726 "params": { 00:19:43.726 "name": "nvme0n1", 00:19:43.726 "enable": true 00:19:43.726 } 00:19:43.726 }, 00:19:43.726 { 00:19:43.726 "method": "bdev_wait_for_examine" 00:19:43.726 } 00:19:43.726 ] 00:19:43.726 }, 00:19:43.726 { 00:19:43.726 "subsystem": "nbd", 00:19:43.726 "config": [] 00:19:43.726 } 00:19:43.726 ] 00:19:43.726 }' 00:19:43.726 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 104229 00:19:43.726 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 104229 ']' 00:19:43.726 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 104229 00:19:43.726 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:43.726 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:43.726 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104229 00:19:43.726 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:43.726 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:43.726 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104229' 00:19:43.726 killing process with pid 104229 00:19:43.726 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 104229 00:19:43.726 Received shutdown signal, test time was about 1.000000 seconds 00:19:43.726 00:19:43.726 Latency(us) 00:19:43.726 [2024-12-11T08:57:53.301Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:43.726 [2024-12-11T08:57:53.301Z] =================================================================================================================== 00:19:43.726 [2024-12-11T08:57:53.301Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:43.726 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 104229 00:19:43.726 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 104010 00:19:43.726 09:57:53 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 104010 ']' 00:19:43.726 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 104010 00:19:43.726 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:43.726 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:43.726 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104010 00:19:43.985 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:43.985 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:43.985 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104010' 00:19:43.985 killing process with pid 104010 00:19:43.985 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 104010 00:19:43.985 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 104010 00:19:43.985 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:19:43.985 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:43.985 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:43.985 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:19:43.985 "subsystems": [ 00:19:43.985 { 00:19:43.985 "subsystem": "keyring", 00:19:43.985 "config": [ 00:19:43.985 { 00:19:43.985 "method": "keyring_file_add_key", 00:19:43.985 "params": { 00:19:43.985 "name": "key0", 00:19:43.985 "path": "/tmp/tmp.UcyKdYLWBg" 00:19:43.985 } 00:19:43.985 } 00:19:43.985 ] 00:19:43.985 }, 00:19:43.985 { 00:19:43.985 "subsystem": "iobuf", 00:19:43.985 "config": [ 00:19:43.985 { 00:19:43.985 "method": "iobuf_set_options", 00:19:43.985 "params": { 00:19:43.985 "small_pool_count": 8192, 00:19:43.985 "large_pool_count": 1024, 00:19:43.985 "small_bufsize": 8192, 00:19:43.985 "large_bufsize": 135168, 00:19:43.985 "enable_numa": false 00:19:43.985 } 00:19:43.985 } 00:19:43.985 ] 00:19:43.985 }, 00:19:43.985 { 00:19:43.986 "subsystem": "sock", 00:19:43.986 "config": [ 00:19:43.986 { 00:19:43.986 "method": "sock_set_default_impl", 00:19:43.986 "params": { 00:19:43.986 "impl_name": "posix" 00:19:43.986 } 00:19:43.986 }, 00:19:43.986 { 00:19:43.986 "method": "sock_impl_set_options", 00:19:43.986 "params": { 00:19:43.986 "impl_name": "ssl", 00:19:43.986 "recv_buf_size": 4096, 00:19:43.986 "send_buf_size": 4096, 00:19:43.986 "enable_recv_pipe": true, 00:19:43.986 "enable_quickack": false, 00:19:43.986 "enable_placement_id": 0, 00:19:43.986 "enable_zerocopy_send_server": true, 00:19:43.986 "enable_zerocopy_send_client": false, 00:19:43.986 "zerocopy_threshold": 0, 00:19:43.986 "tls_version": 0, 00:19:43.986 "enable_ktls": false 00:19:43.986 } 00:19:43.986 }, 00:19:43.986 { 00:19:43.986 "method": "sock_impl_set_options", 00:19:43.986 "params": { 00:19:43.986 "impl_name": "posix", 00:19:43.986 "recv_buf_size": 2097152, 00:19:43.986 "send_buf_size": 2097152, 00:19:43.986 "enable_recv_pipe": true, 00:19:43.986 "enable_quickack": false, 00:19:43.986 "enable_placement_id": 0, 00:19:43.986 "enable_zerocopy_send_server": true, 00:19:43.986 "enable_zerocopy_send_client": false, 00:19:43.986 
"zerocopy_threshold": 0, 00:19:43.986 "tls_version": 0, 00:19:43.986 "enable_ktls": false 00:19:43.986 } 00:19:43.986 } 00:19:43.986 ] 00:19:43.986 }, 00:19:43.986 { 00:19:43.986 "subsystem": "vmd", 00:19:43.986 "config": [] 00:19:43.986 }, 00:19:43.986 { 00:19:43.986 "subsystem": "accel", 00:19:43.986 "config": [ 00:19:43.986 { 00:19:43.986 "method": "accel_set_options", 00:19:43.986 "params": { 00:19:43.986 "small_cache_size": 128, 00:19:43.986 "large_cache_size": 16, 00:19:43.986 "task_count": 2048, 00:19:43.986 "sequence_count": 2048, 00:19:43.986 "buf_count": 2048 00:19:43.986 } 00:19:43.986 } 00:19:43.986 ] 00:19:43.986 }, 00:19:43.986 { 00:19:43.986 "subsystem": "bdev", 00:19:43.986 "config": [ 00:19:43.986 { 00:19:43.986 "method": "bdev_set_options", 00:19:43.986 "params": { 00:19:43.986 "bdev_io_pool_size": 65535, 00:19:43.986 "bdev_io_cache_size": 256, 00:19:43.986 "bdev_auto_examine": true, 00:19:43.986 "iobuf_small_cache_size": 128, 00:19:43.986 "iobuf_large_cache_size": 16 00:19:43.986 } 00:19:43.986 }, 00:19:43.986 { 00:19:43.986 "method": "bdev_raid_set_options", 00:19:43.986 "params": { 00:19:43.986 "process_window_size_kb": 1024, 00:19:43.986 "process_max_bandwidth_mb_sec": 0 00:19:43.986 } 00:19:43.986 }, 00:19:43.986 { 00:19:43.986 "method": "bdev_iscsi_set_options", 00:19:43.986 "params": { 00:19:43.986 "timeout_sec": 30 00:19:43.986 } 00:19:43.986 }, 00:19:43.986 { 00:19:43.986 "method": "bdev_nvme_set_options", 00:19:43.986 "params": { 00:19:43.986 "action_on_timeout": "none", 00:19:43.986 "timeout_us": 0, 00:19:43.986 "timeout_admin_us": 0, 00:19:43.986 "keep_alive_timeout_ms": 10000, 00:19:43.986 "arbitration_burst": 0, 00:19:43.986 "low_priority_weight": 0, 00:19:43.986 "medium_priority_weight": 0, 00:19:43.986 "high_priority_weight": 0, 00:19:43.986 "nvme_adminq_poll_period_us": 10000, 00:19:43.986 "nvme_ioq_poll_period_us": 0, 00:19:43.986 "io_queue_requests": 0, 00:19:43.986 "delay_cmd_submit": true, 00:19:43.986 "transport_retry_count": 4, 00:19:43.986 "bdev_retry_count": 3, 00:19:43.986 "transport_ack_timeout": 0, 00:19:43.986 "ctrlr_loss_timeout_sec": 0, 00:19:43.986 "reconnect_delay_sec": 0, 00:19:43.986 "fast_io_fail_timeout_sec": 0, 00:19:43.986 "disable_auto_failback": false, 00:19:43.986 "generate_uuids": false, 00:19:43.986 "transport_tos": 0, 00:19:43.986 "nvme_error_stat": false, 00:19:43.986 "rdma_srq_size": 0, 00:19:43.986 "io_path_stat": false, 00:19:43.986 "allow_accel_sequence": false, 00:19:43.986 "rdma_max_cq_size": 0, 00:19:43.986 "rdma_cm_event_timeout_ms": 0, 00:19:43.986 "dhchap_digests": [ 00:19:43.986 "sha256", 00:19:43.986 "sha384", 00:19:43.986 "sha512" 00:19:43.986 ], 00:19:43.986 "dhchap_dhgroups": [ 00:19:43.986 "null", 00:19:43.986 "ffdhe2048", 00:19:43.986 "ffdhe3072", 00:19:43.986 "ffdhe4096", 00:19:43.986 "ffdhe6144", 00:19:43.986 "ffdhe8192" 00:19:43.986 ], 00:19:43.986 "rdma_umr_per_io": false 00:19:43.986 } 00:19:43.986 }, 00:19:43.986 { 00:19:43.986 "method": "bdev_nvme_set_hotplug", 00:19:43.986 "params": { 00:19:43.986 "period_us": 100000, 00:19:43.986 "enable": false 00:19:43.986 } 00:19:43.986 }, 00:19:43.986 { 00:19:43.986 "method": "bdev_malloc_create", 00:19:43.986 "params": { 00:19:43.986 "name": "malloc0", 00:19:43.986 "num_blocks": 8192, 00:19:43.986 "block_size": 4096, 00:19:43.986 "physical_block_size": 4096, 00:19:43.986 "uuid": "d161351e-a82a-43e0-a4c0-c05ead807e81", 00:19:43.986 "optimal_io_boundary": 0, 00:19:43.986 "md_size": 0, 00:19:43.986 "dif_type": 0, 00:19:43.986 "dif_is_head_of_md": false, 00:19:43.986 
"dif_pi_format": 0 00:19:43.986 } 00:19:43.986 }, 00:19:43.986 { 00:19:43.986 "method": "bdev_wait_for_examine" 00:19:43.986 } 00:19:43.986 ] 00:19:43.986 }, 00:19:43.986 { 00:19:43.986 "subsystem": "nbd", 00:19:43.986 "config": [] 00:19:43.986 }, 00:19:43.986 { 00:19:43.986 "subsystem": "scheduler", 00:19:43.986 "config": [ 00:19:43.986 { 00:19:43.986 "method": "framework_set_scheduler", 00:19:43.986 "params": { 00:19:43.986 "name": "static" 00:19:43.986 } 00:19:43.986 } 00:19:43.986 ] 00:19:43.986 }, 00:19:43.986 { 00:19:43.986 "subsystem": "nvmf", 00:19:43.986 "config": [ 00:19:43.986 { 00:19:43.986 "method": "nvmf_set_config", 00:19:43.986 "params": { 00:19:43.986 "discovery_filter": "match_any", 00:19:43.986 "admin_cmd_passthru": { 00:19:43.986 "identify_ctrlr": false 00:19:43.986 }, 00:19:43.986 "dhchap_digests": [ 00:19:43.986 "sha256", 00:19:43.986 "sha384", 00:19:43.986 "sha512" 00:19:43.986 ], 00:19:43.986 "dhchap_dhgroups": [ 00:19:43.986 "null", 00:19:43.986 "ffdhe2048", 00:19:43.986 "ffdhe3072", 00:19:43.986 "ffdhe4096", 00:19:43.986 "ffdhe6144", 00:19:43.986 "ffdhe8192" 00:19:43.986 ] 00:19:43.986 } 00:19:43.986 }, 00:19:43.986 { 00:19:43.986 "method": "nvmf_set_max_subsystems", 00:19:43.986 "params": { 00:19:43.986 "max_subsystems": 1024 00:19:43.986 } 00:19:43.986 }, 00:19:43.986 { 00:19:43.986 "method": "nvmf_set_crdt", 00:19:43.986 "params": { 00:19:43.986 "crdt1": 0, 00:19:43.986 "crdt2": 0, 00:19:43.986 "crdt3": 0 00:19:43.986 } 00:19:43.986 }, 00:19:43.986 { 00:19:43.986 "method": "nvmf_create_transport", 00:19:43.986 "params": { 00:19:43.986 "trtype": "TCP", 00:19:43.986 "max_queue_depth": 128, 00:19:43.986 "max_io_qpairs_per_ctrlr": 127, 00:19:43.986 "in_capsule_data_size": 4096, 00:19:43.986 "max_io_size": 131072, 00:19:43.986 "io_unit_size": 131072, 00:19:43.986 "max_aq_depth": 128, 00:19:43.986 "num_shared_buffers": 511, 00:19:43.986 "buf_cache_size": 4294967295, 00:19:43.986 "dif_insert_or_strip": false, 00:19:43.986 "zcopy": false, 00:19:43.986 "c2h_success": false, 00:19:43.986 "sock_priority": 0, 00:19:43.986 "abort_timeout_sec": 1, 00:19:43.986 "ack_timeout": 0, 00:19:43.986 "data_wr_pool_size": 0 00:19:43.986 } 00:19:43.986 }, 00:19:43.986 { 00:19:43.986 "method": "nvmf_create_subsystem", 00:19:43.986 "params": { 00:19:43.986 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:43.986 "allow_any_host": false, 00:19:43.986 "serial_number": "00000000000000000000", 00:19:43.986 "model_number": "SPDK bdev Controller", 00:19:43.986 "max_namespaces": 32, 00:19:43.986 "min_cntlid": 1, 00:19:43.986 "max_cntlid": 65519, 00:19:43.986 "ana_reporting": false 00:19:43.986 } 00:19:43.986 }, 00:19:43.986 { 00:19:43.986 "method": "nvmf_subsystem_add_host", 00:19:43.986 "params": { 00:19:43.986 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:43.986 "host": "nqn.2016-06.io.spdk:host1", 00:19:43.986 "psk": "key0" 00:19:43.986 } 00:19:43.986 }, 00:19:43.986 { 00:19:43.986 "method": "nvmf_subsystem_add_ns", 00:19:43.986 "params": { 00:19:43.986 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:43.986 "namespace": { 00:19:43.986 "nsid": 1, 00:19:43.986 "bdev_name": "malloc0", 00:19:43.986 "nguid": "D161351EA82A43E0A4C0C05EAD807E81", 00:19:43.986 "uuid": "d161351e-a82a-43e0-a4c0-c05ead807e81", 00:19:43.986 "no_auto_visible": false 00:19:43.986 } 00:19:43.986 } 00:19:43.986 }, 00:19:43.986 { 00:19:43.986 "method": "nvmf_subsystem_add_listener", 00:19:43.986 "params": { 00:19:43.986 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:43.986 "listen_address": { 00:19:43.986 "trtype": "TCP", 00:19:43.986 "adrfam": 
"IPv4", 00:19:43.986 "traddr": "10.0.0.2", 00:19:43.986 "trsvcid": "4420" 00:19:43.986 }, 00:19:43.986 "secure_channel": false, 00:19:43.986 "sock_impl": "ssl" 00:19:43.986 } 00:19:43.986 } 00:19:43.986 ] 00:19:43.986 } 00:19:43.986 ] 00:19:43.986 }' 00:19:43.986 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:43.986 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=104705 00:19:43.986 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 104705 00:19:43.987 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:43.987 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 104705 ']' 00:19:43.987 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:43.987 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:43.987 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:43.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:43.987 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:43.987 09:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:43.987 [2024-12-11 09:57:53.544574] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:19:43.987 [2024-12-11 09:57:53.544620] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:44.246 [2024-12-11 09:57:53.628798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.246 [2024-12-11 09:57:53.667789] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:44.246 [2024-12-11 09:57:53.667823] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:44.246 [2024-12-11 09:57:53.667830] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:44.246 [2024-12-11 09:57:53.667836] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:44.246 [2024-12-11 09:57:53.667841] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:44.246 [2024-12-11 09:57:53.668432] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:44.504 [2024-12-11 09:57:53.881583] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:44.504 [2024-12-11 09:57:53.913619] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:44.504 [2024-12-11 09:57:53.913826] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:45.072 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:45.072 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:45.072 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:45.072 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:45.072 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:45.072 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:45.072 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=104947 00:19:45.072 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 104947 /var/tmp/bdevperf.sock 00:19:45.072 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 104947 ']' 00:19:45.072 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:45.072 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:45.072 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:45.072 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:45.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
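bdevperf is launched here with -z, which parks it waiting for RPCs on /var/tmp/bdevperf.sock until a client triggers the workload; the -c /dev/fd/63 configuration it is about to receive (echoed below) performs the per-run setup. A sketch of the launch-then-drive pattern, reusing the flags from this run:

    # -z: wait-for-RPC mode, so keys and the TLS controller can be configured
    # over the RPC socket before any I/O starts.
    ./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &
    # Once setup is done, start the verify workload (this is the
    # perform_tests call that appears further down in this log).
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests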
00:19:45.072 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:19:45.072 "subsystems": [ 00:19:45.072 { 00:19:45.072 "subsystem": "keyring", 00:19:45.072 "config": [ 00:19:45.072 { 00:19:45.072 "method": "keyring_file_add_key", 00:19:45.072 "params": { 00:19:45.072 "name": "key0", 00:19:45.072 "path": "/tmp/tmp.UcyKdYLWBg" 00:19:45.072 } 00:19:45.072 } 00:19:45.072 ] 00:19:45.072 }, 00:19:45.072 { 00:19:45.072 "subsystem": "iobuf", 00:19:45.072 "config": [ 00:19:45.072 { 00:19:45.072 "method": "iobuf_set_options", 00:19:45.072 "params": { 00:19:45.072 "small_pool_count": 8192, 00:19:45.072 "large_pool_count": 1024, 00:19:45.072 "small_bufsize": 8192, 00:19:45.072 "large_bufsize": 135168, 00:19:45.072 "enable_numa": false 00:19:45.072 } 00:19:45.072 } 00:19:45.072 ] 00:19:45.072 }, 00:19:45.072 { 00:19:45.072 "subsystem": "sock", 00:19:45.072 "config": [ 00:19:45.072 { 00:19:45.072 "method": "sock_set_default_impl", 00:19:45.072 "params": { 00:19:45.072 "impl_name": "posix" 00:19:45.072 } 00:19:45.072 }, 00:19:45.072 { 00:19:45.072 "method": "sock_impl_set_options", 00:19:45.072 "params": { 00:19:45.072 "impl_name": "ssl", 00:19:45.072 "recv_buf_size": 4096, 00:19:45.072 "send_buf_size": 4096, 00:19:45.072 "enable_recv_pipe": true, 00:19:45.072 "enable_quickack": false, 00:19:45.072 "enable_placement_id": 0, 00:19:45.072 "enable_zerocopy_send_server": true, 00:19:45.072 "enable_zerocopy_send_client": false, 00:19:45.072 "zerocopy_threshold": 0, 00:19:45.072 "tls_version": 0, 00:19:45.072 "enable_ktls": false 00:19:45.072 } 00:19:45.072 }, 00:19:45.072 { 00:19:45.072 "method": "sock_impl_set_options", 00:19:45.072 "params": { 00:19:45.072 "impl_name": "posix", 00:19:45.072 "recv_buf_size": 2097152, 00:19:45.072 "send_buf_size": 2097152, 00:19:45.072 "enable_recv_pipe": true, 00:19:45.072 "enable_quickack": false, 00:19:45.072 "enable_placement_id": 0, 00:19:45.072 "enable_zerocopy_send_server": true, 00:19:45.072 "enable_zerocopy_send_client": false, 00:19:45.072 "zerocopy_threshold": 0, 00:19:45.072 "tls_version": 0, 00:19:45.072 "enable_ktls": false 00:19:45.072 } 00:19:45.072 } 00:19:45.072 ] 00:19:45.072 }, 00:19:45.072 { 00:19:45.072 "subsystem": "vmd", 00:19:45.072 "config": [] 00:19:45.072 }, 00:19:45.072 { 00:19:45.072 "subsystem": "accel", 00:19:45.072 "config": [ 00:19:45.072 { 00:19:45.072 "method": "accel_set_options", 00:19:45.072 "params": { 00:19:45.072 "small_cache_size": 128, 00:19:45.072 "large_cache_size": 16, 00:19:45.072 "task_count": 2048, 00:19:45.072 "sequence_count": 2048, 00:19:45.072 "buf_count": 2048 00:19:45.072 } 00:19:45.072 } 00:19:45.072 ] 00:19:45.072 }, 00:19:45.072 { 00:19:45.072 "subsystem": "bdev", 00:19:45.072 "config": [ 00:19:45.072 { 00:19:45.072 "method": "bdev_set_options", 00:19:45.072 "params": { 00:19:45.072 "bdev_io_pool_size": 65535, 00:19:45.072 "bdev_io_cache_size": 256, 00:19:45.072 "bdev_auto_examine": true, 00:19:45.072 "iobuf_small_cache_size": 128, 00:19:45.072 "iobuf_large_cache_size": 16 00:19:45.072 } 00:19:45.072 }, 00:19:45.072 { 00:19:45.072 "method": "bdev_raid_set_options", 00:19:45.072 "params": { 00:19:45.072 "process_window_size_kb": 1024, 00:19:45.072 "process_max_bandwidth_mb_sec": 0 00:19:45.072 } 00:19:45.072 }, 00:19:45.072 { 00:19:45.072 "method": "bdev_iscsi_set_options", 00:19:45.072 "params": { 00:19:45.072 "timeout_sec": 30 00:19:45.072 } 00:19:45.072 }, 00:19:45.072 { 00:19:45.072 "method": "bdev_nvme_set_options", 00:19:45.072 "params": { 00:19:45.072 "action_on_timeout": "none", 
00:19:45.072 "timeout_us": 0, 00:19:45.072 "timeout_admin_us": 0, 00:19:45.073 "keep_alive_timeout_ms": 10000, 00:19:45.073 "arbitration_burst": 0, 00:19:45.073 "low_priority_weight": 0, 00:19:45.073 "medium_priority_weight": 0, 00:19:45.073 "high_priority_weight": 0, 00:19:45.073 "nvme_adminq_poll_period_us": 10000, 00:19:45.073 "nvme_ioq_poll_period_us": 0, 00:19:45.073 "io_queue_requests": 512, 00:19:45.073 "delay_cmd_submit": true, 00:19:45.073 "transport_retry_count": 4, 00:19:45.073 "bdev_retry_count": 3, 00:19:45.073 "transport_ack_timeout": 0, 00:19:45.073 "ctrlr_loss_timeout_sec": 0, 00:19:45.073 "reconnect_delay_sec": 0, 00:19:45.073 "fast_io_fail_timeout_sec": 0, 00:19:45.073 "disable_auto_failback": false, 00:19:45.073 "generate_uuids": false, 00:19:45.073 "transport_tos": 0, 00:19:45.073 "nvme_error_stat": false, 00:19:45.073 "rdma_srq_size": 0, 00:19:45.073 "io_path_stat": false, 00:19:45.073 "allow_accel_sequence": false, 00:19:45.073 "rdma_max_cq_size": 0, 00:19:45.073 "rdma_cm_event_timeout_ms": 0, 00:19:45.073 "dhchap_digests": [ 00:19:45.073 "sha256", 00:19:45.073 "sha384", 00:19:45.073 "sha512" 00:19:45.073 ], 00:19:45.073 "dhchap_dhgroups": [ 00:19:45.073 "null", 00:19:45.073 "ffdhe2048", 00:19:45.073 "ffdhe3072", 00:19:45.073 "ffdhe4096", 00:19:45.073 "ffdhe6144", 00:19:45.073 "ffdhe8192" 00:19:45.073 ], 00:19:45.073 "rdma_umr_per_io": false 00:19:45.073 } 00:19:45.073 }, 00:19:45.073 { 00:19:45.073 "method": "bdev_nvme_attach_controller", 00:19:45.073 "params": { 00:19:45.073 "name": "nvme0", 00:19:45.073 "trtype": "TCP", 00:19:45.073 "adrfam": "IPv4", 00:19:45.073 "traddr": "10.0.0.2", 00:19:45.073 "trsvcid": "4420", 00:19:45.073 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:45.073 "prchk_reftag": false, 00:19:45.073 "prchk_guard": false, 00:19:45.073 "ctrlr_loss_timeout_sec": 0, 00:19:45.073 "reconnect_delay_sec": 0, 00:19:45.073 "fast_io_fail_timeout_sec": 0, 00:19:45.073 "psk": "key0", 00:19:45.073 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:45.073 "hdgst": false, 00:19:45.073 "ddgst": false, 00:19:45.073 "multipath": "multipath" 00:19:45.073 } 00:19:45.073 }, 00:19:45.073 { 00:19:45.073 "method": "bdev_nvme_set_hotplug", 00:19:45.073 "params": { 00:19:45.073 "period_us": 100000, 00:19:45.073 "enable": false 00:19:45.073 } 00:19:45.073 }, 00:19:45.073 { 00:19:45.073 "method": "bdev_enable_histogram", 00:19:45.073 "params": { 00:19:45.073 "name": "nvme0n1", 00:19:45.073 "enable": true 00:19:45.073 } 00:19:45.073 }, 00:19:45.073 { 00:19:45.073 "method": "bdev_wait_for_examine" 00:19:45.073 } 00:19:45.073 ] 00:19:45.073 }, 00:19:45.073 { 00:19:45.073 "subsystem": "nbd", 00:19:45.073 "config": [] 00:19:45.073 } 00:19:45.073 ] 00:19:45.073 }' 00:19:45.073 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:45.073 09:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:45.073 [2024-12-11 09:57:54.452572] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
00:19:45.073 [2024-12-11 09:57:54.452618] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104947 ] 00:19:45.073 [2024-12-11 09:57:54.529686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.073 [2024-12-11 09:57:54.568551] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:45.331 [2024-12-11 09:57:54.721454] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:45.895 09:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:45.895 09:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:45.895 09:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:45.895 09:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:19:45.895 09:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.895 09:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:46.153 Running I/O for 1 seconds... 00:19:47.085 5325.00 IOPS, 20.80 MiB/s 00:19:47.085 Latency(us) 00:19:47.085 [2024-12-11T08:57:56.660Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:47.085 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:47.085 Verification LBA range: start 0x0 length 0x2000 00:19:47.085 nvme0n1 : 1.01 5382.14 21.02 0.00 0.00 23622.30 5305.30 21970.16 00:19:47.085 [2024-12-11T08:57:56.660Z] =================================================================================================================== 00:19:47.085 [2024-12-11T08:57:56.660Z] Total : 5382.14 21.02 0.00 0.00 23622.30 5305.30 21970.16 00:19:47.085 { 00:19:47.085 "results": [ 00:19:47.085 { 00:19:47.085 "job": "nvme0n1", 00:19:47.085 "core_mask": "0x2", 00:19:47.085 "workload": "verify", 00:19:47.085 "status": "finished", 00:19:47.085 "verify_range": { 00:19:47.085 "start": 0, 00:19:47.085 "length": 8192 00:19:47.085 }, 00:19:47.085 "queue_depth": 128, 00:19:47.085 "io_size": 4096, 00:19:47.085 "runtime": 1.013166, 00:19:47.085 "iops": 5382.138761071729, 00:19:47.085 "mibps": 21.023979535436442, 00:19:47.085 "io_failed": 0, 00:19:47.085 "io_timeout": 0, 00:19:47.085 "avg_latency_us": 23622.301915066415, 00:19:47.085 "min_latency_us": 5305.295238095238, 00:19:47.085 "max_latency_us": 21970.16380952381 00:19:47.085 } 00:19:47.085 ], 00:19:47.085 "core_count": 1 00:19:47.085 } 00:19:47.085 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:19:47.085 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:19:47.085 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:19:47.085 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:19:47.085 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:19:47.085 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = 
--pid ']' 00:19:47.085 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:47.085 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:47.085 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:47.085 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:47.085 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:47.085 nvmf_trace.0 00:19:47.343 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:19:47.343 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 104947 00:19:47.343 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 104947 ']' 00:19:47.343 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 104947 00:19:47.343 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:47.343 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:47.343 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104947 00:19:47.343 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:47.343 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:47.343 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104947' 00:19:47.343 killing process with pid 104947 00:19:47.343 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 104947 00:19:47.343 Received shutdown signal, test time was about 1.000000 seconds 00:19:47.343 00:19:47.343 Latency(us) 00:19:47.343 [2024-12-11T08:57:56.918Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:47.343 [2024-12-11T08:57:56.918Z] =================================================================================================================== 00:19:47.343 [2024-12-11T08:57:56.918Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:47.343 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 104947 00:19:47.343 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:47.343 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:47.343 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:19:47.343 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:47.343 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:19:47.343 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:47.343 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:47.343 rmmod nvme_tcp 00:19:47.601 rmmod nvme_fabrics 00:19:47.601 rmmod nvme_keyring 00:19:47.601 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:47.601 09:57:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:19:47.601 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:19:47.601 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 104705 ']' 00:19:47.601 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 104705 00:19:47.601 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 104705 ']' 00:19:47.601 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 104705 00:19:47.601 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:47.601 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:47.601 09:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104705 00:19:47.601 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:47.601 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:47.601 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104705' 00:19:47.601 killing process with pid 104705 00:19:47.601 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 104705 00:19:47.601 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 104705 00:19:47.601 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:47.601 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:47.602 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:47.602 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:19:47.861 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:19:47.861 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:47.861 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:19:47.861 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:47.861 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:47.861 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.861 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:47.861 09:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:49.766 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:49.766 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.rwTZwzuK5Z /tmp/tmp.dKZyChZVxZ /tmp/tmp.UcyKdYLWBg 00:19:49.766 00:19:49.766 real 1m22.743s 00:19:49.766 user 2m5.170s 00:19:49.766 sys 0m31.480s 00:19:49.766 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:49.766 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:49.766 ************************************ 00:19:49.766 END TEST nvmf_tls 00:19:49.766 
************************************ 00:19:49.766 09:57:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:49.767 09:57:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:49.767 09:57:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:49.767 09:57:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:49.767 ************************************ 00:19:49.767 START TEST nvmf_fips 00:19:49.767 ************************************ 00:19:49.767 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:50.027 * Looking for test storage... 00:19:50.027 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:50.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.027 --rc genhtml_branch_coverage=1 00:19:50.027 --rc genhtml_function_coverage=1 00:19:50.027 --rc genhtml_legend=1 00:19:50.027 --rc geninfo_all_blocks=1 00:19:50.027 --rc geninfo_unexecuted_blocks=1 00:19:50.027 00:19:50.027 ' 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:50.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.027 --rc genhtml_branch_coverage=1 00:19:50.027 --rc genhtml_function_coverage=1 00:19:50.027 --rc genhtml_legend=1 00:19:50.027 --rc geninfo_all_blocks=1 00:19:50.027 --rc geninfo_unexecuted_blocks=1 00:19:50.027 00:19:50.027 ' 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:50.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.027 --rc genhtml_branch_coverage=1 00:19:50.027 --rc genhtml_function_coverage=1 00:19:50.027 --rc genhtml_legend=1 00:19:50.027 --rc geninfo_all_blocks=1 00:19:50.027 --rc geninfo_unexecuted_blocks=1 00:19:50.027 00:19:50.027 ' 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:50.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.027 --rc genhtml_branch_coverage=1 00:19:50.027 --rc genhtml_function_coverage=1 00:19:50.027 --rc genhtml_legend=1 00:19:50.027 --rc geninfo_all_blocks=1 00:19:50.027 --rc geninfo_unexecuted_blocks=1 00:19:50.027 00:19:50.027 ' 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:50.027 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:50.028 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:19:50.028 09:57:59 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]]
00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat -
00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf
00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf
00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers
00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers
00:19:50.028 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name
00:19:50.287 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 ))
00:19:50.287 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]]
00:19:50.287 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]]
00:19:50.287 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62
00:19:50.287 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0
00:19:50.287 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # :
00:19:50.287 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62
00:19:50.287 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl
00:19:50.287 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:50.287 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl
00:19:50.287 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:50.287 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl
00:19:50.287 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:50.287 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl
00:19:50.287 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]]
00:19:50.287 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62
00:19:50.287 Error setting digest
00:19:50.287 40829391097F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties ()
00:19:50.287 40829391097F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272:
00:19:50.287 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1
00:19:50.287 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:19:50.288 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:19:50.288 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:19:50.288 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit
00:19:50.288 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:19:50.288
09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:50.288 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:50.288 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:50.288 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:50.288 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:50.288 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:50.288 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:50.288 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:50.288 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:50.288 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:19:50.288 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:56.856 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:56.856 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:19:56.856 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:56.856 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:56.856 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:56.856 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:56.856 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:56.856 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:19:56.856 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:56.856 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:19:56.856 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:19:56.856 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:19:56.856 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:19:56.856 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:19:56.856 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:19:56.856 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:56.856 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:56.856 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:56.856 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:56.856 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:56.856 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:56.856 09:58:06 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:56.856 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:56.856 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:56.856 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:56.856 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:56.856 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:56.856 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:56.856 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:56.856 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:56.856 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:56.856 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:56.856 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:56.856 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:56.856 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:56.856 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:56.856 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:56.856 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:56.856 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:56.856 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:56.856 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:56.856 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:56.856 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:56.856 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:56.856 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:56.856 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:56.856 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:56.857 09:58:06 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:56.857 Found net devices under 0000:af:00.0: cvl_0_0 00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:56.857 Found net devices under 0000:af:00.1: cvl_0_1 00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:56.857 09:58:06 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:19:56.857 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:19:56.857 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.353 ms
00:19:56.857
00:19:56.857 --- 10.0.0.2 ping statistics ---
00:19:56.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:56.857 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms
00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:19:56.857 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:19:56.857 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms
00:19:56.857
00:19:56.857 --- 10.0.0.1 ping statistics ---
00:19:56.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:56.857 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms
00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0
00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2
00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable
00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x
00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=109219
00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 109219
00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 109219 ']'
00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100
00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:56.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable
00:19:56.857 09:58:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x
00:19:57.116 [2024-12-11 09:58:06.490674] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization...
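The ip/iptables sequence traced above splits the two E810 ports between a target-side network namespace and the root (initiator) namespace, then verifies reachability in both directions. The same topology can be reproduced on a machine without the physical NICs; this is a minimal sketch using a veth pair, where the namespace and interface names are illustrative stand-ins, not the harness's:

  # Target side lives in its own namespace; initiator stays in the root namespace.
  ip netns add tgt_ns                                           # stands in for cvl_0_0_ns_spdk
  ip link add veth_ini type veth peer name veth_tgt             # veth pair replaces the two NIC ports
  ip link set veth_tgt netns tgt_ns                             # like 'ip link set cvl_0_0 netns ...'
  ip addr add 10.0.0.1/24 dev veth_ini                          # initiator address
  ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt     # target address
  ip link set veth_ini up
  ip netns exec tgt_ns ip link set veth_tgt up
  ip netns exec tgt_ns ip link set lo up
  iptables -I INPUT 1 -i veth_ini -p tcp --dport 4420 -j ACCEPT # open the NVMe/TCP port, as the harness does
  ping -c 1 10.0.0.2                                            # root namespace -> target namespace
  ip netns exec tgt_ns ping -c 1 10.0.0.1                       # target namespace -> root namespace

Running the target under `ip netns exec tgt_ns` then gives it a network stack fully isolated from the initiator, which is exactly why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD above.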
00:19:57.116 [2024-12-11 09:58:06.490714] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:57.116 [2024-12-11 09:58:06.574031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.116 [2024-12-11 09:58:06.611485] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:57.116 [2024-12-11 09:58:06.611514] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:57.116 [2024-12-11 09:58:06.611521] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:57.116 [2024-12-11 09:58:06.611527] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:57.116 [2024-12-11 09:58:06.611532] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:57.116 [2024-12-11 09:58:06.612090] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:58.049 09:58:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:58.049 09:58:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:58.049 09:58:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:58.049 09:58:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:58.049 09:58:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:58.049 09:58:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:58.049 09:58:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:19:58.049 09:58:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:58.049 09:58:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:19:58.049 09:58:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.Bgz 00:19:58.049 09:58:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:58.049 09:58:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.Bgz 00:19:58.049 09:58:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.Bgz 00:19:58.049 09:58:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.Bgz 00:19:58.049 09:58:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:58.049 [2024-12-11 09:58:07.510520] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:58.049 [2024-12-11 09:58:07.526526] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:58.049 [2024-12-11 09:58:07.526680] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:58.049 malloc0 00:19:58.049 09:58:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:58.049 09:58:07 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=109466 00:19:58.049 09:58:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:58.049 09:58:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 109466 /var/tmp/bdevperf.sock 00:19:58.049 09:58:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 109466 ']' 00:19:58.049 09:58:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:58.049 09:58:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:58.049 09:58:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:58.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:58.049 09:58:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:58.049 09:58:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:58.307 [2024-12-11 09:58:07.660167] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:19:58.307 [2024-12-11 09:58:07.660215] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109466 ] 00:19:58.307 [2024-12-11 09:58:07.741159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.307 [2024-12-11 09:58:07.782773] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:59.240 09:58:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:59.240 09:58:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:59.240 09:58:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.Bgz 00:19:59.240 09:58:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:59.498 [2024-12-11 09:58:08.847459] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:59.498 TLSTESTn1 00:19:59.498 09:58:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:59.498 Running I/O for 10 seconds... 
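Stripped of the xtrace noise, the TLS data-path check traced above boils down to the following sequence. This is a condensed replay under stated assumptions, not harness code: $SPDK is assumed to point at the checkout, the PSK is the published test key printed in the trace (not a secret), and a short sleep stands in for the harness's waitforlisten helper.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Write the interleaved TLS PSK to a mode-0600 file, as fips.sh does.
  key_path=$(mktemp -t spdk-psk.XXX)
  echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
  chmod 0600 "$key_path"
  # bdevperf in passive (-z) mode: 128-deep 4 KiB verify workload for 10 s on core 2.
  "$SPDK/build/examples/bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  sleep 1   # the harness waits on the RPC socket (waitforlisten) instead of sleeping
  # Register the PSK with the keyring, then attach to the TLS listener at 10.0.0.2:4420.
  "$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
  "$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  # Kick off the I/O phase whose IOPS ticker and latency summary follow below.
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests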
00:20:01.810 4746.00 IOPS, 18.54 MiB/s
[2024-12-11T08:58:12.321Z] 5115.50 IOPS, 19.98 MiB/s
[2024-12-11T08:58:13.258Z] 5255.33 IOPS, 20.53 MiB/s
[2024-12-11T08:58:14.194Z] 5346.00 IOPS, 20.88 MiB/s
[2024-12-11T08:58:15.131Z] 5354.00 IOPS, 20.91 MiB/s
[2024-12-11T08:58:16.066Z] 5395.83 IOPS, 21.08 MiB/s
[2024-12-11T08:58:17.442Z] 5403.29 IOPS, 21.11 MiB/s
[2024-12-11T08:58:18.378Z] 5426.25 IOPS, 21.20 MiB/s
[2024-12-11T08:58:19.314Z] 5442.22 IOPS, 21.26 MiB/s
[2024-12-11T08:58:19.314Z] 5448.60 IOPS, 21.28 MiB/s
00:20:09.739 Latency(us)
00:20:09.739 [2024-12-11T08:58:19.314Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:09.739 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:09.739 Verification LBA range: start 0x0 length 0x2000
00:20:09.739 TLSTESTn1 : 10.01 5454.05 21.30 0.00 0.00 23434.52 5461.33 53677.10
00:20:09.739 [2024-12-11T08:58:19.314Z] ===================================================================================================================
00:20:09.739 [2024-12-11T08:58:19.314Z] Total : 5454.05 21.30 0.00 0.00 23434.52 5461.33 53677.10
00:20:09.739 {
00:20:09.739 "results": [
00:20:09.739 {
00:20:09.739 "job": "TLSTESTn1",
00:20:09.739 "core_mask": "0x4",
00:20:09.739 "workload": "verify",
00:20:09.739 "status": "finished",
00:20:09.739 "verify_range": {
00:20:09.739 "start": 0,
00:20:09.739 "length": 8192
00:20:09.739 },
00:20:09.739 "queue_depth": 128,
00:20:09.739 "io_size": 4096,
00:20:09.739 "runtime": 10.012927,
00:20:09.739 "iops": 5454.049550146526,
00:20:09.739 "mibps": 21.304881055259866,
00:20:09.739 "io_failed": 0,
00:20:09.739 "io_timeout": 0,
00:20:09.739 "avg_latency_us": 23434.516087688593,
00:20:09.739 "min_latency_us": 5461.333333333333,
00:20:09.739 "max_latency_us": 53677.10476190476
00:20:09.739 }
00:20:09.739 ],
00:20:09.739 "core_count": 1
00:20:09.740 }
00:20:09.740 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup
00:20:09.740 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0
00:20:09.740 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id
00:20:09.740 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0
00:20:09.740 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']'
00:20:09.740 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:20:09.740 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0
00:20:09.740 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]]
00:20:09.740 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files
00:20:09.740 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:20:09.740 nvmf_trace.0
00:20:09.740 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0
00:20:09.740 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 109466
00:20:09.740 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 109466 ']'
00:20:09.740 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips --
common/autotest_common.sh@958 -- # kill -0 109466 00:20:09.740 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:20:09.740 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:09.740 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109466 00:20:09.740 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:09.740 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:09.740 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109466' 00:20:09.740 killing process with pid 109466 00:20:09.740 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 109466 00:20:09.740 Received shutdown signal, test time was about 10.000000 seconds 00:20:09.740 00:20:09.740 Latency(us) 00:20:09.740 [2024-12-11T08:58:19.315Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.740 [2024-12-11T08:58:19.315Z] =================================================================================================================== 00:20:09.740 [2024-12-11T08:58:19.315Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:09.740 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 109466 00:20:09.999 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:09.999 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:09.999 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:20:09.999 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:09.999 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:20:09.999 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:09.999 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:09.999 rmmod nvme_tcp 00:20:09.999 rmmod nvme_fabrics 00:20:09.999 rmmod nvme_keyring 00:20:09.999 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:09.999 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:20:09.999 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:20:09.999 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 109219 ']' 00:20:09.999 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 109219 00:20:09.999 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 109219 ']' 00:20:09.999 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 109219 00:20:09.999 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:20:09.999 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:09.999 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109219 00:20:09.999 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:09.999 09:58:19 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:20:09.999 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109219'
00:20:09.999 killing process with pid 109219
00:20:09.999 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 109219
00:20:09.999 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 109219
00:20:10.258 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:20:10.258 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:20:10.258 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:20:10.258 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr
00:20:10.258 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save
00:20:10.258 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:20:10.258 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore
00:20:10.258 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:20:10.258 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns
00:20:10.258 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:10.258 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:20:10.258 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:12.162 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:20:12.162 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.Bgz
00:20:12.162
00:20:12.162 real 0m22.411s
00:20:12.162 user 0m23.532s
00:20:12.162 sys 0m10.280s
00:20:12.162 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:12.162 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x
00:20:12.162 ************************************
00:20:12.162 END TEST nvmf_fips
00:20:12.162 ************************************
00:20:12.421 09:58:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp
00:20:12.421 09:58:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:20:12.421 09:58:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:12.421 09:58:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:20:12.421 ************************************
00:20:12.421 START TEST nvmf_control_msg_list
00:20:12.421 ************************************
00:20:12.421 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp
00:20:12.421 * Looking for test storage...
00:20:12.421 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:12.421 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:12.421 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:20:12.421 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:12.421 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:12.421 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:12.421 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:12.421 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:12.421 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:20:12.421 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:20:12.421 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:20:12.421 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:20:12.421 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:20:12.421 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:20:12.421 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:20:12.421 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:12.421 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:20:12.421 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:20:12.421 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:12.421 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:12.421 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:20:12.421 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:20:12.421 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:12.421 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:20:12.421 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:20:12.421 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:20:12.421 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:20:12.421 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:12.421 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:20:12.421 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:20:12.421 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:12.421 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:12.421 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:20:12.421 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:12.422 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:12.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.422 --rc genhtml_branch_coverage=1 00:20:12.422 --rc genhtml_function_coverage=1 00:20:12.422 --rc genhtml_legend=1 00:20:12.422 --rc geninfo_all_blocks=1 00:20:12.422 --rc geninfo_unexecuted_blocks=1 00:20:12.422 00:20:12.422 ' 00:20:12.422 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:12.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.422 --rc genhtml_branch_coverage=1 00:20:12.422 --rc genhtml_function_coverage=1 00:20:12.422 --rc genhtml_legend=1 00:20:12.422 --rc geninfo_all_blocks=1 00:20:12.422 --rc geninfo_unexecuted_blocks=1 00:20:12.422 00:20:12.422 ' 00:20:12.422 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:12.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.422 --rc genhtml_branch_coverage=1 00:20:12.422 --rc genhtml_function_coverage=1 00:20:12.422 --rc genhtml_legend=1 00:20:12.422 --rc geninfo_all_blocks=1 00:20:12.422 --rc geninfo_unexecuted_blocks=1 00:20:12.422 00:20:12.422 ' 00:20:12.422 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:12.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.422 --rc genhtml_branch_coverage=1 00:20:12.422 --rc genhtml_function_coverage=1 00:20:12.422 --rc genhtml_legend=1 00:20:12.422 --rc geninfo_all_blocks=1 00:20:12.422 --rc geninfo_unexecuted_blocks=1 00:20:12.422 00:20:12.422 ' 00:20:12.422 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:12.422 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:20:12.422 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:12.422 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:12.422 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:12.422 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:12.422 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:12.422 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:12.422 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:12.422 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:12.422 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:12.422 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:12.681 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:12.681 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:20:12.681 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:12.681 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:12.681 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:12.681 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:12.681 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:12.681 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:20:12.681 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:12.681 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:12.681 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:12.681 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.681 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.681 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.681 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:20:12.681 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.681 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:20:12.681 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:12.681 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:12.681 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:12.681 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:12.681 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:12.681 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:12.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:12.681 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:12.681 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:12.681 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:12.681 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:20:12.681 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:12.681 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:12.681 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:12.681 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:12.681 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:12.681 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.681 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:12.681 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.681 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:12.681 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:12.682 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:20:12.682 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:19.248 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:19.248 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:20:19.248 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:19.248 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:19.248 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:19.248 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:19.248 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:19.248 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:20:19.248 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:19.248 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:20:19.248 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:20:19.248 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:20:19.248 09:58:28 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:20:19.248 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:20:19.248 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:20:19.248 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:19.248 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:19.248 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:19.249 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:19.249 09:58:28 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:19.249 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:19.249 Found net devices under 0000:af:00.0: cvl_0_0 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:19.249 Found net devices under 0000:af:00.1: cvl_0_1 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:19.249 09:58:28 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:19.249 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:19.249 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms 00:20:19.249 00:20:19.249 --- 10.0.0.2 ping statistics --- 00:20:19.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.249 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:19.249 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:19.249 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:20:19.249 00:20:19.249 --- 10.0.0.1 ping statistics --- 00:20:19.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.249 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=115281 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:19.249 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 115281 00:20:19.250 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 115281 ']' 00:20:19.250 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 
-- # local rpc_addr=/var/tmp/spdk.sock 00:20:19.250 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:19.250 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:19.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:19.250 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:19.250 09:58:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:19.250 [2024-12-11 09:58:28.818888] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:20:19.250 [2024-12-11 09:58:28.818931] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:19.508 [2024-12-11 09:58:28.903334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.508 [2024-12-11 09:58:28.942675] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:19.508 [2024-12-11 09:58:28.942710] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:19.508 [2024-12-11 09:58:28.942717] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:19.508 [2024-12-11 09:58:28.942722] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:19.508 [2024-12-11 09:58:28.942727] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
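The nvmftestinit plumbing captured in the trace above reduces to a handful of iproute2 commands: park one port of the E810 pair in a private network namespace, address both ends of the 10.0.0.0/24 link, open TCP port 4420, and prove reachability with ping before the target starts inside the namespace. A minimal sketch of that sequence, assuming the cvl_0_0/cvl_0_1 names from this run (the ice driver will expose different names on other hosts):

    ip netns add cvl_0_0_ns_spdk                        # namespace that will hold the target port
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target-side port leaves the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the initiator port
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator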
00:20:19.508 [2024-12-11 09:58:28.943270] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.508 09:58:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:19.508 09:58:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:20:19.508 09:58:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:19.508 09:58:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:19.508 09:58:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:19.508 09:58:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:19.508 09:58:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:19.508 09:58:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:19.508 09:58:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:20:19.508 09:58:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.508 09:58:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:19.508 [2024-12-11 09:58:29.079388] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:19.767 09:58:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.767 09:58:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:20:19.767 09:58:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.767 09:58:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:19.767 09:58:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.767 09:58:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:19.767 09:58:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.767 09:58:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:19.767 Malloc0 00:20:19.767 09:58:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.767 09:58:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:19.767 09:58:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.767 09:58:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:19.767 09:58:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.767 09:58:29 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:19.767 09:58:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.767 09:58:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:19.767 [2024-12-11 09:58:29.123667] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:19.767 09:58:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.767 09:58:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=115304 00:20:19.767 09:58:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:19.767 09:58:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=115305 00:20:19.767 09:58:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:19.767 09:58:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=115306 00:20:19.767 09:58:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:19.767 09:58:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 115304 00:20:19.767 [2024-12-11 09:58:29.182036] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:19.767 [2024-12-11 09:58:29.212152] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:19.767 [2024-12-11 09:58:29.212299] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:20.752 Initializing NVMe Controllers 00:20:20.752 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:20.752 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:20:20.752 Initialization complete. Launching workers. 
00:20:20.752 ======================================================== 00:20:20.752 Latency(us) 00:20:20.752 Device Information : IOPS MiB/s Average min max 00:20:20.752 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 41036.36 40724.75 42010.04 00:20:20.752 ======================================================== 00:20:20.752 Total : 25.00 0.10 41036.36 40724.75 42010.04 00:20:20.752 00:20:20.752 Initializing NVMe Controllers 00:20:20.752 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:20.752 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:20:20.752 Initialization complete. Launching workers. 00:20:20.752 ======================================================== 00:20:20.752 Latency(us) 00:20:20.752 Device Information : IOPS MiB/s Average min max 00:20:20.752 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 5782.00 22.59 172.59 127.76 41097.32 00:20:20.752 ======================================================== 00:20:20.752 Total : 5782.00 22.59 172.59 127.76 41097.32 00:20:20.752 00:20:20.752 Initializing NVMe Controllers 00:20:20.752 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:20.752 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:20:20.752 Initialization complete. Launching workers. 00:20:20.752 ======================================================== 00:20:20.752 Latency(us) 00:20:20.752 Device Information : IOPS MiB/s Average min max 00:20:20.752 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 5901.00 23.05 169.10 127.91 478.88 00:20:20.752 ======================================================== 00:20:20.752 Total : 5901.00 23.05 169.10 127.91 478.88 00:20:20.752 00:20:20.752 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 115305 00:20:20.752 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 115306 00:20:20.752 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:20.752 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:20:21.054 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:21.054 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:20:21.054 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:21.054 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:20:21.054 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:21.054 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:21.054 rmmod nvme_tcp 00:20:21.054 rmmod nvme_fabrics 00:20:21.054 rmmod nvme_keyring 00:20:21.054 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:21.054 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:20:21.054 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:20:21.054 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- 
# '[' -n 115281 ']' 00:20:21.054 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 115281 00:20:21.054 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 115281 ']' 00:20:21.054 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 115281 00:20:21.054 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:20:21.054 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:21.054 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 115281 00:20:21.054 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:21.054 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:21.054 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 115281' 00:20:21.054 killing process with pid 115281 00:20:21.054 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 115281 00:20:21.054 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 115281 00:20:21.054 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:21.054 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:21.054 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:21.055 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:20:21.055 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:20:21.055 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:21.055 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:20:21.055 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:21.055 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:21.055 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.055 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:21.055 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.588 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:23.588 00:20:23.588 real 0m10.875s 00:20:23.588 user 0m6.678s 00:20:23.588 sys 0m6.108s 00:20:23.588 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:23.588 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:23.588 ************************************ 00:20:23.588 END TEST nvmf_control_msg_list 00:20:23.588 ************************************ 
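Stripped of the harness, the control_msg_list test that just finished drives one transport knob and three competing initiators. A sketch of the equivalent RPC sequence, assuming a running nvmf_tgt and SPDK's scripts/rpc.py invoked from the repo root (method names and flags are copied from the trace):

    # A single control message buffer forces concurrent fabric/admin traffic to queue
    scripts/rpc.py nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
    scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
    scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # Three single-queue perf instances on separate cores then connect at once; the ~41 ms
    # averages in the latency tables above are consistent with connects waiting on that one buffer
    for mask in 0x2 0x4 0x8; do
      build/bin/spdk_nvme_perf -c "$mask" -q 1 -o 4096 -w randread -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    done
    wait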
00:20:23.588 09:58:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:23.588 09:58:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:23.588 09:58:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:23.588 09:58:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:23.588 ************************************ 00:20:23.588 START TEST nvmf_wait_for_buf 00:20:23.588 ************************************ 00:20:23.588 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:23.588 * Looking for test storage... 00:20:23.588 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:23.588 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:23.588 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:20:23.588 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:23.588 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:23.588 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:23.588 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:23.588 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:23.588 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:20:23.588 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:20:23.588 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:20:23.588 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:20:23.588 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:20:23.588 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:20:23.588 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:20:23.588 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:23.588 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:20:23.588 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:20:23.588 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:23.588 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:23.588 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:20:23.588 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:20:23.588 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:23.588 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:20:23.588 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:23.588 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:20:23.588 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:20:23.588 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:23.588 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:20:23.588 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:23.588 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:23.588 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:23.588 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:20:23.588 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:23.588 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:23.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.588 --rc genhtml_branch_coverage=1 00:20:23.588 --rc genhtml_function_coverage=1 00:20:23.588 --rc genhtml_legend=1 00:20:23.588 --rc geninfo_all_blocks=1 00:20:23.588 --rc geninfo_unexecuted_blocks=1 00:20:23.588 00:20:23.588 ' 00:20:23.588 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:23.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.588 --rc genhtml_branch_coverage=1 00:20:23.588 --rc genhtml_function_coverage=1 00:20:23.588 --rc genhtml_legend=1 00:20:23.588 --rc geninfo_all_blocks=1 00:20:23.588 --rc geninfo_unexecuted_blocks=1 00:20:23.588 00:20:23.588 ' 00:20:23.588 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:23.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.588 --rc genhtml_branch_coverage=1 00:20:23.588 --rc genhtml_function_coverage=1 00:20:23.588 --rc genhtml_legend=1 00:20:23.588 --rc geninfo_all_blocks=1 00:20:23.588 --rc geninfo_unexecuted_blocks=1 00:20:23.588 00:20:23.588 ' 00:20:23.588 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:23.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.588 --rc genhtml_branch_coverage=1 00:20:23.588 --rc genhtml_function_coverage=1 00:20:23.588 --rc genhtml_legend=1 00:20:23.588 --rc geninfo_all_blocks=1 00:20:23.588 --rc geninfo_unexecuted_blocks=1 00:20:23.588 00:20:23.588 ' 00:20:23.589 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:23.589 09:58:32 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:20:23.589 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:23.589 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:23.589 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:23.589 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:23.589 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:23.589 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:23.589 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:23.589 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:23.589 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:23.589 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:23.589 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:23.589 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:20:23.589 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:23.589 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:23.589 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:23.589 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:23.589 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:23.589 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:23.589 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:23.589 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:23.589 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:23.589 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.589 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.589 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:20:23.589 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:20:23.589 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:23.589 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:23.589 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:23.589 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:23.589 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:23.589 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:23.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:23.589 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:23.589 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:23.589 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:23.589 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:20:23.589 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:20:23.589 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:23.589 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:23.589 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:23.589 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:23.589 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:23.589 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:23.589 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.589 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:23.589 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:23.589 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:20:23.589 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:30.160 
09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:30.160 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:30.160 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:30.160 Found net devices under 0000:af:00.0: cvl_0_0 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:30.160 Found net devices under 0000:af:00.1: cvl_0_1 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:30.160 09:58:39 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:30.160 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:30.161 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:30.161 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:30.161 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:30.161 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:30.161 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:30.161 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:30.161 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:30.161 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:30.161 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:30.161 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:30.161 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:30.161 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:30.161 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:30.161 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:30.161 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:20:30.161 00:20:30.161 --- 10.0.0.2 ping statistics --- 00:20:30.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:30.161 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:20:30.161 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:30.161 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:30.161 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:20:30.161 00:20:30.161 --- 10.0.0.1 ping statistics --- 00:20:30.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:30.161 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:20:30.161 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:30.161 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:20:30.161 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:30.161 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:30.161 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:30.161 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:30.161 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:30.161 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:30.161 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:30.161 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:20:30.161 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:30.161 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:30.161 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:30.161 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=119363 00:20:30.161 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:30.161 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 119363 00:20:30.161 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 119363 ']' 00:20:30.161 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:30.161 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:30.161 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:30.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:30.161 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:30.161 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:30.420 [2024-12-11 09:58:39.771769] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
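The nvmf_tcp_init sequence traced above turns the two E810 ports into a point-to-point test topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), an iptables rule opens TCP port 4420 on the initiator interface, and both directions are verified with a single ping before the target application is launched. A minimal standalone sketch of the same setup, using the interface names and addresses from this run (adjust for other hosts):

#!/usr/bin/env bash
# Sketch of the namespace topology built by nvmf_tcp_init above; not the verbatim helper.
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                              # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0      # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT # open the NVMe/TCP listener port
ping -c 1 10.0.0.2                                           # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                       # target -> initiator

Every subsequent target-side command then runs under "ip netns exec cvl_0_0_ns_spdk", which is why the nvmf_tgt command line below is prefixed with the namespace wrapper from NVMF_TARGET_NS_CMD.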
00:20:30.420 [2024-12-11 09:58:39.771817] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:30.420 [2024-12-11 09:58:39.859075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.420 [2024-12-11 09:58:39.900303] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:30.420 [2024-12-11 09:58:39.900339] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:30.420 [2024-12-11 09:58:39.900346] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:30.420 [2024-12-11 09:58:39.900352] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:30.420 [2024-12-11 09:58:39.900357] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:30.420 [2024-12-11 09:58:39.900914] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:30.420 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:30.420 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:20:30.420 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:30.420 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:30.420 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:30.420 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:30.420 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:30.420 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:30.420 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:20:30.420 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.420 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:30.420 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.420 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:20:30.420 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.420 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:30.679 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.679 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:20:30.679 09:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.679 09:58:39 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:30.679 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.679 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:30.679 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.679 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:30.679 Malloc0 00:20:30.679 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.679 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:20:30.679 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.679 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:30.679 [2024-12-11 09:58:40.076856] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:30.679 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.679 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:20:30.680 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.680 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:30.680 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.680 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:30.680 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.680 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:30.680 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.680 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:30.680 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.680 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:30.680 [2024-12-11 09:58:40.101035] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:30.680 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.680 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:30.680 [2024-12-11 09:58:40.188311] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:20:32.057 Initializing NVMe Controllers
00:20:32.057 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:20:32.057 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0
00:20:32.057 Initialization complete. Launching workers.
00:20:32.057 ========================================================
00:20:32.057 Latency(us)
00:20:32.057 Device Information : IOPS MiB/s Average min max
00:20:32.057 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 127.93 15.99 32365.69 7253.37 63846.55
00:20:32.057 ========================================================
00:20:32.057 Total : 127.93 15.99 32365.69 7253.37 63846.55
00:20:32.057
00:20:32.057 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats
00:20:32.057 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'
00:20:32.057 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:32.057 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:20:32.057 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:32.057 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2022
00:20:32.057 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2022 -eq 0 ]]
00:20:32.057 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:20:32.057 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini
00:20:32.057 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup
00:20:32.057 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync
00:20:32.057 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:20:32.057 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e
00:20:32.057 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20}
00:20:32.057 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:20:32.057 rmmod nvme_tcp
00:20:32.317 rmmod nvme_fabrics
00:20:32.317 rmmod nvme_keyring
00:20:32.317 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:20:32.317 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e
00:20:32.317 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0
00:20:32.317 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 119363 ']'
00:20:32.317 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 119363
00:20:32.317 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 119363 ']'
00:20:32.317 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 119363
00:20:32.317 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@959 -- # uname 00:20:32.317 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:32.317 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 119363 00:20:32.317 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:32.317 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:32.317 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 119363' 00:20:32.317 killing process with pid 119363 00:20:32.317 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 119363 00:20:32.317 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 119363 00:20:32.317 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:32.317 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:32.317 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:32.317 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:20:32.317 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:20:32.317 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:32.317 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:20:32.317 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:32.317 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:32.317 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:32.317 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:32.317 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:34.853 09:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:34.853 00:20:34.853 real 0m11.189s 00:20:34.853 user 0m4.120s 00:20:34.853 sys 0m5.541s 00:20:34.853 09:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:34.853 09:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:34.853 ************************************ 00:20:34.853 END TEST nvmf_wait_for_buf 00:20:34.853 ************************************ 00:20:34.853 09:58:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:20:34.853 09:58:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:20:34.853 09:58:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:20:34.853 09:58:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:20:34.853 09:58:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:20:34.853 09:58:43 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:41.426 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:41.426 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:41.426 Found net devices under 0000:af:00.0: cvl_0_0 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:41.426 Found net devices under 0000:af:00.1: cvl_0_1 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:41.426 ************************************ 00:20:41.426 START TEST nvmf_perf_adq 00:20:41.426 ************************************ 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:41.426 * Looking for test storage... 00:20:41.426 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:41.426 09:58:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:41.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.426 --rc genhtml_branch_coverage=1 00:20:41.426 --rc genhtml_function_coverage=1 00:20:41.426 --rc genhtml_legend=1 00:20:41.426 --rc geninfo_all_blocks=1 00:20:41.426 --rc geninfo_unexecuted_blocks=1 00:20:41.426 00:20:41.426 ' 00:20:41.426 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:41.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.426 --rc genhtml_branch_coverage=1 00:20:41.426 --rc genhtml_function_coverage=1 00:20:41.426 --rc genhtml_legend=1 00:20:41.426 --rc geninfo_all_blocks=1 00:20:41.426 --rc geninfo_unexecuted_blocks=1 00:20:41.426 00:20:41.427 ' 00:20:41.427 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:41.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.427 --rc genhtml_branch_coverage=1 00:20:41.427 --rc genhtml_function_coverage=1 00:20:41.427 --rc genhtml_legend=1 00:20:41.427 --rc geninfo_all_blocks=1 00:20:41.427 --rc geninfo_unexecuted_blocks=1 00:20:41.427 00:20:41.427 ' 00:20:41.427 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:41.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.427 --rc genhtml_branch_coverage=1 00:20:41.427 --rc genhtml_function_coverage=1 00:20:41.427 --rc genhtml_legend=1 00:20:41.427 --rc geninfo_all_blocks=1 00:20:41.427 --rc geninfo_unexecuted_blocks=1 00:20:41.427 00:20:41.427 ' 00:20:41.427 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
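The scripts/common.sh trace above is the coverage gate for this test: lcov --version is parsed with awk '{print $NF}', and lt 1.15 2 compares the installed lcov version against 2 field by field; because 1 < 2 in the first field, the branch- and function-coverage flags end up exported in LCOV_OPTS/LCOV. A condensed sketch of that comparison logic, reconstructed from the traced steps rather than copied from the script:

#!/usr/bin/env bash
# Condensed reconstruction of the cmp_versions/lt walk traced above; assumes numeric fields.
lt() {
    local -a ver1 ver2
    local v
    IFS=.- read -ra ver1 <<< "$1"    # "1.15" -> (1 15)
    IFS=.- read -ra ver2 <<< "$2"    # "2"    -> (2)
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # first differing field decides
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1                         # equal versions are not strictly less-than
}
if lt "$(lcov --version | awk '{print $NF}')" 2; then
    export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi

Missing fields default to 0, so "1.15" versus "2" is decided on the first field alone, exactly as in the trace.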
00:20:41.427 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:20:41.427 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:41.427 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:41.427 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:41.427 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:41.427 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:41.427 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:41.427 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:41.427 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:41.427 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:41.427 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:41.427 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:41.427 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:20:41.427 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:41.427 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:41.427 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:41.427 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:41.427 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:41.427 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:20:41.427 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:41.427 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:41.427 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:41.427 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.427 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.427 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.427 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:20:41.427 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.427 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:20:41.427 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:41.427 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:41.427 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:41.427 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:41.427 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:41.427 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:41.427 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:41.427 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:41.427 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:41.427 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:41.427 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:41.427 09:58:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:41.427 09:58:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:47.996 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:47.996 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:47.996 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:47.996 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:47.996 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:47.996 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:47.996 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:47.996 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:47.996 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:47.996 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:47.996 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:47.996 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:47.996 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:47.996 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:47.996 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:47.996 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:47.996 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:47.996 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:47.996 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:47.996 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:47.996 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:47.996 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:47.996 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:47.997 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:47.997 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:47.997 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:47.997 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:47.997 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:47.997 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:47.997 09:58:57 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:47.997 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:47.997 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:47.997 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:47.997 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:47.997 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:47.997 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:47.997 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:47.997 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:47.997 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:47.997 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:47.997 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:47.997 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:47.997 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:47.997 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:47.997 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:47.997 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:47.997 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:47.997 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:47.997 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:47.997 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:47.997 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:47.997 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:47.997 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:47.997 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:47.997 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:47.997 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:47.997 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:47.997 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:47.997 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:47.997 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:47.997 Found net devices under 0000:af:00.0: cvl_0_0 00:20:47.997 09:58:57 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:47.997 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:47.997 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:47.997 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:47.997 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:47.997 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:47.997 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:47.997 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:47.997 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:47.997 Found net devices under 0000:af:00.1: cvl_0_1 00:20:47.997 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:47.997 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:47.997 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:47.997 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:20:47.997 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:47.997 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:20:47.997 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:20:47.997 09:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:48.934 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:51.470 09:59:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:56.746 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:20:56.746 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:56.746 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:56.746 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:56.746 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:56.746 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:56.746 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:56.746 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:56.746 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:56.746 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:56.746 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:20:56.746 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:56.746 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:56.746 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:56.746 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:56.746 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:56.746 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:56.746 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:56.746 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:56.746 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:56.746 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:56.746 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:56.746 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:56.746 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:56.746 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:56.746 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:56.746 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:56.746 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:56.746 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:56.746 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:56.746 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:56.746 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:56.747 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:56.747 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:56.747 Found net devices under 0000:af:00.0: cvl_0_0 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:56.747 Found net devices under 0000:af:00.1: cvl_0_1 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:56.747 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:56.747 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.894 ms 00:20:56.747 00:20:56.747 --- 10.0.0.2 ping statistics --- 00:20:56.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:56.747 rtt min/avg/max/mdev = 0.894/0.894/0.894/0.000 ms 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:56.747 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:56.747 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:20:56.747 00:20:56.747 --- 10.0.0.1 ping statistics --- 00:20:56.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:56.747 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:56.747 09:59:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:56.747 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=128706 00:20:56.747 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 128706 00:20:56.747 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:56.747 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 128706 ']' 00:20:56.747 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:56.747 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:56.748 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:56.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:56.748 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:56.748 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:56.748 [2024-12-11 09:59:06.056147] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
00:20:56.748 [2024-12-11 09:59:06.056191] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:56.748 [2024-12-11 09:59:06.141559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:56.748 [2024-12-11 09:59:06.182822] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:56.748 [2024-12-11 09:59:06.182859] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:56.748 [2024-12-11 09:59:06.182867] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:56.748 [2024-12-11 09:59:06.182873] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:56.748 [2024-12-11 09:59:06.182878] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:56.748 [2024-12-11 09:59:06.184287] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:20:56.748 [2024-12-11 09:59:06.184393] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:20:56.748 [2024-12-11 09:59:06.184500] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.748 [2024-12-11 09:59:06.184501] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:20:57.684 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:57.684 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:20:57.684 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:57.684 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:57.684 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:57.684 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:57.684 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:20:57.684 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:57.684 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:57.684 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.684 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:57.684 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.684 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:57.684 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:57.684 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.684 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:57.684 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.684 
09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:57.684 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.684 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:57.684 09:59:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.684 09:59:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:57.684 09:59:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.684 09:59:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:57.684 [2024-12-11 09:59:07.076310] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:57.684 09:59:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.684 09:59:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:57.684 09:59:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.684 09:59:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:57.684 Malloc1 00:20:57.684 09:59:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.684 09:59:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:57.685 09:59:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.685 09:59:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:57.685 09:59:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.685 09:59:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:57.685 09:59:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.685 09:59:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:57.685 09:59:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.685 09:59:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:57.685 09:59:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.685 09:59:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:57.685 [2024-12-11 09:59:07.133442] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:57.685 09:59:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.685 09:59:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=128936 00:20:57.685 09:59:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:20:57.685 09:59:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:59.589 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:20:59.589 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.589 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:59.848 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.848 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:20:59.848 "tick_rate": 2100000000, 00:20:59.848 "poll_groups": [ 00:20:59.848 { 00:20:59.848 "name": "nvmf_tgt_poll_group_000", 00:20:59.848 "admin_qpairs": 1, 00:20:59.848 "io_qpairs": 1, 00:20:59.848 "current_admin_qpairs": 1, 00:20:59.848 "current_io_qpairs": 1, 00:20:59.848 "pending_bdev_io": 0, 00:20:59.848 "completed_nvme_io": 20154, 00:20:59.848 "transports": [ 00:20:59.848 { 00:20:59.848 "trtype": "TCP" 00:20:59.848 } 00:20:59.848 ] 00:20:59.848 }, 00:20:59.848 { 00:20:59.848 "name": "nvmf_tgt_poll_group_001", 00:20:59.848 "admin_qpairs": 0, 00:20:59.848 "io_qpairs": 1, 00:20:59.848 "current_admin_qpairs": 0, 00:20:59.848 "current_io_qpairs": 1, 00:20:59.848 "pending_bdev_io": 0, 00:20:59.848 "completed_nvme_io": 20012, 00:20:59.848 "transports": [ 00:20:59.848 { 00:20:59.848 "trtype": "TCP" 00:20:59.848 } 00:20:59.848 ] 00:20:59.848 }, 00:20:59.848 { 00:20:59.848 "name": "nvmf_tgt_poll_group_002", 00:20:59.848 "admin_qpairs": 0, 00:20:59.848 "io_qpairs": 1, 00:20:59.848 "current_admin_qpairs": 0, 00:20:59.848 "current_io_qpairs": 1, 00:20:59.848 "pending_bdev_io": 0, 00:20:59.848 "completed_nvme_io": 20004, 00:20:59.848 "transports": [ 00:20:59.848 { 00:20:59.848 "trtype": "TCP" 00:20:59.848 } 00:20:59.848 ] 00:20:59.848 }, 00:20:59.848 { 00:20:59.848 "name": "nvmf_tgt_poll_group_003", 00:20:59.848 "admin_qpairs": 0, 00:20:59.848 "io_qpairs": 1, 00:20:59.848 "current_admin_qpairs": 0, 00:20:59.848 "current_io_qpairs": 1, 00:20:59.848 "pending_bdev_io": 0, 00:20:59.848 "completed_nvme_io": 19893, 00:20:59.848 "transports": [ 00:20:59.848 { 00:20:59.848 "trtype": "TCP" 00:20:59.848 } 00:20:59.848 ] 00:20:59.848 } 00:20:59.848 ] 00:20:59.848 }' 00:20:59.848 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:20:59.848 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:20:59.848 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:20:59.848 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:20:59.848 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 128936 00:21:08.045 Initializing NVMe Controllers 00:21:08.045 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:08.045 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:08.045 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:08.045 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:08.045 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 
7 00:21:08.045 Initialization complete. Launching workers. 00:21:08.045 ======================================================== 00:21:08.045 Latency(us) 00:21:08.045 Device Information : IOPS MiB/s Average min max 00:21:08.045 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10348.60 40.42 6184.28 2132.38 10778.86 00:21:08.045 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10462.20 40.87 6118.02 2163.04 10803.03 00:21:08.045 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10452.00 40.83 6122.58 1864.27 10583.53 00:21:08.045 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10383.20 40.56 6163.05 1920.76 10947.70 00:21:08.045 ======================================================== 00:21:08.045 Total : 41646.00 162.68 6146.85 1864.27 10947.70 00:21:08.045 00:21:08.045 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:21:08.045 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:08.045 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:08.045 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:08.045 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:08.045 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:08.045 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:08.045 rmmod nvme_tcp 00:21:08.045 rmmod nvme_fabrics 00:21:08.045 rmmod nvme_keyring 00:21:08.045 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:08.045 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:08.045 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:08.045 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 128706 ']' 00:21:08.045 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 128706 00:21:08.045 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 128706 ']' 00:21:08.045 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 128706 00:21:08.045 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:21:08.045 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:08.045 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 128706 00:21:08.045 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:08.045 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:08.045 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 128706' 00:21:08.045 killing process with pid 128706 00:21:08.046 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 128706 00:21:08.046 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 128706 00:21:08.046 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:08.046 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:08.046 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:08.046 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:08.046 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:21:08.046 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:08.046 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:21:08.046 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:08.046 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:08.046 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.046 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:08.046 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.580 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:10.580 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:21:10.580 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:10.580 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:11.517 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:14.057 09:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:19.331 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:21:19.331 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:19.331 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:19.331 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:19.331 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:19.331 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:19.331 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.331 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:19.331 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:19.332 09:59:28 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:19.332 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:19.332 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:19.332 Found net devices under 0000:af:00.0: cvl_0_0 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:19.332 09:59:28 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:19.332 Found net devices under 0000:af:00.1: cvl_0_1 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:19.332 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:19.332 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=1.25 ms 00:21:19.332 00:21:19.332 --- 10.0.0.2 ping statistics --- 00:21:19.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.332 rtt min/avg/max/mdev = 1.250/1.250/1.250/0.000 ms 00:21:19.332 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:19.332 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:19.332 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:21:19.332 00:21:19.332 --- 10.0.0.1 ping statistics --- 00:21:19.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.333 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:21:19.333 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:19.333 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:21:19.333 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:19.333 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:19.333 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:19.333 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:19.333 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:19.333 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:19.333 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:19.333 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:21:19.333 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:21:19.333 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:19.333 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:19.333 net.core.busy_poll = 1 00:21:19.333 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 
00:21:19.333 net.core.busy_read = 1 00:21:19.333 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:19.333 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:19.333 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:21:19.333 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:19.333 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:19.594 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:19.594 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:19.594 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:19.594 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.594 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=132815 00:21:19.594 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 132815 00:21:19.594 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:19.594 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 132815 ']' 00:21:19.594 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:19.595 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:19.595 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:19.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:19.595 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:19.595 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.595 [2024-12-11 09:59:28.988295] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:21:19.595 [2024-12-11 09:59:28.988339] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:19.595 [2024-12-11 09:59:29.070889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:19.595 [2024-12-11 09:59:29.111083] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
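
adq_configure_driver above is where ADQ actually gets wired up: busy polling is enabled so the socket layer spins on the receive queue instead of waiting for interrupts, and the NIC is carved into two hardware traffic classes with a flower filter steering NVMe/TCP flows into the second one. A sketch of the same sequence, with the device name, addresses, and queue layout taken from the trace (in the log each command runs inside the target namespace via ip netns exec cvl_0_0_ns_spdk, dropped here for brevity):

    sysctl -w net.core.busy_poll=1     # spin on sockets with pending work
    sysctl -w net.core.busy_read=1
    ethtool --offload cvl_0_0 hw-tc-offload on
    ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    # two traffic classes: queues 0-1 serve TC0 (default traffic), queues 2-3
    # serve TC1, offloaded to the NIC in channel mode
    tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev cvl_0_0 ingress
    # steer NVMe/TCP (10.0.0.2:4420) into TC1 purely in hardware
    tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The target side pairs this with sock_impl_set_options --enable-placement-id 1 and nvmf_create_transport ... --sock-priority 1 (visible in the RPCs below), so accepted connections stay aligned with the hardware channel they arrived on.
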
00:21:19.595 [2024-12-11 09:59:29.111122] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:19.595 [2024-12-11 09:59:29.111129] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:19.595 [2024-12-11 09:59:29.111135] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:19.595 [2024-12-11 09:59:29.111140] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:19.595 [2024-12-11 09:59:29.112625] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:19.595 [2024-12-11 09:59:29.112736] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:21:19.595 [2024-12-11 09:59:29.112842] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:19.595 [2024-12-11 09:59:29.112843] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:21:20.532 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:20.532 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:21:20.532 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:20.532 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:20.532 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:20.532 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:20.532 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:21:20.532 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:20.532 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:20.532 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.532 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:20.532 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.532 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:20.532 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:20.532 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.532 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:20.532 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.532 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:20.532 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.532 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:20.532 09:59:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.532 09:59:30 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:20.532 09:59:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.532 09:59:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:20.532 [2024-12-11 09:59:30.012698] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:20.532 09:59:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.532 09:59:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:20.532 09:59:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.532 09:59:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:20.532 Malloc1 00:21:20.532 09:59:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.532 09:59:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:20.532 09:59:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.532 09:59:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:20.532 09:59:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.532 09:59:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:20.532 09:59:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.532 09:59:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:20.532 09:59:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.532 09:59:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:20.532 09:59:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.532 09:59:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:20.532 [2024-12-11 09:59:30.076796] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:20.532 09:59:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.532 09:59:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=133062 00:21:20.532 09:59:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:21:20.532 09:59:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:23.064 09:59:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:21:23.064 09:59:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.064 09:59:32 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:23.064 09:59:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.064 09:59:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:21:23.064 "tick_rate": 2100000000, 00:21:23.064 "poll_groups": [ 00:21:23.064 { 00:21:23.064 "name": "nvmf_tgt_poll_group_000", 00:21:23.064 "admin_qpairs": 1, 00:21:23.064 "io_qpairs": 1, 00:21:23.064 "current_admin_qpairs": 1, 00:21:23.064 "current_io_qpairs": 1, 00:21:23.064 "pending_bdev_io": 0, 00:21:23.064 "completed_nvme_io": 27388, 00:21:23.064 "transports": [ 00:21:23.064 { 00:21:23.064 "trtype": "TCP" 00:21:23.064 } 00:21:23.064 ] 00:21:23.064 }, 00:21:23.064 { 00:21:23.064 "name": "nvmf_tgt_poll_group_001", 00:21:23.064 "admin_qpairs": 0, 00:21:23.064 "io_qpairs": 3, 00:21:23.064 "current_admin_qpairs": 0, 00:21:23.064 "current_io_qpairs": 3, 00:21:23.064 "pending_bdev_io": 0, 00:21:23.064 "completed_nvme_io": 29284, 00:21:23.064 "transports": [ 00:21:23.064 { 00:21:23.064 "trtype": "TCP" 00:21:23.064 } 00:21:23.064 ] 00:21:23.064 }, 00:21:23.064 { 00:21:23.064 "name": "nvmf_tgt_poll_group_002", 00:21:23.064 "admin_qpairs": 0, 00:21:23.064 "io_qpairs": 0, 00:21:23.064 "current_admin_qpairs": 0, 00:21:23.064 "current_io_qpairs": 0, 00:21:23.064 "pending_bdev_io": 0, 00:21:23.064 "completed_nvme_io": 0, 00:21:23.064 "transports": [ 00:21:23.064 { 00:21:23.064 "trtype": "TCP" 00:21:23.064 } 00:21:23.064 ] 00:21:23.064 }, 00:21:23.064 { 00:21:23.064 "name": "nvmf_tgt_poll_group_003", 00:21:23.064 "admin_qpairs": 0, 00:21:23.064 "io_qpairs": 0, 00:21:23.064 "current_admin_qpairs": 0, 00:21:23.064 "current_io_qpairs": 0, 00:21:23.064 "pending_bdev_io": 0, 00:21:23.064 "completed_nvme_io": 0, 00:21:23.064 "transports": [ 00:21:23.064 { 00:21:23.064 "trtype": "TCP" 00:21:23.064 } 00:21:23.064 ] 00:21:23.064 } 00:21:23.064 ] 00:21:23.064 }' 00:21:23.064 09:59:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:23.064 09:59:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:21:23.064 09:59:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:21:23.064 09:59:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:21:23.064 09:59:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 133062 00:21:31.182 Initializing NVMe Controllers 00:21:31.182 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:31.182 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:31.182 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:31.182 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:31.182 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:31.182 Initialization complete. Launching workers. 
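
While the perf workers run, note what the jq check above already verified: in the first, non-ADQ run each of the four poll groups served exactly one I/O qpair, whereas here all four I/O qpairs have collapsed onto poll groups 000 and 001 and the other two sit idle, which is the placement effect ADQ is supposed to produce. A sketch of that check, with the jq filter copied from the trace and the rpc.py invocation an assumption:

    # count poll groups with no active I/O qpairs; 'length' prints one line per
    # matched group object, so wc -l yields the number of idle groups
    idle=$(scripts/rpc.py nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
        | wc -l)
    # with traffic steered onto 2 hardware queues, at least 2 of the 4 poll
    # groups should be idle
    if [[ $idle -lt 2 ]]; then
        echo 'ADQ placement had no effect: I/O spread across all poll groups' >&2
    fi
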
00:21:31.182 ======================================================== 00:21:31.182 Latency(us) 00:21:31.182 Device Information : IOPS MiB/s Average min max 00:21:31.182 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5108.30 19.95 12535.44 1312.82 59651.98 00:21:31.182 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 15292.10 59.73 4184.51 1720.56 45070.64 00:21:31.182 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4906.00 19.16 13043.68 1461.41 57811.55 00:21:31.182 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5305.70 20.73 12071.11 1369.91 58646.79 00:21:31.182 ======================================================== 00:21:31.182 Total : 30612.10 119.58 8364.75 1312.82 59651.98 00:21:31.182 00:21:31.182 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:21:31.182 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:31.182 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:31.182 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:31.182 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:31.182 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:31.182 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:31.182 rmmod nvme_tcp 00:21:31.182 rmmod nvme_fabrics 00:21:31.182 rmmod nvme_keyring 00:21:31.182 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:31.182 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:31.182 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:31.182 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 132815 ']' 00:21:31.182 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 132815 00:21:31.182 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 132815 ']' 00:21:31.182 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 132815 00:21:31.182 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:21:31.182 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:31.182 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 132815 00:21:31.182 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:31.182 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:31.182 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 132815' 00:21:31.182 killing process with pid 132815 00:21:31.182 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 132815 00:21:31.182 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 132815 00:21:31.182 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:31.182 09:59:40 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:31.182 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:31.183 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:31.183 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:21:31.183 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:31.183 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:21:31.183 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:31.183 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:31.183 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.183 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:31.183 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:34.472 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:34.472 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:21:34.472 00:21:34.472 real 0m53.215s 00:21:34.472 user 2m49.891s 00:21:34.472 sys 0m10.885s 00:21:34.472 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:34.472 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:34.472 ************************************ 00:21:34.472 END TEST nvmf_perf_adq 00:21:34.472 ************************************ 00:21:34.472 09:59:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:34.472 09:59:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:34.472 09:59:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:34.472 09:59:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:34.472 ************************************ 00:21:34.472 START TEST nvmf_shutdown 00:21:34.472 ************************************ 00:21:34.472 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:34.472 * Looking for test storage... 
00:21:34.472 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:34.472 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:34.472 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:21:34.472 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:34.472 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:34.472 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:34.472 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:34.472 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:34.472 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:21:34.472 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:21:34.472 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:21:34.472 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:21:34.472 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:21:34.472 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:21:34.472 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:21:34.472 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:34.472 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:21:34.472 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:21:34.472 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:34.472 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:34.472 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:21:34.472 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:21:34.472 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:34.472 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:21:34.472 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:21:34.472 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:21:34.472 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:21:34.472 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:34.472 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:21:34.472 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:21:34.472 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:34.472 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:34.472 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:21:34.472 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:34.472 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:34.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.472 --rc genhtml_branch_coverage=1 00:21:34.472 --rc genhtml_function_coverage=1 00:21:34.472 --rc genhtml_legend=1 00:21:34.472 --rc geninfo_all_blocks=1 00:21:34.472 --rc geninfo_unexecuted_blocks=1 00:21:34.472 00:21:34.472 ' 00:21:34.472 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:34.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.472 --rc genhtml_branch_coverage=1 00:21:34.472 --rc genhtml_function_coverage=1 00:21:34.472 --rc genhtml_legend=1 00:21:34.472 --rc geninfo_all_blocks=1 00:21:34.472 --rc geninfo_unexecuted_blocks=1 00:21:34.472 00:21:34.472 ' 00:21:34.472 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:34.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.472 --rc genhtml_branch_coverage=1 00:21:34.472 --rc genhtml_function_coverage=1 00:21:34.472 --rc genhtml_legend=1 00:21:34.472 --rc geninfo_all_blocks=1 00:21:34.472 --rc geninfo_unexecuted_blocks=1 00:21:34.472 00:21:34.472 ' 00:21:34.472 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:34.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.472 --rc genhtml_branch_coverage=1 00:21:34.472 --rc genhtml_function_coverage=1 00:21:34.472 --rc genhtml_legend=1 00:21:34.472 --rc geninfo_all_blocks=1 00:21:34.472 --rc geninfo_unexecuted_blocks=1 00:21:34.472 00:21:34.472 ' 00:21:34.473 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:34.473 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
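The lt/cmp_versions trace just above is how the harness decides whether the installed lcov predates 2.x before choosing coverage flags: both version strings are split on '.', '-' and ':' (the IFS=.-: reads) and compared one numeric component at a time, each component first validated by the decimal helper. A self-contained sketch of the same component-wise test, assuming purely numeric components (the real cmp_versions also dispatches on the other operators through the case "$op" seen at scripts/common.sh@344):

# Minimal sketch of the version comparison traced above; ver_lt succeeds
# when $1 is strictly older than $2. Missing components count as 0.
ver_lt() {
    local IFS='.-:'                  # same separators as the IFS=.-: reads
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0    # strictly older
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1    # strictly newer
    done
    return 1                         # equal versions are not "less than"
}
ver_lt 1.15 2 && echo 'lcov is older than 2'   # 1 < 2 on the first component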
00:21:34.473 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:34.473 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:34.473 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:34.473 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:34.473 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:34.473 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:34.473 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:34.473 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:34.473 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:34.473 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:34.473 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:34.473 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:21:34.473 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:34.473 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:34.473 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:34.473 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:34.473 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:34.473 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:21:34.473 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:34.473 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:34.473 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:34.473 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.473 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.473 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.473 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:34.473 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.473 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:21:34.473 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:34.473 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:34.473 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:34.473 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:34.473 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:34.473 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:34.473 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:34.473 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:34.473 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:34.473 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:34.473 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:34.473 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:34.473 09:59:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:34.473 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:34.473 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:34.473 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:34.473 ************************************ 00:21:34.473 START TEST nvmf_shutdown_tc1 00:21:34.473 ************************************ 00:21:34.473 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:21:34.473 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:21:34.473 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:34.473 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:34.473 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:34.473 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:34.473 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:34.473 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:34.473 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:34.473 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:34.473 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:34.473 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:34.473 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:34.473 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:34.473 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:41.045 09:59:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:41.045 09:59:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:41.045 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:41.045 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:41.045 Found net devices under 0000:af:00.0: cvl_0_0 00:21:41.045 09:59:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:41.045 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:41.046 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:41.046 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:41.046 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:41.046 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:41.046 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:41.046 Found net devices under 0000:af:00.1: cvl_0_1 00:21:41.046 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:41.046 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:41.046 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:41.046 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:41.046 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:41.046 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:41.046 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:41.046 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:41.046 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:41.046 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:41.046 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:41.046 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:41.046 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:41.046 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:41.046 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:41.046 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:41.046 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:21:41.046 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:41.046 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:41.046 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:41.046 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:41.046 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:41.046 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:41.305 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:41.305 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:41.305 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:41.305 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:41.305 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:41.305 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:41.305 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:41.305 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:21:41.305 00:21:41.305 --- 10.0.0.2 ping statistics --- 00:21:41.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.305 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:21:41.305 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:41.305 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:41.305 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:21:41.305 00:21:41.305 --- 10.0.0.1 ping statistics --- 00:21:41.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.305 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:21:41.305 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:41.305 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:21:41.305 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:41.305 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:41.305 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:41.305 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:41.305 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:41.305 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:41.305 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:41.305 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:41.305 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:41.305 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:41.305 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:41.305 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=138754 00:21:41.305 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:41.305 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 138754 00:21:41.305 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 138754 ']' 00:21:41.306 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:41.306 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:41.306 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:41.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
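Everything nvmf_tcp_init did above reduces to a small two-namespace topology: the first e810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as the target side (10.0.0.2), its sibling port (cvl_0_1) stays in the root namespace as the initiator (10.0.0.1), a tagged iptables rule opens TCP/4420, and both directions are verified with ping. Condensed from the trace (the cvl_* interface names are whatever this rig detected and will differ elsewhere):

ip netns add cvl_0_0_ns_spdk                          # target gets its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target port out of the root ns
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address, root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                    # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target ns -> root ns

The SPDK_NVMF comment is what let the earlier teardown's iptables-save | grep -v SPDK_NVMF | iptables-restore round trip (nvmf/common.sh@791) strip exactly these rules and nothing else. The target is then started inside the namespace with -m 0x1E, binary 11110, i.e. bits 1 through 4 set, which is why the four reactor notices that follow land on cores 1-4.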
00:21:41.306 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:41.306 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:41.306 [2024-12-11 09:59:50.823284] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:21:41.306 [2024-12-11 09:59:50.823329] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:41.564 [2024-12-11 09:59:50.890636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:41.564 [2024-12-11 09:59:50.931908] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:41.564 [2024-12-11 09:59:50.931947] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:41.564 [2024-12-11 09:59:50.931954] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:41.564 [2024-12-11 09:59:50.931959] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:41.564 [2024-12-11 09:59:50.931965] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:41.564 [2024-12-11 09:59:50.933436] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:21:41.564 [2024-12-11 09:59:50.933543] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:21:41.564 [2024-12-11 09:59:50.933649] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:41.564 [2024-12-11 09:59:50.933650] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:21:41.564 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:41.564 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:21:41.564 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:41.564 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:41.564 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:41.564 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:41.564 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:41.564 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.564 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:41.564 [2024-12-11 09:59:51.071072] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:41.564 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.564 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:41.564 09:59:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:41.564 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:41.564 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:41.564 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:41.564 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:41.564 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:41.564 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:41.564 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:41.564 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:41.564 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:41.564 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:41.564 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:41.564 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:41.564 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:41.564 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:41.564 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:41.564 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:41.564 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:41.564 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:41.564 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:41.564 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:41.564 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:41.564 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:41.564 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:41.564 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:41.564 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.564 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:41.823 Malloc1 
00:21:41.823 [2024-12-11 09:59:51.185160] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:41.823 Malloc2 00:21:41.823 Malloc3 00:21:41.823 Malloc4 00:21:41.823 Malloc5 00:21:41.823 Malloc6 00:21:42.081 Malloc7 00:21:42.081 Malloc8 00:21:42.081 Malloc9 00:21:42.081 Malloc10 00:21:42.081 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.081 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:42.081 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:42.081 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:42.081 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=139021 00:21:42.081 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 139021 /var/tmp/bdevperf.sock 00:21:42.081 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 139021 ']' 00:21:42.081 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:42.081 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:42.081 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:42.081 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:42.081 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:42.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:42.081 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:21:42.081 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:42.081 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:21:42.081 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:42.081 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:42.081 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:42.081 { 00:21:42.081 "params": { 00:21:42.081 "name": "Nvme$subsystem", 00:21:42.081 "trtype": "$TEST_TRANSPORT", 00:21:42.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:42.081 "adrfam": "ipv4", 00:21:42.081 "trsvcid": "$NVMF_PORT", 00:21:42.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:42.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:42.081 "hdgst": ${hdgst:-false}, 00:21:42.081 "ddgst": ${ddgst:-false} 00:21:42.081 }, 00:21:42.081 "method": "bdev_nvme_attach_controller" 00:21:42.081 } 00:21:42.081 EOF 00:21:42.081 )") 00:21:42.081 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:42.081 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:42.081 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:42.081 { 00:21:42.081 "params": { 00:21:42.081 "name": "Nvme$subsystem", 00:21:42.081 "trtype": "$TEST_TRANSPORT", 00:21:42.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:42.081 "adrfam": "ipv4", 00:21:42.081 "trsvcid": "$NVMF_PORT", 00:21:42.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:42.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:42.081 "hdgst": ${hdgst:-false}, 00:21:42.081 "ddgst": ${ddgst:-false} 00:21:42.081 }, 00:21:42.081 "method": "bdev_nvme_attach_controller" 00:21:42.081 } 00:21:42.081 EOF 00:21:42.081 )") 00:21:42.081 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:42.082 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:42.082 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:42.082 { 00:21:42.082 "params": { 00:21:42.082 "name": "Nvme$subsystem", 00:21:42.082 "trtype": "$TEST_TRANSPORT", 00:21:42.082 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:42.082 "adrfam": "ipv4", 00:21:42.082 "trsvcid": "$NVMF_PORT", 00:21:42.082 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:42.082 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:42.082 "hdgst": ${hdgst:-false}, 00:21:42.082 "ddgst": ${ddgst:-false} 00:21:42.082 }, 00:21:42.082 "method": "bdev_nvme_attach_controller" 00:21:42.082 } 00:21:42.082 EOF 00:21:42.082 )") 00:21:42.082 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:42.082 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:42.082 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- 
# config+=("$(cat <<-EOF 00:21:42.082 { 00:21:42.082 "params": { 00:21:42.082 "name": "Nvme$subsystem", 00:21:42.082 "trtype": "$TEST_TRANSPORT", 00:21:42.082 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:42.082 "adrfam": "ipv4", 00:21:42.082 "trsvcid": "$NVMF_PORT", 00:21:42.082 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:42.082 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:42.082 "hdgst": ${hdgst:-false}, 00:21:42.082 "ddgst": ${ddgst:-false} 00:21:42.082 }, 00:21:42.082 "method": "bdev_nvme_attach_controller" 00:21:42.082 } 00:21:42.082 EOF 00:21:42.082 )") 00:21:42.082 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:42.082 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:42.082 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:42.082 { 00:21:42.082 "params": { 00:21:42.082 "name": "Nvme$subsystem", 00:21:42.082 "trtype": "$TEST_TRANSPORT", 00:21:42.082 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:42.082 "adrfam": "ipv4", 00:21:42.082 "trsvcid": "$NVMF_PORT", 00:21:42.082 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:42.082 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:42.082 "hdgst": ${hdgst:-false}, 00:21:42.082 "ddgst": ${ddgst:-false} 00:21:42.082 }, 00:21:42.082 "method": "bdev_nvme_attach_controller" 00:21:42.082 } 00:21:42.082 EOF 00:21:42.082 )") 00:21:42.082 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:42.340 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:42.340 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:42.340 { 00:21:42.340 "params": { 00:21:42.340 "name": "Nvme$subsystem", 00:21:42.340 "trtype": "$TEST_TRANSPORT", 00:21:42.340 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:42.340 "adrfam": "ipv4", 00:21:42.340 "trsvcid": "$NVMF_PORT", 00:21:42.340 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:42.340 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:42.340 "hdgst": ${hdgst:-false}, 00:21:42.340 "ddgst": ${ddgst:-false} 00:21:42.340 }, 00:21:42.340 "method": "bdev_nvme_attach_controller" 00:21:42.340 } 00:21:42.340 EOF 00:21:42.340 )") 00:21:42.340 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:42.340 [2024-12-11 09:59:51.664802] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
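Each config+=() block above captures one heredoc fragment describing a bdev_nvme_attach_controller request, with the trailing cat re-emitting it through the $( ) substitution; after the loop, gen_nvmf_target_json sets IFS=',' and printf-joins the fragments into the single comma-separated document that appears a little further down (common.sh@584-586). A minimal sketch of that shell idiom, with the per-subsystem variable substitution and the jq normalization pass simplified away:

# Sketch of the fragment-join idiom used by gen_nvmf_target_json above.
config=()
for i in 1 2 3; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$i",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$i",
    "hostnqn": "nqn.2016-06.io.spdk:host$i",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done
IFS=','                       # "${config[*]}" joins elements on the first IFS char
printf '%s\n' "${config[*]}"  # -> one comma-separated list, as printed below

Changing IFS only matters for the "${config[*]}" expansion, and since the helper runs inside the process substitution feeding --json /dev/fd/63, the modified IFS never leaks back into the test script.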
00:21:42.340 [2024-12-11 09:59:51.664850] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:42.340 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:42.340 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:42.340 { 00:21:42.340 "params": { 00:21:42.340 "name": "Nvme$subsystem", 00:21:42.340 "trtype": "$TEST_TRANSPORT", 00:21:42.340 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:42.340 "adrfam": "ipv4", 00:21:42.340 "trsvcid": "$NVMF_PORT", 00:21:42.340 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:42.340 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:42.340 "hdgst": ${hdgst:-false}, 00:21:42.340 "ddgst": ${ddgst:-false} 00:21:42.340 }, 00:21:42.340 "method": "bdev_nvme_attach_controller" 00:21:42.340 } 00:21:42.340 EOF 00:21:42.340 )") 00:21:42.340 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:42.340 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:42.340 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:42.340 { 00:21:42.340 "params": { 00:21:42.340 "name": "Nvme$subsystem", 00:21:42.340 "trtype": "$TEST_TRANSPORT", 00:21:42.340 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:42.340 "adrfam": "ipv4", 00:21:42.340 "trsvcid": "$NVMF_PORT", 00:21:42.340 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:42.340 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:42.340 "hdgst": ${hdgst:-false}, 00:21:42.340 "ddgst": ${ddgst:-false} 00:21:42.340 }, 00:21:42.340 "method": "bdev_nvme_attach_controller" 00:21:42.340 } 00:21:42.340 EOF 00:21:42.340 )") 00:21:42.340 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:42.340 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:42.340 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:42.340 { 00:21:42.340 "params": { 00:21:42.340 "name": "Nvme$subsystem", 00:21:42.340 "trtype": "$TEST_TRANSPORT", 00:21:42.340 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:42.340 "adrfam": "ipv4", 00:21:42.340 "trsvcid": "$NVMF_PORT", 00:21:42.340 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:42.340 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:42.340 "hdgst": ${hdgst:-false}, 00:21:42.340 "ddgst": ${ddgst:-false} 00:21:42.340 }, 00:21:42.340 "method": "bdev_nvme_attach_controller" 00:21:42.340 } 00:21:42.340 EOF 00:21:42.340 )") 00:21:42.340 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:42.340 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:42.340 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:42.340 { 00:21:42.340 "params": { 00:21:42.340 "name": "Nvme$subsystem", 00:21:42.340 "trtype": "$TEST_TRANSPORT", 00:21:42.340 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:42.340 "adrfam": "ipv4", 
00:21:42.340 "trsvcid": "$NVMF_PORT", 00:21:42.340 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:42.340 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:42.340 "hdgst": ${hdgst:-false}, 00:21:42.340 "ddgst": ${ddgst:-false} 00:21:42.340 }, 00:21:42.340 "method": "bdev_nvme_attach_controller" 00:21:42.340 } 00:21:42.340 EOF 00:21:42.340 )") 00:21:42.341 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:42.341 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:21:42.341 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:21:42.341 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:42.341 "params": { 00:21:42.341 "name": "Nvme1", 00:21:42.341 "trtype": "tcp", 00:21:42.341 "traddr": "10.0.0.2", 00:21:42.341 "adrfam": "ipv4", 00:21:42.341 "trsvcid": "4420", 00:21:42.341 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:42.341 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:42.341 "hdgst": false, 00:21:42.341 "ddgst": false 00:21:42.341 }, 00:21:42.341 "method": "bdev_nvme_attach_controller" 00:21:42.341 },{ 00:21:42.341 "params": { 00:21:42.341 "name": "Nvme2", 00:21:42.341 "trtype": "tcp", 00:21:42.341 "traddr": "10.0.0.2", 00:21:42.341 "adrfam": "ipv4", 00:21:42.341 "trsvcid": "4420", 00:21:42.341 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:42.341 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:42.341 "hdgst": false, 00:21:42.341 "ddgst": false 00:21:42.341 }, 00:21:42.341 "method": "bdev_nvme_attach_controller" 00:21:42.341 },{ 00:21:42.341 "params": { 00:21:42.341 "name": "Nvme3", 00:21:42.341 "trtype": "tcp", 00:21:42.341 "traddr": "10.0.0.2", 00:21:42.341 "adrfam": "ipv4", 00:21:42.341 "trsvcid": "4420", 00:21:42.341 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:42.341 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:42.341 "hdgst": false, 00:21:42.341 "ddgst": false 00:21:42.341 }, 00:21:42.341 "method": "bdev_nvme_attach_controller" 00:21:42.341 },{ 00:21:42.341 "params": { 00:21:42.341 "name": "Nvme4", 00:21:42.341 "trtype": "tcp", 00:21:42.341 "traddr": "10.0.0.2", 00:21:42.341 "adrfam": "ipv4", 00:21:42.341 "trsvcid": "4420", 00:21:42.341 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:42.341 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:42.341 "hdgst": false, 00:21:42.341 "ddgst": false 00:21:42.341 }, 00:21:42.341 "method": "bdev_nvme_attach_controller" 00:21:42.341 },{ 00:21:42.341 "params": { 00:21:42.341 "name": "Nvme5", 00:21:42.341 "trtype": "tcp", 00:21:42.341 "traddr": "10.0.0.2", 00:21:42.341 "adrfam": "ipv4", 00:21:42.341 "trsvcid": "4420", 00:21:42.341 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:42.341 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:42.341 "hdgst": false, 00:21:42.341 "ddgst": false 00:21:42.341 }, 00:21:42.341 "method": "bdev_nvme_attach_controller" 00:21:42.341 },{ 00:21:42.341 "params": { 00:21:42.341 "name": "Nvme6", 00:21:42.341 "trtype": "tcp", 00:21:42.341 "traddr": "10.0.0.2", 00:21:42.341 "adrfam": "ipv4", 00:21:42.341 "trsvcid": "4420", 00:21:42.341 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:42.341 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:42.341 "hdgst": false, 00:21:42.341 "ddgst": false 00:21:42.341 }, 00:21:42.341 "method": "bdev_nvme_attach_controller" 00:21:42.341 },{ 00:21:42.341 "params": { 00:21:42.341 "name": "Nvme7", 00:21:42.341 "trtype": "tcp", 00:21:42.341 "traddr": "10.0.0.2", 00:21:42.341 
"adrfam": "ipv4", 00:21:42.341 "trsvcid": "4420", 00:21:42.341 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:42.341 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:42.341 "hdgst": false, 00:21:42.341 "ddgst": false 00:21:42.341 }, 00:21:42.341 "method": "bdev_nvme_attach_controller" 00:21:42.341 },{ 00:21:42.341 "params": { 00:21:42.341 "name": "Nvme8", 00:21:42.341 "trtype": "tcp", 00:21:42.341 "traddr": "10.0.0.2", 00:21:42.341 "adrfam": "ipv4", 00:21:42.341 "trsvcid": "4420", 00:21:42.341 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:42.341 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:42.341 "hdgst": false, 00:21:42.341 "ddgst": false 00:21:42.341 }, 00:21:42.341 "method": "bdev_nvme_attach_controller" 00:21:42.341 },{ 00:21:42.341 "params": { 00:21:42.341 "name": "Nvme9", 00:21:42.341 "trtype": "tcp", 00:21:42.341 "traddr": "10.0.0.2", 00:21:42.341 "adrfam": "ipv4", 00:21:42.341 "trsvcid": "4420", 00:21:42.341 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:42.341 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:42.341 "hdgst": false, 00:21:42.341 "ddgst": false 00:21:42.341 }, 00:21:42.341 "method": "bdev_nvme_attach_controller" 00:21:42.341 },{ 00:21:42.341 "params": { 00:21:42.341 "name": "Nvme10", 00:21:42.341 "trtype": "tcp", 00:21:42.341 "traddr": "10.0.0.2", 00:21:42.341 "adrfam": "ipv4", 00:21:42.341 "trsvcid": "4420", 00:21:42.341 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:42.341 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:42.341 "hdgst": false, 00:21:42.341 "ddgst": false 00:21:42.341 }, 00:21:42.341 "method": "bdev_nvme_attach_controller" 00:21:42.341 }' 00:21:42.341 [2024-12-11 09:59:51.746740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.341 [2024-12-11 09:59:51.786222] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:43.713 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:43.713 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:21:43.714 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:43.714 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.714 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:43.714 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.714 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 139021 00:21:43.714 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:21:43.714 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:21:44.647 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 139021 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:21:44.647 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 138754 00:21:44.647 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:44.647 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:44.647 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:21:44.648 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:21:44.648 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:44.648 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:44.648 { 00:21:44.648 "params": { 00:21:44.648 "name": "Nvme$subsystem", 00:21:44.648 "trtype": "$TEST_TRANSPORT", 00:21:44.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:44.648 "adrfam": "ipv4", 00:21:44.648 "trsvcid": "$NVMF_PORT", 00:21:44.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:44.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:44.648 "hdgst": ${hdgst:-false}, 00:21:44.648 "ddgst": ${ddgst:-false} 00:21:44.648 }, 00:21:44.648 "method": "bdev_nvme_attach_controller" 00:21:44.648 } 00:21:44.648 EOF 00:21:44.648 )") 00:21:44.648 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:44.648 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:44.648 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:44.648 { 00:21:44.648 "params": { 00:21:44.648 "name": "Nvme$subsystem", 00:21:44.648 "trtype": "$TEST_TRANSPORT", 00:21:44.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:44.648 "adrfam": "ipv4", 00:21:44.648 "trsvcid": "$NVMF_PORT", 00:21:44.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:44.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:44.648 "hdgst": ${hdgst:-false}, 00:21:44.648 "ddgst": ${ddgst:-false} 00:21:44.648 }, 00:21:44.648 "method": "bdev_nvme_attach_controller" 00:21:44.648 } 00:21:44.648 EOF 00:21:44.648 )") 00:21:44.648 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:44.648 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:44.648 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:44.648 { 00:21:44.648 "params": { 00:21:44.648 "name": "Nvme$subsystem", 00:21:44.648 "trtype": "$TEST_TRANSPORT", 00:21:44.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:44.648 "adrfam": "ipv4", 00:21:44.648 "trsvcid": "$NVMF_PORT", 00:21:44.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:44.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:44.648 "hdgst": ${hdgst:-false}, 00:21:44.648 "ddgst": ${ddgst:-false} 00:21:44.648 }, 00:21:44.648 "method": "bdev_nvme_attach_controller" 00:21:44.648 } 00:21:44.648 EOF 00:21:44.648 )") 00:21:44.648 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:44.648 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:44.648 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:44.648 { 00:21:44.648 "params": { 00:21:44.648 "name": "Nvme$subsystem", 00:21:44.648 "trtype": "$TEST_TRANSPORT", 00:21:44.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:44.648 "adrfam": "ipv4", 00:21:44.648 "trsvcid": "$NVMF_PORT", 00:21:44.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:44.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:44.648 "hdgst": ${hdgst:-false}, 00:21:44.648 "ddgst": ${ddgst:-false} 00:21:44.648 }, 00:21:44.648 "method": "bdev_nvme_attach_controller" 00:21:44.648 } 00:21:44.648 EOF 00:21:44.648 )") 00:21:44.648 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:44.648 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:44.648 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:44.648 { 00:21:44.648 "params": { 00:21:44.648 "name": "Nvme$subsystem", 00:21:44.648 "trtype": "$TEST_TRANSPORT", 00:21:44.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:44.648 "adrfam": "ipv4", 00:21:44.648 "trsvcid": "$NVMF_PORT", 00:21:44.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:44.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:44.648 "hdgst": ${hdgst:-false}, 00:21:44.648 "ddgst": ${ddgst:-false} 00:21:44.648 }, 00:21:44.648 "method": "bdev_nvme_attach_controller" 00:21:44.648 } 00:21:44.648 EOF 00:21:44.648 )") 00:21:44.648 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:44.906 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:44.906 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:44.906 { 00:21:44.906 "params": { 00:21:44.906 "name": "Nvme$subsystem", 00:21:44.906 "trtype": "$TEST_TRANSPORT", 00:21:44.906 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:44.906 "adrfam": "ipv4", 00:21:44.906 "trsvcid": "$NVMF_PORT", 00:21:44.906 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:44.906 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:44.906 "hdgst": ${hdgst:-false}, 00:21:44.906 "ddgst": ${ddgst:-false} 00:21:44.906 }, 00:21:44.906 "method": "bdev_nvme_attach_controller" 00:21:44.906 } 00:21:44.906 EOF 00:21:44.906 )") 00:21:44.906 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:44.906 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:44.906 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:44.906 { 00:21:44.907 "params": { 00:21:44.907 "name": "Nvme$subsystem", 00:21:44.907 "trtype": "$TEST_TRANSPORT", 00:21:44.907 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:44.907 "adrfam": "ipv4", 00:21:44.907 "trsvcid": "$NVMF_PORT", 00:21:44.907 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:44.907 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:44.907 "hdgst": ${hdgst:-false}, 00:21:44.907 "ddgst": ${ddgst:-false} 00:21:44.907 }, 00:21:44.907 "method": "bdev_nvme_attach_controller" 00:21:44.907 } 00:21:44.907 EOF 00:21:44.907 )") 00:21:44.907 [2024-12-11 09:59:54.231547] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
00:21:44.907 [2024-12-11 09:59:54.231598] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139498 ] 00:21:44.907 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:44.907 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:44.907 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:44.907 { 00:21:44.907 "params": { 00:21:44.907 "name": "Nvme$subsystem", 00:21:44.907 "trtype": "$TEST_TRANSPORT", 00:21:44.907 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:44.907 "adrfam": "ipv4", 00:21:44.907 "trsvcid": "$NVMF_PORT", 00:21:44.907 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:44.907 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:44.907 "hdgst": ${hdgst:-false}, 00:21:44.907 "ddgst": ${ddgst:-false} 00:21:44.907 }, 00:21:44.907 "method": "bdev_nvme_attach_controller" 00:21:44.907 } 00:21:44.907 EOF 00:21:44.907 )") 00:21:44.907 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:44.907 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:44.907 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:44.907 { 00:21:44.907 "params": { 00:21:44.907 "name": "Nvme$subsystem", 00:21:44.907 "trtype": "$TEST_TRANSPORT", 00:21:44.907 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:44.907 "adrfam": "ipv4", 00:21:44.907 "trsvcid": "$NVMF_PORT", 00:21:44.907 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:44.907 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:44.907 "hdgst": ${hdgst:-false}, 00:21:44.907 "ddgst": ${ddgst:-false} 00:21:44.907 }, 00:21:44.907 "method": "bdev_nvme_attach_controller" 00:21:44.907 } 00:21:44.907 EOF 00:21:44.907 )") 00:21:44.907 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:44.907 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:44.907 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:44.907 { 00:21:44.907 "params": { 00:21:44.907 "name": "Nvme$subsystem", 00:21:44.907 "trtype": "$TEST_TRANSPORT", 00:21:44.907 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:44.907 "adrfam": "ipv4", 00:21:44.907 "trsvcid": "$NVMF_PORT", 00:21:44.907 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:44.907 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:44.907 "hdgst": ${hdgst:-false}, 00:21:44.907 "ddgst": ${ddgst:-false} 00:21:44.907 }, 00:21:44.907 "method": "bdev_nvme_attach_controller" 00:21:44.907 } 00:21:44.907 EOF 00:21:44.907 )") 00:21:44.907 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:44.907 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
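The trace above is the gen_nvmf_target_json helper assembling the --json input for bdevperf: one heredoc fragment per subsystem is appended to a bash array (nvmf/common.sh@582), the fragments are joined on ',' (@585-@586), and the result is run through jq (@584). A minimal sketch of that pattern, reconstructed from the traced lines only -- the [ ... ] wrapper around the joined fragments is added here so jq receives a single valid JSON document and is not part of the traced output, and only the fields visible in the trace are kept:

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # one JSON fragment per subsystem, as traced at nvmf/common.sh@582
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # join the fragments with ',' (first char of IFS) and pretty-print via jq
    local IFS=,
    jq . <<< "[ ${config[*]} ]"
}

Called as gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 with TEST_TRANSPORT=tcp, NVMF_FIRST_TARGET_IP=10.0.0.2 and NVMF_PORT=4420, this yields the ten bdev_nvme_attach_controller entries that appear fully expanded in the printf output below.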
00:21:44.907 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:21:44.907 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:44.907 "params": { 00:21:44.907 "name": "Nvme1", 00:21:44.907 "trtype": "tcp", 00:21:44.907 "traddr": "10.0.0.2", 00:21:44.907 "adrfam": "ipv4", 00:21:44.907 "trsvcid": "4420", 00:21:44.907 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:44.907 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:44.907 "hdgst": false, 00:21:44.907 "ddgst": false 00:21:44.907 }, 00:21:44.907 "method": "bdev_nvme_attach_controller" 00:21:44.907 },{ 00:21:44.907 "params": { 00:21:44.907 "name": "Nvme2", 00:21:44.907 "trtype": "tcp", 00:21:44.907 "traddr": "10.0.0.2", 00:21:44.907 "adrfam": "ipv4", 00:21:44.907 "trsvcid": "4420", 00:21:44.907 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:44.907 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:44.907 "hdgst": false, 00:21:44.907 "ddgst": false 00:21:44.907 }, 00:21:44.907 "method": "bdev_nvme_attach_controller" 00:21:44.907 },{ 00:21:44.907 "params": { 00:21:44.907 "name": "Nvme3", 00:21:44.907 "trtype": "tcp", 00:21:44.907 "traddr": "10.0.0.2", 00:21:44.907 "adrfam": "ipv4", 00:21:44.907 "trsvcid": "4420", 00:21:44.907 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:44.907 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:44.907 "hdgst": false, 00:21:44.907 "ddgst": false 00:21:44.907 }, 00:21:44.907 "method": "bdev_nvme_attach_controller" 00:21:44.907 },{ 00:21:44.907 "params": { 00:21:44.907 "name": "Nvme4", 00:21:44.907 "trtype": "tcp", 00:21:44.907 "traddr": "10.0.0.2", 00:21:44.907 "adrfam": "ipv4", 00:21:44.907 "trsvcid": "4420", 00:21:44.907 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:44.907 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:44.907 "hdgst": false, 00:21:44.907 "ddgst": false 00:21:44.907 }, 00:21:44.907 "method": "bdev_nvme_attach_controller" 00:21:44.907 },{ 00:21:44.907 "params": { 00:21:44.907 "name": "Nvme5", 00:21:44.907 "trtype": "tcp", 00:21:44.907 "traddr": "10.0.0.2", 00:21:44.907 "adrfam": "ipv4", 00:21:44.907 "trsvcid": "4420", 00:21:44.907 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:44.907 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:44.907 "hdgst": false, 00:21:44.907 "ddgst": false 00:21:44.907 }, 00:21:44.907 "method": "bdev_nvme_attach_controller" 00:21:44.907 },{ 00:21:44.907 "params": { 00:21:44.907 "name": "Nvme6", 00:21:44.907 "trtype": "tcp", 00:21:44.907 "traddr": "10.0.0.2", 00:21:44.907 "adrfam": "ipv4", 00:21:44.907 "trsvcid": "4420", 00:21:44.907 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:44.907 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:44.907 "hdgst": false, 00:21:44.907 "ddgst": false 00:21:44.907 }, 00:21:44.907 "method": "bdev_nvme_attach_controller" 00:21:44.907 },{ 00:21:44.907 "params": { 00:21:44.907 "name": "Nvme7", 00:21:44.907 "trtype": "tcp", 00:21:44.907 "traddr": "10.0.0.2", 00:21:44.907 "adrfam": "ipv4", 00:21:44.907 "trsvcid": "4420", 00:21:44.907 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:44.907 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:44.907 "hdgst": false, 00:21:44.907 "ddgst": false 00:21:44.907 }, 00:21:44.907 "method": "bdev_nvme_attach_controller" 00:21:44.907 },{ 00:21:44.907 "params": { 00:21:44.907 "name": "Nvme8", 00:21:44.907 "trtype": "tcp", 00:21:44.907 "traddr": "10.0.0.2", 00:21:44.907 "adrfam": "ipv4", 00:21:44.907 "trsvcid": "4420", 00:21:44.907 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:44.907 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:21:44.907 "hdgst": false, 00:21:44.907 "ddgst": false 00:21:44.907 }, 00:21:44.907 "method": "bdev_nvme_attach_controller" 00:21:44.907 },{ 00:21:44.907 "params": { 00:21:44.907 "name": "Nvme9", 00:21:44.907 "trtype": "tcp", 00:21:44.907 "traddr": "10.0.0.2", 00:21:44.907 "adrfam": "ipv4", 00:21:44.907 "trsvcid": "4420", 00:21:44.907 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:44.907 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:44.907 "hdgst": false, 00:21:44.907 "ddgst": false 00:21:44.907 }, 00:21:44.907 "method": "bdev_nvme_attach_controller" 00:21:44.907 },{ 00:21:44.907 "params": { 00:21:44.907 "name": "Nvme10", 00:21:44.907 "trtype": "tcp", 00:21:44.907 "traddr": "10.0.0.2", 00:21:44.907 "adrfam": "ipv4", 00:21:44.907 "trsvcid": "4420", 00:21:44.907 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:44.907 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:44.907 "hdgst": false, 00:21:44.907 "ddgst": false 00:21:44.907 }, 00:21:44.907 "method": "bdev_nvme_attach_controller" 00:21:44.907 }' 00:21:44.907 [2024-12-11 09:59:54.314982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.907 [2024-12-11 09:59:54.355771] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:46.280 Running I/O for 1 seconds... 00:21:47.214 2261.00 IOPS, 141.31 MiB/s 00:21:47.214 Latency(us) 00:21:47.214 [2024-12-11T08:59:56.789Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:47.214 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:47.214 Verification LBA range: start 0x0 length 0x400 00:21:47.214 Nvme1n1 : 1.16 276.69 17.29 0.00 0.00 229280.04 17725.93 211712.49 00:21:47.214 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:47.214 Verification LBA range: start 0x0 length 0x400 00:21:47.214 Nvme2n1 : 1.16 275.66 17.23 0.00 0.00 227056.98 15853.47 216705.71 00:21:47.214 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:47.214 Verification LBA range: start 0x0 length 0x400 00:21:47.214 Nvme3n1 : 1.11 296.87 18.55 0.00 0.00 202693.96 7396.21 201726.05 00:21:47.214 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:47.214 Verification LBA range: start 0x0 length 0x400 00:21:47.214 Nvme4n1 : 1.15 279.47 17.47 0.00 0.00 217604.00 12982.37 218702.99 00:21:47.214 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:47.214 Verification LBA range: start 0x0 length 0x400 00:21:47.214 Nvme5n1 : 1.17 274.50 17.16 0.00 0.00 218349.37 18474.91 224694.86 00:21:47.214 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:47.214 Verification LBA range: start 0x0 length 0x400 00:21:47.214 Nvme6n1 : 1.17 273.64 17.10 0.00 0.00 216272.70 19223.89 211712.49 00:21:47.214 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:47.214 Verification LBA range: start 0x0 length 0x400 00:21:47.214 Nvme7n1 : 1.15 281.30 17.58 0.00 0.00 206736.56 3448.44 211712.49 00:21:47.214 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:47.214 Verification LBA range: start 0x0 length 0x400 00:21:47.214 Nvme8n1 : 1.16 274.70 17.17 0.00 0.00 209298.77 13731.35 223696.21 00:21:47.214 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:47.214 Verification LBA range: start 0x0 length 0x400 00:21:47.214 Nvme9n1 : 1.17 278.17 17.39 0.00 0.00 203275.42 2605.84 215707.06 00:21:47.214 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:21:47.214 Verification LBA range: start 0x0 length 0x400 00:21:47.214 Nvme10n1 : 1.18 272.12 17.01 0.00 0.00 205439.17 15853.47 229688.08 00:21:47.214 [2024-12-11T08:59:56.789Z] =================================================================================================================== 00:21:47.214 [2024-12-11T08:59:56.789Z] Total : 2783.11 173.94 0.00 0.00 213535.69 2605.84 229688.08 00:21:47.473 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:21:47.473 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:47.473 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:47.473 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:47.473 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:47.473 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:47.473 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:21:47.473 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:47.473 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:21:47.473 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:47.473 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:47.473 rmmod nvme_tcp 00:21:47.473 rmmod nvme_fabrics 00:21:47.473 rmmod nvme_keyring 00:21:47.473 09:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:47.473 09:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:21:47.473 09:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:21:47.473 09:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 138754 ']' 00:21:47.473 09:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 138754 00:21:47.473 09:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 138754 ']' 00:21:47.473 09:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 138754 00:21:47.473 09:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:21:47.473 09:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:47.473 09:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 138754 00:21:47.731 09:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:47.731 09:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:47.731 09:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 138754' 00:21:47.731 killing process with pid 138754 00:21:47.731 09:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 138754 00:21:47.731 09:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 138754 00:21:47.990 09:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:47.990 09:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:47.990 09:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:47.990 09:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:21:47.990 09:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:21:47.990 09:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:47.990 09:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:21:47.990 09:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:47.990 09:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:47.990 09:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.990 09:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:47.991 09:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:50.527 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:50.527 00:21:50.527 real 0m15.523s 00:21:50.527 user 0m31.601s 00:21:50.527 sys 0m6.314s 00:21:50.527 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:50.527 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:50.527 ************************************ 00:21:50.527 END TEST nvmf_shutdown_tc1 00:21:50.527 ************************************ 00:21:50.527 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:50.527 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:50.527 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:50.527 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:50.527 ************************************ 00:21:50.527 START TEST nvmf_shutdown_tc2 00:21:50.527 ************************************ 00:21:50.527 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:21:50.527 09:59:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:21:50.527 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:50.527 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:50.527 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:50.527 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:50.527 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:50.527 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:50.527 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:50.527 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:50.527 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:50.527 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:50.527 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:50.527 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:50.527 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:50.527 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:50.527 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:50.527 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:50.527 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:50.527 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:50.527 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:50.527 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:50.527 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:21:50.527 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:50.527 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:21:50.527 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:21:50.527 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:21:50.527 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:21:50.527 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 
-- # mlx=() 00:21:50.527 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:50.527 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:50.527 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:50.527 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:50.527 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:50.527 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:50.527 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:50.527 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:50.527 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:50.527 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:50.527 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:50.527 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:50.527 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:50.527 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:50.527 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:50.527 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:50.527 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:50.527 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:50.527 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:50.527 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:50.528 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:50.528 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:50.528 Found net devices under 0000:af:00.0: cvl_0_0 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:50.528 09:59:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:50.528 Found net devices under 0000:af:00.1: cvl_0_1 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:50.528 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:50.528 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:21:50.528 00:21:50.528 --- 10.0.0.2 ping statistics --- 00:21:50.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:50.528 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:50.528 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:50.528 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:21:50.528 00:21:50.528 --- 10.0.0.1 ping statistics --- 00:21:50.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:50.528 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:50.528 09:59:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=140517 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 140517 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 140517 ']' 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:50.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:50.528 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:50.528 [2024-12-11 09:59:59.974009] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:21:50.528 [2024-12-11 09:59:59.974050] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:50.529 [2024-12-11 10:00:00.057148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:50.787 [2024-12-11 10:00:00.102772] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:50.787 [2024-12-11 10:00:00.102809] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:50.787 [2024-12-11 10:00:00.102817] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:50.787 [2024-12-11 10:00:00.102823] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:50.787 [2024-12-11 10:00:00.102829] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
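At this point nvmfappstart has launched the tc2 target as pid 140517 inside the cvl_0_0_ns_spdk namespace with core mask 0x1E (cores 1-4, matching the four reactor notices that follow) and tracepoint group mask 0xFFFF, and waitforlisten is blocking until the RPC socket answers. A minimal sketch of that startup sequence, with condensed stand-ins for the traced nvmf/common.sh helpers -- the polling loop below is an assumption replacing the real waitforlisten, and paths are relative to the SPDK checkout:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!

# crude waitforlisten: poll the default RPC socket until the app finishes init
for _ in {1..100}; do
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_wait_init 2> /dev/null && break
    sleep 0.1
done

# with the app up, the harness creates the TCP transport (traced just below)
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192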
00:21:50.787 [2024-12-11 10:00:00.104245] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:21:50.787 [2024-12-11 10:00:00.104345] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:21:50.787 [2024-12-11 10:00:00.104457] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:50.787 [2024-12-11 10:00:00.104458] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:21:50.787 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:50.787 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:21:50.787 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:50.787 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:50.787 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:50.787 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:50.787 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:50.787 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.787 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:50.787 [2024-12-11 10:00:00.236698] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:50.787 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.787 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:50.787 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:50.787 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:50.787 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:50.787 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:50.787 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:50.787 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:50.787 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:50.787 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:50.787 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:50.787 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:50.787 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:21:50.787 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:50.787 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:50.787 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:50.787 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:50.787 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:50.787 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:50.787 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:50.787 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:50.787 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:50.787 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:50.787 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:50.787 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:50.787 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:50.787 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:50.787 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.787 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:50.787 Malloc1 00:21:50.787 [2024-12-11 10:00:00.339686] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:50.787 Malloc2 00:21:51.045 Malloc3 00:21:51.045 Malloc4 00:21:51.045 Malloc5 00:21:51.045 Malloc6 00:21:51.045 Malloc7 00:21:51.304 Malloc8 00:21:51.304 Malloc9 00:21:51.304 Malloc10 00:21:51.304 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.304 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:51.304 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:51.304 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:51.304 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=140635 00:21:51.304 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 140635 /var/tmp/bdevperf.sock 00:21:51.304 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 140635 ']' 00:21:51.304 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:51.304 10:00:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:51.304 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:51.304 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:51.304 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:51.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:51.304 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:21:51.304 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:51.304 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:21:51.304 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:51.304 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:51.304 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:51.304 { 00:21:51.304 "params": { 00:21:51.304 "name": "Nvme$subsystem", 00:21:51.304 "trtype": "$TEST_TRANSPORT", 00:21:51.304 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:51.304 "adrfam": "ipv4", 00:21:51.304 "trsvcid": "$NVMF_PORT", 00:21:51.304 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:51.304 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:51.304 "hdgst": ${hdgst:-false}, 00:21:51.304 "ddgst": ${ddgst:-false} 00:21:51.304 }, 00:21:51.304 "method": "bdev_nvme_attach_controller" 00:21:51.304 } 00:21:51.304 EOF 00:21:51.304 )") 00:21:51.304 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:51.304 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:51.304 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:51.304 { 00:21:51.304 "params": { 00:21:51.304 "name": "Nvme$subsystem", 00:21:51.304 "trtype": "$TEST_TRANSPORT", 00:21:51.304 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:51.304 "adrfam": "ipv4", 00:21:51.304 "trsvcid": "$NVMF_PORT", 00:21:51.304 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:51.304 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:51.304 "hdgst": ${hdgst:-false}, 00:21:51.304 "ddgst": ${ddgst:-false} 00:21:51.304 }, 00:21:51.304 "method": "bdev_nvme_attach_controller" 00:21:51.304 } 00:21:51.304 EOF 00:21:51.304 )") 00:21:51.304 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:51.304 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:51.304 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:51.304 { 00:21:51.304 "params": { 00:21:51.304 
"name": "Nvme$subsystem", 00:21:51.304 "trtype": "$TEST_TRANSPORT", 00:21:51.304 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:51.304 "adrfam": "ipv4", 00:21:51.304 "trsvcid": "$NVMF_PORT", 00:21:51.304 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:51.304 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:51.304 "hdgst": ${hdgst:-false}, 00:21:51.304 "ddgst": ${ddgst:-false} 00:21:51.304 }, 00:21:51.304 "method": "bdev_nvme_attach_controller" 00:21:51.304 } 00:21:51.304 EOF 00:21:51.304 )") 00:21:51.304 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:51.304 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:51.304 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:51.304 { 00:21:51.304 "params": { 00:21:51.304 "name": "Nvme$subsystem", 00:21:51.304 "trtype": "$TEST_TRANSPORT", 00:21:51.304 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:51.304 "adrfam": "ipv4", 00:21:51.304 "trsvcid": "$NVMF_PORT", 00:21:51.304 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:51.304 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:51.304 "hdgst": ${hdgst:-false}, 00:21:51.304 "ddgst": ${ddgst:-false} 00:21:51.304 }, 00:21:51.304 "method": "bdev_nvme_attach_controller" 00:21:51.304 } 00:21:51.304 EOF 00:21:51.304 )") 00:21:51.304 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:51.304 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:51.304 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:51.304 { 00:21:51.304 "params": { 00:21:51.304 "name": "Nvme$subsystem", 00:21:51.304 "trtype": "$TEST_TRANSPORT", 00:21:51.304 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:51.304 "adrfam": "ipv4", 00:21:51.304 "trsvcid": "$NVMF_PORT", 00:21:51.304 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:51.304 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:51.304 "hdgst": ${hdgst:-false}, 00:21:51.304 "ddgst": ${ddgst:-false} 00:21:51.304 }, 00:21:51.304 "method": "bdev_nvme_attach_controller" 00:21:51.304 } 00:21:51.304 EOF 00:21:51.304 )") 00:21:51.304 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:51.304 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:51.304 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:51.304 { 00:21:51.304 "params": { 00:21:51.304 "name": "Nvme$subsystem", 00:21:51.304 "trtype": "$TEST_TRANSPORT", 00:21:51.304 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:51.304 "adrfam": "ipv4", 00:21:51.304 "trsvcid": "$NVMF_PORT", 00:21:51.304 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:51.304 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:51.304 "hdgst": ${hdgst:-false}, 00:21:51.304 "ddgst": ${ddgst:-false} 00:21:51.304 }, 00:21:51.304 "method": "bdev_nvme_attach_controller" 00:21:51.304 } 00:21:51.304 EOF 00:21:51.304 )") 00:21:51.304 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:51.304 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:21:51.304 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:51.304 { 00:21:51.304 "params": { 00:21:51.304 "name": "Nvme$subsystem", 00:21:51.304 "trtype": "$TEST_TRANSPORT", 00:21:51.304 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:51.304 "adrfam": "ipv4", 00:21:51.304 "trsvcid": "$NVMF_PORT", 00:21:51.304 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:51.304 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:51.304 "hdgst": ${hdgst:-false}, 00:21:51.304 "ddgst": ${ddgst:-false} 00:21:51.304 }, 00:21:51.304 "method": "bdev_nvme_attach_controller" 00:21:51.304 } 00:21:51.304 EOF 00:21:51.304 )") 00:21:51.304 [2024-12-11 10:00:00.811763] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:21:51.304 [2024-12-11 10:00:00.811816] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140635 ] 00:21:51.304 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:51.304 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:51.304 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:51.304 { 00:21:51.304 "params": { 00:21:51.305 "name": "Nvme$subsystem", 00:21:51.305 "trtype": "$TEST_TRANSPORT", 00:21:51.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:51.305 "adrfam": "ipv4", 00:21:51.305 "trsvcid": "$NVMF_PORT", 00:21:51.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:51.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:51.305 "hdgst": ${hdgst:-false}, 00:21:51.305 "ddgst": ${ddgst:-false} 00:21:51.305 }, 00:21:51.305 "method": "bdev_nvme_attach_controller" 00:21:51.305 } 00:21:51.305 EOF 00:21:51.305 )") 00:21:51.305 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:51.305 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:51.305 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:51.305 { 00:21:51.305 "params": { 00:21:51.305 "name": "Nvme$subsystem", 00:21:51.305 "trtype": "$TEST_TRANSPORT", 00:21:51.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:51.305 "adrfam": "ipv4", 00:21:51.305 "trsvcid": "$NVMF_PORT", 00:21:51.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:51.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:51.305 "hdgst": ${hdgst:-false}, 00:21:51.305 "ddgst": ${ddgst:-false} 00:21:51.305 }, 00:21:51.305 "method": "bdev_nvme_attach_controller" 00:21:51.305 } 00:21:51.305 EOF 00:21:51.305 )") 00:21:51.305 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:51.305 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:51.305 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:51.305 { 00:21:51.305 "params": { 00:21:51.305 "name": "Nvme$subsystem", 00:21:51.305 "trtype": "$TEST_TRANSPORT", 00:21:51.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:51.305 "adrfam": 
"ipv4", 00:21:51.305 "trsvcid": "$NVMF_PORT", 00:21:51.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:51.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:51.305 "hdgst": ${hdgst:-false}, 00:21:51.305 "ddgst": ${ddgst:-false} 00:21:51.305 }, 00:21:51.305 "method": "bdev_nvme_attach_controller" 00:21:51.305 } 00:21:51.305 EOF 00:21:51.305 )") 00:21:51.305 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:51.305 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:21:51.305 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:21:51.305 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:51.305 "params": { 00:21:51.305 "name": "Nvme1", 00:21:51.305 "trtype": "tcp", 00:21:51.305 "traddr": "10.0.0.2", 00:21:51.305 "adrfam": "ipv4", 00:21:51.305 "trsvcid": "4420", 00:21:51.305 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:51.305 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:51.305 "hdgst": false, 00:21:51.305 "ddgst": false 00:21:51.305 }, 00:21:51.305 "method": "bdev_nvme_attach_controller" 00:21:51.305 },{ 00:21:51.305 "params": { 00:21:51.305 "name": "Nvme2", 00:21:51.305 "trtype": "tcp", 00:21:51.305 "traddr": "10.0.0.2", 00:21:51.305 "adrfam": "ipv4", 00:21:51.305 "trsvcid": "4420", 00:21:51.305 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:51.305 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:51.305 "hdgst": false, 00:21:51.305 "ddgst": false 00:21:51.305 }, 00:21:51.305 "method": "bdev_nvme_attach_controller" 00:21:51.305 },{ 00:21:51.305 "params": { 00:21:51.305 "name": "Nvme3", 00:21:51.305 "trtype": "tcp", 00:21:51.305 "traddr": "10.0.0.2", 00:21:51.305 "adrfam": "ipv4", 00:21:51.305 "trsvcid": "4420", 00:21:51.305 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:51.305 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:51.305 "hdgst": false, 00:21:51.305 "ddgst": false 00:21:51.305 }, 00:21:51.305 "method": "bdev_nvme_attach_controller" 00:21:51.305 },{ 00:21:51.305 "params": { 00:21:51.305 "name": "Nvme4", 00:21:51.305 "trtype": "tcp", 00:21:51.305 "traddr": "10.0.0.2", 00:21:51.305 "adrfam": "ipv4", 00:21:51.305 "trsvcid": "4420", 00:21:51.305 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:51.305 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:51.305 "hdgst": false, 00:21:51.305 "ddgst": false 00:21:51.305 }, 00:21:51.305 "method": "bdev_nvme_attach_controller" 00:21:51.305 },{ 00:21:51.305 "params": { 00:21:51.305 "name": "Nvme5", 00:21:51.305 "trtype": "tcp", 00:21:51.305 "traddr": "10.0.0.2", 00:21:51.305 "adrfam": "ipv4", 00:21:51.305 "trsvcid": "4420", 00:21:51.305 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:51.305 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:51.305 "hdgst": false, 00:21:51.305 "ddgst": false 00:21:51.305 }, 00:21:51.305 "method": "bdev_nvme_attach_controller" 00:21:51.305 },{ 00:21:51.305 "params": { 00:21:51.305 "name": "Nvme6", 00:21:51.305 "trtype": "tcp", 00:21:51.305 "traddr": "10.0.0.2", 00:21:51.305 "adrfam": "ipv4", 00:21:51.305 "trsvcid": "4420", 00:21:51.305 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:51.305 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:51.305 "hdgst": false, 00:21:51.305 "ddgst": false 00:21:51.305 }, 00:21:51.305 "method": "bdev_nvme_attach_controller" 00:21:51.305 },{ 00:21:51.305 "params": { 00:21:51.305 "name": "Nvme7", 00:21:51.305 "trtype": "tcp", 00:21:51.305 "traddr": "10.0.0.2", 00:21:51.305 
"adrfam": "ipv4", 00:21:51.305 "trsvcid": "4420", 00:21:51.305 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:51.305 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:51.305 "hdgst": false, 00:21:51.305 "ddgst": false 00:21:51.305 }, 00:21:51.305 "method": "bdev_nvme_attach_controller" 00:21:51.305 },{ 00:21:51.305 "params": { 00:21:51.305 "name": "Nvme8", 00:21:51.305 "trtype": "tcp", 00:21:51.305 "traddr": "10.0.0.2", 00:21:51.305 "adrfam": "ipv4", 00:21:51.305 "trsvcid": "4420", 00:21:51.305 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:51.305 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:51.305 "hdgst": false, 00:21:51.305 "ddgst": false 00:21:51.305 }, 00:21:51.305 "method": "bdev_nvme_attach_controller" 00:21:51.305 },{ 00:21:51.305 "params": { 00:21:51.305 "name": "Nvme9", 00:21:51.305 "trtype": "tcp", 00:21:51.305 "traddr": "10.0.0.2", 00:21:51.305 "adrfam": "ipv4", 00:21:51.305 "trsvcid": "4420", 00:21:51.305 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:51.305 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:51.305 "hdgst": false, 00:21:51.305 "ddgst": false 00:21:51.305 }, 00:21:51.305 "method": "bdev_nvme_attach_controller" 00:21:51.305 },{ 00:21:51.305 "params": { 00:21:51.305 "name": "Nvme10", 00:21:51.305 "trtype": "tcp", 00:21:51.305 "traddr": "10.0.0.2", 00:21:51.305 "adrfam": "ipv4", 00:21:51.305 "trsvcid": "4420", 00:21:51.305 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:51.305 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:51.305 "hdgst": false, 00:21:51.305 "ddgst": false 00:21:51.305 }, 00:21:51.305 "method": "bdev_nvme_attach_controller" 00:21:51.305 }' 00:21:51.563 [2024-12-11 10:00:00.895875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:51.563 [2024-12-11 10:00:00.935656] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:52.935 Running I/O for 10 seconds... 
00:21:53.193 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:53.193 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:21:53.193 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:53.193 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.193 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:53.193 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.193 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:53.193 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:53.193 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:53.193 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:21:53.193 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:21:53.193 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:53.193 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:53.193 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:53.193 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:53.193 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.193 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:53.193 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.193 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=15 00:21:53.193 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 15 -ge 100 ']' 00:21:53.193 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:53.451 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:53.451 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:53.451 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:53.451 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:53.451 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.451 10:00:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:53.710 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.710 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:21:53.710 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:21:53.710 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:21:53.710 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:21:53.710 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:21:53.710 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 140635 00:21:53.710 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 140635 ']' 00:21:53.710 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 140635 00:21:53.710 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:21:53.710 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:53.710 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 140635 00:21:53.710 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:53.710 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:53.710 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 140635' 00:21:53.710 killing process with pid 140635 00:21:53.710 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 140635 00:21:53.710 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 140635 00:21:53.710 Received shutdown signal, test time was about 0.708019 seconds 00:21:53.710 00:21:53.710 Latency(us) 00:21:53.710 [2024-12-11T09:00:03.285Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:53.710 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:53.710 Verification LBA range: start 0x0 length 0x400 00:21:53.710 Nvme1n1 : 0.68 280.80 17.55 0.00 0.00 224124.34 15291.73 195734.19 00:21:53.710 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:53.710 Verification LBA range: start 0x0 length 0x400 00:21:53.710 Nvme2n1 : 0.68 287.66 17.98 0.00 0.00 212110.98 4681.14 189742.32 00:21:53.710 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:53.710 Verification LBA range: start 0x0 length 0x400 00:21:53.710 Nvme3n1 : 0.68 284.15 17.76 0.00 0.00 210904.67 26464.06 198730.12 00:21:53.710 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:53.710 Verification LBA range: start 0x0 length 0x400 00:21:53.710 Nvme4n1 : 0.67 285.74 17.86 0.00 0.00 204649.00 13918.60 215707.06 
00:21:53.710 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:53.710 Verification LBA range: start 0x0 length 0x400 00:21:53.710 Nvme5n1 : 0.69 276.61 17.29 0.00 0.00 207266.70 21346.01 215707.06 00:21:53.710 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:53.710 Verification LBA range: start 0x0 length 0x400 00:21:53.710 Nvme6n1 : 0.70 274.91 17.18 0.00 0.00 203445.39 17101.78 215707.06 00:21:53.710 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:53.710 Verification LBA range: start 0x0 length 0x400 00:21:53.710 Nvme7n1 : 0.69 277.85 17.37 0.00 0.00 195913.31 15166.90 209715.20 00:21:53.710 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:53.710 Verification LBA range: start 0x0 length 0x400 00:21:53.710 Nvme8n1 : 0.70 273.90 17.12 0.00 0.00 194132.85 13981.01 217704.35 00:21:53.710 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:53.710 Verification LBA range: start 0x0 length 0x400 00:21:53.710 Nvme9n1 : 0.71 271.43 16.96 0.00 0.00 189697.79 17101.78 220700.28 00:21:53.710 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:53.710 Verification LBA range: start 0x0 length 0x400 00:21:53.710 Nvme10n1 : 0.71 272.17 17.01 0.00 0.00 184882.06 16103.13 234681.30 00:21:53.710 [2024-12-11T09:00:03.285Z] =================================================================================================================== 00:21:53.710 [2024-12-11T09:00:03.285Z] Total : 2785.22 174.08 0.00 0.00 202737.12 4681.14 234681.30 00:21:53.968 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:21:54.909 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 140517 00:21:54.909 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:21:54.909 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:54.909 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:54.909 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:54.909 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:54.909 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:54.909 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:21:54.909 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:54.910 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:21:54.910 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:54.910 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:54.910 rmmod nvme_tcp 00:21:54.910 rmmod nvme_fabrics 00:21:54.910 rmmod nvme_keyring 00:21:54.910 10:00:04 
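The pair of bdev_get_iostat polls above (read_io_count=15, then 131) is shutdown.sh's waitforio gate: the test refuses to kill bdevperf until Nvme1n1 has completed at least 100 reads, so the shutdown provably lands mid-workload. A simplified stand-in for that loop, assuming scripts/rpc.py is on PATH (the trace goes through the rpc_cmd wrapper instead) and using the traced retry budget of 10 polls at 0.25 s each:

waitforio_sketch() {
    local sock=$1 bdev=$2 ret=1 i read_io_count

    for ((i = 10; i != 0; i--)); do
        # bdev_get_iostat reports per-bdev counters; num_read_ops is the gate.
        read_io_count=$(rpc.py -s "$sock" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

# e.g.: waitforio_sketch /var/tmp/bdevperf.sock Nvme1n1 && kill "$perfpid"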
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:54.910 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:21:54.910 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:21:54.910 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 140517 ']' 00:21:54.910 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 140517 00:21:54.910 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 140517 ']' 00:21:54.910 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 140517 00:21:54.910 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:21:54.910 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:54.910 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 140517 00:21:55.168 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:55.168 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:55.168 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 140517' 00:21:55.168 killing process with pid 140517 00:21:55.168 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 140517 00:21:55.168 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 140517 00:21:55.427 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:55.427 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:55.427 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:55.427 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:21:55.427 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:21:55.427 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:55.427 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:21:55.427 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:55.427 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:55.427 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:55.427 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:55.427 10:00:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.421 10:00:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:57.421 00:21:57.421 real 0m7.349s 00:21:57.421 user 0m21.456s 00:21:57.421 sys 0m1.313s 00:21:57.421 10:00:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:57.421 10:00:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:57.421 ************************************ 00:21:57.421 END TEST nvmf_shutdown_tc2 00:21:57.421 ************************************ 00:21:57.421 10:00:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:57.421 10:00:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:57.421 10:00:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:57.421 10:00:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:57.681 ************************************ 00:21:57.681 START TEST nvmf_shutdown_tc3 00:21:57.681 ************************************ 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:57.681 10:00:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:57.681 10:00:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:57.681 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:57.681 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:57.681 10:00:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:57.681 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:57.682 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:57.682 Found net devices under 0000:af:00.0: cvl_0_0 00:21:57.682 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:57.682 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:57.682 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:57.682 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:57.682 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:57.682 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:57.682 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:57.682 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:57.682 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:57.682 Found net devices under 0000:af:00.1: cvl_0_1 00:21:57.682 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:57.682 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:57.682 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:57.682 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:57.682 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:57.682 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:57.682 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:57.682 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:57.682 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:57.682 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:57.682 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:57.682 10:00:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:57.682 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:57.682 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:57.682 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:57.682 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:57.682 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:57.682 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:57.682 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:57.682 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:57.682 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:57.682 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:57.682 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:57.682 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:57.682 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:57.682 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:57.941 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:57.941 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:57.941 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:57.941 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:57.941 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:21:57.941 00:21:57.941 --- 10.0.0.2 ping statistics --- 00:21:57.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:57.941 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:21:57.941 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:57.941 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:57.941 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:21:57.941 00:21:57.941 --- 10.0.0.1 ping statistics --- 00:21:57.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:57.941 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:21:57.941 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:57.941 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:21:57.941 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:57.941 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:57.941 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:57.941 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:57.941 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:57.941 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:57.941 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:57.941 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:57.941 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:57.941 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:57.941 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:57.941 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=141957 00:21:57.941 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 141957 00:21:57.941 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:57.941 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 141957 ']' 00:21:57.941 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:57.941 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:57.941 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:57.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
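Before tc3 starts its target, nvmftestinit rebuilds the two-port topology every one of these runs uses: the first e810 port (cvl_0_0) moves into a private namespace as the target side, the second port (cvl_0_1) stays in the root namespace as the initiator side, and both directions are ping-verified. Condensed from the ip commands traced above (interface names are specific to this rig; the repeated ip netns exec prefix on the nvmf_tgt launch appears to come from the prefix being prepended once per init in the same shell, which is harmless since re-entering the same namespace is a no-op):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator
# nvmf_tgt then runs inside the namespace so it can listen on 10.0.0.2:
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E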
00:21:57.941 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:57.941 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:57.941 [2024-12-11 10:00:07.375043] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:21:57.941 [2024-12-11 10:00:07.375085] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:57.941 [2024-12-11 10:00:07.458836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:57.942 [2024-12-11 10:00:07.500800] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:57.942 [2024-12-11 10:00:07.500834] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:57.942 [2024-12-11 10:00:07.500842] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:57.942 [2024-12-11 10:00:07.500849] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:57.942 [2024-12-11 10:00:07.500854] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:57.942 [2024-12-11 10:00:07.502461] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:21:57.942 [2024-12-11 10:00:07.502546] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:21:57.942 [2024-12-11 10:00:07.502630] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:57.942 [2024-12-11 10:00:07.502631] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:21:58.876 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:58.876 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:21:58.876 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:58.876 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:58.876 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:58.876 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:58.876 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:58.876 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.876 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:58.876 [2024-12-11 10:00:08.266380] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:58.876 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.876 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:58.876 10:00:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:58.876 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:58.876 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:58.876 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:58.876 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:58.876 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:58.876 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:58.876 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:58.876 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:58.876 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:58.876 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:58.876 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:58.876 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:58.876 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:58.876 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:58.876 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:58.876 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:58.876 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:58.876 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:58.876 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:58.876 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:58.876 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:58.876 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:58.876 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:58.876 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:58.876 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.876 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:58.876 Malloc1 
00:21:58.876 [2024-12-11 10:00:08.376367] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:58.876 Malloc2 00:21:58.876 Malloc3 00:21:59.134 Malloc4 00:21:59.134 Malloc5 00:21:59.134 Malloc6 00:21:59.134 Malloc7 00:21:59.134 Malloc8 00:21:59.134 Malloc9 00:21:59.392 Malloc10 00:21:59.392 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.392 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:59.392 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:59.392 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:59.392 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=142401 00:21:59.392 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 142401 /var/tmp/bdevperf.sock 00:21:59.392 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 142401 ']' 00:21:59.392 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:59.392 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:59.392 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:59.392 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:59.392 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:59.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
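waitforlisten then parks the test until the freshly forked bdevperf (pid 142401) both stays alive and answers RPC on /var/tmp/bdevperf.sock. A rough shape of that gate, assuming the liveness probe is an rpc_get_methods call over the socket; the traced max_retries=100 matches the helper, but the rest of its bookkeeping is omitted:

waitforlisten_sketch() {
    local pid=$1 rpc_addr=$2 max_retries=100 i

    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 1; i <= max_retries; i++)); do
        kill -0 "$pid" 2> /dev/null || return 1   # app died while starting
        # rpc_get_methods only succeeds once the app's RPC server is up
        if rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            return 0
        fi
        sleep 0.5
    done
    return 1
}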
00:21:59.392 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:21:59.392 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:59.392 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:21:59.392 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:59.392 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:59.392 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:59.392 { 00:21:59.392 "params": { 00:21:59.392 "name": "Nvme$subsystem", 00:21:59.392 "trtype": "$TEST_TRANSPORT", 00:21:59.392 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.392 "adrfam": "ipv4", 00:21:59.392 "trsvcid": "$NVMF_PORT", 00:21:59.392 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.392 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.392 "hdgst": ${hdgst:-false}, 00:21:59.392 "ddgst": ${ddgst:-false} 00:21:59.392 }, 00:21:59.392 "method": "bdev_nvme_attach_controller" 00:21:59.392 } 00:21:59.392 EOF 00:21:59.392 )") 00:21:59.392 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat
[the for/config+=/cat trace above is emitted verbatim once per subsystem; the remaining nine repetitions are elided]
00:21:59.393 [2024-12-11 10:00:08.849443] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization...
00:21:59.393 [2024-12-11 10:00:08.849497] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142401 ]
00:21:59.393 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:21:59.393 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:21:59.393 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:59.393 "params": { 00:21:59.393 "name": "Nvme1", 00:21:59.393 "trtype": "tcp", 00:21:59.393 "traddr": "10.0.0.2", 00:21:59.393 "adrfam": "ipv4", 00:21:59.393 "trsvcid": "4420", 00:21:59.393 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:59.393 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:59.393 "hdgst": false, 00:21:59.393 "ddgst": false 00:21:59.393 }, 00:21:59.393 "method": "bdev_nvme_attach_controller" 00:21:59.393 },{ 00:21:59.393 "params": { 00:21:59.393 "name": "Nvme2", 00:21:59.393 "trtype": "tcp", 00:21:59.393 "traddr": "10.0.0.2", 00:21:59.393 "adrfam": "ipv4", 00:21:59.393 "trsvcid": "4420", 00:21:59.393 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:59.393 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:59.393 "hdgst": false, 00:21:59.393 "ddgst": false 00:21:59.393 }, 00:21:59.393 "method": "bdev_nvme_attach_controller" 00:21:59.393 },{ 00:21:59.393 "params": { 00:21:59.393 "name": "Nvme3", 00:21:59.393 "trtype": "tcp", 00:21:59.393 "traddr": "10.0.0.2", 00:21:59.393 "adrfam": "ipv4", 00:21:59.393 "trsvcid": "4420", 00:21:59.393 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:59.393 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:59.393 "hdgst": false, 00:21:59.393 "ddgst": false 00:21:59.393 }, 00:21:59.393 "method": "bdev_nvme_attach_controller" 00:21:59.393 },{ 00:21:59.393 "params": { 00:21:59.393 "name": "Nvme4", 00:21:59.393 "trtype": "tcp", 00:21:59.393 "traddr": "10.0.0.2", 00:21:59.393 "adrfam": "ipv4", 00:21:59.393 "trsvcid": "4420", 00:21:59.393 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:59.394 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:59.394 "hdgst": false, 00:21:59.394 "ddgst": false 00:21:59.394 }, 00:21:59.394 "method": "bdev_nvme_attach_controller" 00:21:59.394 },{ 00:21:59.394 "params": { 00:21:59.394 "name": "Nvme5", 00:21:59.394 "trtype": "tcp", 00:21:59.394 "traddr": "10.0.0.2", 00:21:59.394 "adrfam": "ipv4", 00:21:59.394 "trsvcid": "4420", 00:21:59.394 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:59.394 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:59.394 "hdgst": false, 00:21:59.394 "ddgst": false 00:21:59.394 }, 00:21:59.394 "method": "bdev_nvme_attach_controller" 00:21:59.394 },{ 00:21:59.394 "params": { 00:21:59.394 "name": "Nvme6", 00:21:59.394 "trtype": "tcp", 00:21:59.394 "traddr": "10.0.0.2", 00:21:59.394 "adrfam": "ipv4", 00:21:59.394 "trsvcid": "4420", 00:21:59.394 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:59.394 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:59.394 "hdgst": false, 00:21:59.394 "ddgst": false 00:21:59.394 }, 00:21:59.394 "method": "bdev_nvme_attach_controller" 00:21:59.394 },{ 00:21:59.394 "params": { 00:21:59.394 "name": "Nvme7", 00:21:59.394 "trtype": "tcp", 00:21:59.394 "traddr": "10.0.0.2",
00:21:59.394 "adrfam": "ipv4", 00:21:59.394 "trsvcid": "4420", 00:21:59.394 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:59.394 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:59.394 "hdgst": false, 00:21:59.394 "ddgst": false 00:21:59.394 }, 00:21:59.394 "method": "bdev_nvme_attach_controller" 00:21:59.394 },{ 00:21:59.394 "params": { 00:21:59.394 "name": "Nvme8", 00:21:59.394 "trtype": "tcp", 00:21:59.394 "traddr": "10.0.0.2", 00:21:59.394 "adrfam": "ipv4", 00:21:59.394 "trsvcid": "4420", 00:21:59.394 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:59.394 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:59.394 "hdgst": false, 00:21:59.394 "ddgst": false 00:21:59.394 }, 00:21:59.394 "method": "bdev_nvme_attach_controller" 00:21:59.394 },{ 00:21:59.394 "params": { 00:21:59.394 "name": "Nvme9", 00:21:59.394 "trtype": "tcp", 00:21:59.394 "traddr": "10.0.0.2", 00:21:59.394 "adrfam": "ipv4", 00:21:59.394 "trsvcid": "4420", 00:21:59.394 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:59.394 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:59.394 "hdgst": false, 00:21:59.394 "ddgst": false 00:21:59.394 }, 00:21:59.394 "method": "bdev_nvme_attach_controller" 00:21:59.394 },{ 00:21:59.394 "params": { 00:21:59.394 "name": "Nvme10", 00:21:59.394 "trtype": "tcp", 00:21:59.394 "traddr": "10.0.0.2", 00:21:59.394 "adrfam": "ipv4", 00:21:59.394 "trsvcid": "4420", 00:21:59.394 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:59.394 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:59.394 "hdgst": false, 00:21:59.394 "ddgst": false 00:21:59.394 }, 00:21:59.394 "method": "bdev_nvme_attach_controller" 00:21:59.394 }' 00:21:59.394 [2024-12-11 10:00:08.936543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.652 [2024-12-11 10:00:08.976782] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:01.023 Running I/O for 10 seconds... 
00:22:01.281 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:01.281 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:22:01.281 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:01.281 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.281 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:01.281 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.281 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:01.281 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:01.281 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:01.281 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:01.281 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:22:01.281 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:22:01.281 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:01.281 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:01.281 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:01.281 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:01.281 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.281 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:01.281 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.281 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:01.281 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:01.281 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:01.540 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:01.540 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:01.540 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:01.540 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:01.540 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.540 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:01.540 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.815 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:01.815 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:01.815 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:22:01.815 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:22:01.815 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:22:01.815 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 141957 00:22:01.815 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 141957 ']' 00:22:01.815 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 141957 00:22:01.815 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:22:01.815 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:01.815 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 141957 00:22:01.815 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:01.815 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:01.815 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 141957' 00:22:01.815 killing process with pid 141957 00:22:01.815 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 141957 00:22:01.815 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 141957 00:22:01.815 [2024-12-11 10:00:11.189431] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c66d0 is same with the state(6) to be set 00:22:01.815 [2024-12-11 10:00:11.189507] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c66d0 is same with the state(6) to be set 00:22:01.815 [2024-12-11 10:00:11.189515] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c66d0 is same with the state(6) to be set 00:22:01.815 [2024-12-11 10:00:11.189522] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c66d0 is same with the state(6) to be set 00:22:01.815 [2024-12-11 10:00:11.189529] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c66d0 is same with the state(6) to be set 00:22:01.815 [2024-12-11 10:00:11.189535] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x5c66d0 is same with the state(6) to be set 00:22:01.815 [the nvmf_tcp_qpair_set_recv_state error above repeats verbatim with only the timestamp changing: for tqpair=0x5c66d0 through 10:00:11.189888, then for tqpair=0x7395b0 from 10:00:11.190784 through 10:00:11.191196, for tqpair=0x5c7070 from 10:00:11.193484 through 10:00:11.193888, for tqpair=0x5c7560 from 10:00:11.194854 through 10:00:11.195253, and for tqpair=0x5c78e0 from 10:00:11.195957 through 10:00:11.196314; the repeated lines are elided]
00:22:01.818 [2024-12-11 10:00:11.196320] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c78e0 is same with the state(6) to be set 00:22:01.818 [2024-12-11 10:00:11.196326] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c78e0 is same with the state(6) to be set 00:22:01.818 [2024-12-11 10:00:11.196332] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c78e0 is same with the state(6) to be set 00:22:01.818 [2024-12-11 10:00:11.196338] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c78e0 is same with the state(6) to be set 00:22:01.818 [2024-12-11 10:00:11.196343] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c78e0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197180] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197193] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197199] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197207] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197213] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197222] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197228] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197235] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197241] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197246] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197252] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197258] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197264] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197270] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197276] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197283] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197288] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is 
same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197294] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197300] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197306] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197312] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197317] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197323] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197329] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197335] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197341] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197347] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197353] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197358] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197364] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197372] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197378] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197384] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197394] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197400] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197405] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197411] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197417] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197423] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197429] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197435] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197441] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197447] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197452] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197458] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197464] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197470] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197477] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197483] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197489] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197496] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197501] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197507] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197513] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197518] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197524] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197530] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197537] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197543] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197549] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197555] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197561] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.197567] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7db0 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.199411] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c8280 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.199427] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c8280 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.199433] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c8280 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.199439] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c8280 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.199445] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c8280 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.199451] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c8280 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.199457] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c8280 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.199462] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c8280 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.199468] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c8280 is same with the state(6) to be set 00:22:01.819 [2024-12-11 10:00:11.199473] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c8280 is same with the state(6) to be set 00:22:01.820 [2024-12-11 10:00:11.199479] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c8280 is same with the state(6) to be set 00:22:01.820 [2024-12-11 10:00:11.199485] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c8280 is same with the state(6) to be set 00:22:01.820 [2024-12-11 10:00:11.199490] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c8280 is same with the state(6) to be set 00:22:01.820 [2024-12-11 10:00:11.199496] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c8280 is same with the state(6) to be set 00:22:01.820 [2024-12-11 10:00:11.199502] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c8280 is same with the state(6) to be set 00:22:01.820 [2024-12-11 10:00:11.199508] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c8280 is same with the state(6) to be set 00:22:01.820 [2024-12-11 10:00:11.199513] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c8280 is same with the state(6) to be set 00:22:01.820 [2024-12-11 10:00:11.199519] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c8280 is same with the state(6) to be set 00:22:01.820 [2024-12-11 10:00:11.199524] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c8280 is same with the state(6) to be set 
00:22:01.820 [2024-12-11 10:00:11.199530] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c8280 is same with the state(6) to be set 00:22:01.820 [2024-12-11 10:00:11.199536] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c8280 is same with the state(6) to be set 00:22:01.820 [2024-12-11 10:00:11.199541] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c8280 is same with the state(6) to be set 00:22:01.820 [2024-12-11 10:00:11.199550] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c8280 is same with the state(6) to be set 00:22:01.820 [2024-12-11 10:00:11.199556] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c8280 is same with the state(6) to be set 00:22:01.820 [2024-12-11 10:00:11.199561] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c8280 is same with the state(6) to be set 00:22:01.820 [2024-12-11 10:00:11.199567] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c8280 is same with the state(6) to be set 00:22:01.820 [2024-12-11 10:00:11.199572] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c8280 is same with the state(6) to be set 00:22:01.820 [2024-12-11 10:00:11.199579] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c8280 is same with the state(6) to be set 00:22:01.820 [2024-12-11 10:00:11.199585] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c8280 is same with the state(6) to be set 00:22:01.820 [2024-12-11 10:00:11.199590] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c8280 is same with the state(6) to be set 00:22:01.820 [2024-12-11 10:00:11.199596] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c8280 is same with the state(6) to be set 00:22:01.820 [2024-12-11 10:00:11.199602] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c8280 is same with the state(6) to be set 00:22:01.820 [2024-12-11 10:00:11.199609] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c8280 is same with the state(6) to be set 00:22:01.820 [2024-12-11 10:00:11.199615] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c8280 is same with the state(6) to be set 00:22:01.820 [2024-12-11 10:00:11.199622] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c8280 is same with the state(6) to be set 00:22:01.820 [2024-12-11 10:00:11.199628] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c8280 is same with the state(6) to be set 00:22:01.820 [2024-12-11 10:00:11.199634] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c8280 is same with the state(6) to be set 00:22:01.820 [2024-12-11 10:00:11.199640] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c8280 is same with the state(6) to be set 00:22:01.820 [2024-12-11 10:00:11.199645] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c8280 is same with the state(6) to be set 00:22:01.820 [2024-12-11 10:00:11.199651] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c8280 is same with the state(6) to be set 00:22:01.820 [2024-12-11 10:00:11.199657] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c8280 is 
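All of the tcp.c:1790 lines above are one guard firing in a tight loop: the target keeps asking a TCP qpair to enter the receive state it is already in, and the setter reports the no-op instead of reapplying it. A minimal stand-alone sketch of that kind of guard (simplified names and states, not the actual SPDK source; what state 6 maps to in the real enum is not asserted here):

    #include <stdio.h>

    /* Hypothetical stand-ins for SPDK's PDU receive states; only the
     * numeric value matters for this sketch ("state(6)" in the log above). */
    enum pdu_recv_state {
        PDU_RECV_STATE_AWAIT_PDU_READY = 0,
        PDU_RECV_STATE_6 = 6,
    };

    struct tcp_qpair {
        enum pdu_recv_state recv_state;
    };

    /* Guard pattern: setting the state a qpair is already in is logged
     * and ignored, so a caller stuck in a loop floods the log. */
    static void set_recv_state(struct tcp_qpair *tqpair, enum pdu_recv_state state)
    {
        if (tqpair->recv_state == state) {
            fprintf(stderr,
                    "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                    (void *)tqpair, (int)state);
            return;
        }
        tqpair->recv_state = state;
    }

    int main(void)
    {
        struct tcp_qpair q = { .recv_state = PDU_RECV_STATE_6 };
        set_recv_state(&q, PDU_RECV_STATE_6); /* duplicate: logged, state unchanged */
        set_recv_state(&q, PDU_RECV_STATE_6); /* logged again, hence the repetition */
        return 0;
    }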
00:22:01.820 [2024-12-11 10:00:11.199880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:01.820 [2024-12-11 10:00:11.199910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.820 (same ASYNC EVENT REQUEST/ABORTED - SQ DELETION pair repeated for cid:1, cid:2 and cid:3, timestamps 10:00:11.199920 - 10:00:11.199956)
00:22:01.820 [2024-12-11 10:00:11.199963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe17810 is same with the state(6) to be set
00:22:01.820 (same group of four aborted ASYNC EVENT REQUESTs followed by one nvme_tcp.c recv-state error, repeated for tqpair=0x1281360, 0x124d8a0, 0xd38610, 0x12438d0, 0xe17610, 0xe23340 and 0xe237d0, timestamps 10:00:11.199992 - 10:00:11.200565)
00:22:01.821 [2024-12-11 10:00:11.200520 - 10:00:11.200926] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c8770 is same with the state(6) to be set (message repeated 63 times, interleaved with the surrounding nvme_tcp.c and nvme_qpair.c output)
00:22:01.821 [2024-12-11 10:00:11.200847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.821 [2024-12-11 10:00:11.200870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.822 (same WRITE/ABORTED - SQ DELETION pair repeated for cid:31 through cid:63, lba stepping by 128 from 28544 to 32640, timestamps 10:00:11.200886 - 10:00:11.201396)
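Every completion in this stretch carries the same status, printed as (00/08): status code type 0x0 is the NVMe generic command status set, and status code 0x08 in that set is "command aborted due to SQ deletion", i.e. the queues were deleted while these commands were still outstanding. A small self-contained sketch of how that sct/sc pair and the p/m/dnr flags decode (field widths follow the NVMe base specification's completion status word; the struct and helper names here are made up):

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative layout of the NVMe completion status word: phase tag,
     * 8-bit status code, 3-bit status code type, retry delay, more,
     * do-not-retry. Bitfield order is for illustration only. */
    struct nvme_status {
        uint16_t p   : 1;
        uint16_t sc  : 8;
        uint16_t sct : 3;
        uint16_t crd : 2;
        uint16_t m   : 1;
        uint16_t dnr : 1;
    };

    static const char *generic_sc_name(unsigned sc)
    {
        switch (sc) {
        case 0x00: return "SUCCESS";
        case 0x08: return "ABORTED - SQ DELETION";
        default:   return "OTHER GENERIC STATUS";
        }
    }

    int main(void)
    {
        /* The status seen on every completion above: sct=0x0, sc=0x08. */
        struct nvme_status st = { .p = 0, .sc = 0x08, .sct = 0x0,
                                  .crd = 0, .m = 0, .dnr = 0 };
        printf("%s (%02x/%02x) p:%u m:%u dnr:%u\n",
               st.sct == 0 ? generic_sc_name(st.sc) : "NON-GENERIC",
               (unsigned)st.sct, (unsigned)st.sc,
               (unsigned)st.p, (unsigned)st.m, (unsigned)st.dnr);
        return 0;
    }

With dnr:0 on an abort like this, the initiator is permitted to retry the command once the queues exist again.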
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.822 [2024-12-11 10:00:11.201074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.822 [2024-12-11 10:00:11.201083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.822 [2024-12-11 10:00:11.201090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.822 [2024-12-11 10:00:11.201098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.822 [2024-12-11 10:00:11.201105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.822 [2024-12-11 10:00:11.201113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.822 [2024-12-11 10:00:11.201119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.822 [2024-12-11 10:00:11.201127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.822 [2024-12-11 10:00:11.201133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.822 [2024-12-11 10:00:11.201141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.822 [2024-12-11 10:00:11.201148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.822 [2024-12-11 10:00:11.201156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.822 [2024-12-11 10:00:11.201163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.822 [2024-12-11 10:00:11.201171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.822 [2024-12-11 10:00:11.201177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.822 [2024-12-11 10:00:11.201185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.822 [2024-12-11 10:00:11.201192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.822 [2024-12-11 10:00:11.201200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.822 [2024-12-11 10:00:11.201207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.822 [2024-12-11 10:00:11.201215] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.822 [2024-12-11 10:00:11.201227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.822 [2024-12-11 10:00:11.201235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.822 [2024-12-11 10:00:11.201246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.822 [2024-12-11 10:00:11.201255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.822 [2024-12-11 10:00:11.201261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.822 [2024-12-11 10:00:11.201269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.822 [2024-12-11 10:00:11.201276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.822 [2024-12-11 10:00:11.201284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.822 [2024-12-11 10:00:11.201291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.822 [2024-12-11 10:00:11.201299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.822 [2024-12-11 10:00:11.201307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.822 [2024-12-11 10:00:11.201315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.822 [2024-12-11 10:00:11.201324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.822 [2024-12-11 10:00:11.201332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.822 [2024-12-11 10:00:11.201338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.822 [2024-12-11 10:00:11.201346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.822 [2024-12-11 10:00:11.201353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.822 [2024-12-11 10:00:11.201361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.822 [2024-12-11 10:00:11.201367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.822 [2024-12-11 10:00:11.201375] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.822 [2024-12-11 10:00:11.201381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.822 [2024-12-11 10:00:11.201389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.822 [2024-12-11 10:00:11.201396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.822 [2024-12-11 10:00:11.201404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.822 [2024-12-11 10:00:11.201410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.822 [2024-12-11 10:00:11.201418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.822 [2024-12-11 10:00:11.201424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.822 [2024-12-11 10:00:11.201434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.822 [2024-12-11 10:00:11.201440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.822 [2024-12-11 10:00:11.201448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.822 [2024-12-11 10:00:11.201454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.822 [2024-12-11 10:00:11.201462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.822 [2024-12-11 10:00:11.201469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.823 [2024-12-11 10:00:11.201477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.823 [2024-12-11 10:00:11.201483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.823 [2024-12-11 10:00:11.201491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.823 [2024-12-11 10:00:11.201497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.823 [2024-12-11 10:00:11.201505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.823 
[2024-12-11 10:00:11.201506] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.823 [2024-12-11 10:00:11.201512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.823 [2024-12-11 10:00:11.201521] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.823 [2024-12-11 10:00:11.201522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.823 [2024-12-11 10:00:11.201532] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.823 [2024-12-11 10:00:11.201532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.823 [2024-12-11 10:00:11.201540] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.823 [2024-12-11 10:00:11.201543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.823 [2024-12-11 10:00:11.201547] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.823 [2024-12-11 10:00:11.201551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.823 [2024-12-11 10:00:11.201554] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.823 [2024-12-11 10:00:11.201561] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.823 [2024-12-11 10:00:11.201561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.823 [2024-12-11 10:00:11.201569] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.823 [2024-12-11 10:00:11.201572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.823 [2024-12-11 10:00:11.201576] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.823 [2024-12-11 10:00:11.201583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.823 [2024-12-11 10:00:11.201584] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.823 [2024-12-11 10:00:11.201592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.823 [2024-12-11 10:00:11.201593] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.823 [2024-12-11 10:00:11.201601] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.823 [2024-12-11 10:00:11.201603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.823 [2024-12-11 10:00:11.201607] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*:
The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.823 [2024-12-11 10:00:11.201610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.823 [2024-12-11 10:00:11.201614] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.823 [2024-12-11 10:00:11.201620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.823 [2024-12-11 10:00:11.201621] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.823 [2024-12-11 10:00:11.201628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.823 [2024-12-11 10:00:11.201629] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.823 [2024-12-11 10:00:11.201636] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.823 [2024-12-11 10:00:11.201637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.823 [2024-12-11 10:00:11.201643] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.823 [2024-12-11 10:00:11.201645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.823 [2024-12-11 10:00:11.201653] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.823 [2024-12-11 10:00:11.201654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.823 [2024-12-11 10:00:11.201660] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.823 [2024-12-11 10:00:11.201662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.823 [2024-12-11 10:00:11.201667] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.823 [2024-12-11 10:00:11.201671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.823 [2024-12-11 10:00:11.201674] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.823 [2024-12-11 10:00:11.201678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.823 [2024-12-11 10:00:11.201681] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.823 [2024-12-11 10:00:11.201689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.823 [2024-12-11 10:00:11.201696] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.823 [2024-12-11 10:00:11.201704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.823 [2024-12-11 10:00:11.201711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.823 [2024-12-11 10:00:11.201719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.823 [2024-12-11 10:00:11.201725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.823 [2024-12-11 10:00:11.201733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.823 [2024-12-11 10:00:11.201739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.823 [2024-12-11 10:00:11.201747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.823 [2024-12-11 10:00:11.201753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.823 [2024-12-11 10:00:11.201762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.823 [2024-12-11 10:00:11.201768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.823 [2024-12-11 10:00:11.201776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.823 [2024-12-11 10:00:11.201782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.823 [2024-12-11 10:00:11.201790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.823 [2024-12-11 10:00:11.201797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.823 [2024-12-11 10:00:11.201805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.823 [2024-12-11 10:00:11.201813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.823 [2024-12-11 10:00:11.201821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.823 [2024-12-11 10:00:11.201829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.823 [2024-12-11 10:00:11.201837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.823 [2024-12-11 10:00:11.201843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.823 [2024-12-11 10:00:11.201851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.823 [2024-12-11 10:00:11.201858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.823 [2024-12-11 10:00:11.201868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.823 [2024-12-11 10:00:11.201874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.823 [2024-12-11 10:00:11.201901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:01.823 [2024-12-11 10:00:11.202170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.824 [2024-12-11 10:00:11.202186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.824 [2024-12-11 10:00:11.202197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.824 [2024-12-11 10:00:11.202204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.824 [2024-12-11 10:00:11.202213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.824 [2024-12-11 10:00:11.202226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.824 [2024-12-11 10:00:11.202234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.824 [2024-12-11 10:00:11.202241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.824 [2024-12-11 10:00:11.202249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.824 [2024-12-11 10:00:11.202256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.824 [2024-12-11 10:00:11.202264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.824 [2024-12-11 10:00:11.202270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.824 [2024-12-11 10:00:11.202278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.824 [2024-12-11 10:00:11.202285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.824 [2024-12-11 10:00:11.202295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.824 [2024-12-11 10:00:11.202302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.824 [2024-12-11 10:00:11.202309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.824 [2024-12-11 10:00:11.202316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.824 [2024-12-11 10:00:11.202323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.824 [2024-12-11 10:00:11.202330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.824 [2024-12-11 10:00:11.202338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.824 [2024-12-11 10:00:11.202345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.824 [2024-12-11 10:00:11.202356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.824 [2024-12-11 10:00:11.202362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.824 [2024-12-11 10:00:11.202370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.824 [2024-12-11 10:00:11.202378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.824 [2024-12-11 10:00:11.202386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.824 [2024-12-11 10:00:11.202393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.824 [2024-12-11 10:00:11.202401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.824 [2024-12-11 10:00:11.202407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.824 [2024-12-11 10:00:11.202415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.824 [2024-12-11 10:00:11.202421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.824 [2024-12-11 10:00:11.202429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.824 [2024-12-11 10:00:11.202436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.824 [2024-12-11 10:00:11.202443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 
lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.824 [2024-12-11 10:00:11.202450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.824 [2024-12-11 10:00:11.202458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.824 [2024-12-11 10:00:11.202464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.824 [2024-12-11 10:00:11.202472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.824 [2024-12-11 10:00:11.202478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.824 [2024-12-11 10:00:11.202486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.824 [2024-12-11 10:00:11.202493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.824 [2024-12-11 10:00:11.202501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.824 [2024-12-11 10:00:11.202507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.824 [2024-12-11 10:00:11.202515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.824 [2024-12-11 10:00:11.202522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.824 [2024-12-11 10:00:11.202531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.824 [2024-12-11 10:00:11.202539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.824 [2024-12-11 10:00:11.202547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.824 [2024-12-11 10:00:11.202553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.824 [2024-12-11 10:00:11.202561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.824 [2024-12-11 10:00:11.202568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.824 [2024-12-11 10:00:11.202576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.824 [2024-12-11 10:00:11.202582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.824 [2024-12-11 10:00:11.202620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.824 [2024-12-11 10:00:11.202671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.824 [2024-12-11 10:00:11.202720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.824 [2024-12-11 10:00:11.202766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.824 [2024-12-11 10:00:11.202815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.824 [2024-12-11 10:00:11.202865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.824 [2024-12-11 10:00:11.202914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.824 [2024-12-11 10:00:11.202959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.824 [2024-12-11 10:00:11.203007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.824 [2024-12-11 10:00:11.203053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.824 [2024-12-11 10:00:11.203101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.824 [2024-12-11 10:00:11.203146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.824 [2024-12-11 10:00:11.203199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.824 [2024-12-11 10:00:11.203251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.824 [2024-12-11 10:00:11.203301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.824 [2024-12-11 10:00:11.203347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.824 [2024-12-11 10:00:11.203395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.824 [2024-12-11 10:00:11.203440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.824 [2024-12-11 10:00:11.203493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.824 [2024-12-11 10:00:11.203538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.824 [2024-12-11 10:00:11.203586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.824 [2024-12-11 10:00:11.203630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.824 [2024-12-11 10:00:11.203677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.824 [2024-12-11 10:00:11.212511] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.824 [2024-12-11 10:00:11.212522] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.824 [2024-12-11 10:00:11.212531] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.824 [2024-12-11 10:00:11.212539] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.825 [2024-12-11 10:00:11.212546] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.825 [2024-12-11 10:00:11.212553] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.825 [2024-12-11 10:00:11.212561] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.825 [2024-12-11 10:00:11.212568] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.825 [2024-12-11 10:00:11.212576] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.825 [2024-12-11 10:00:11.212584] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.825 [2024-12-11 10:00:11.212590] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.825 [2024-12-11 10:00:11.212598] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.825 [2024-12-11 10:00:11.212605] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.825 [2024-12-11 10:00:11.212612] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.825 [2024-12-11 10:00:11.212619] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.825 [2024-12-11 10:00:11.212626] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.825 [2024-12-11 10:00:11.212634] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.825 [2024-12-11 10:00:11.212641] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.825 [2024-12-11 10:00:11.212648] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.825 [2024-12-11 10:00:11.212655] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.825 [2024-12-11 10:00:11.212663] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.825 [2024-12-11 10:00:11.212670] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.825 [2024-12-11 10:00:11.212684] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.825 [2024-12-11 10:00:11.212691] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.825 [2024-12-11 10:00:11.212698] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.825 [2024-12-11 10:00:11.212705] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.825 [2024-12-11 10:00:11.212713] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.825 [2024-12-11 10:00:11.212721] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.825 [2024-12-11 10:00:11.212728] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.825 [2024-12-11 10:00:11.212735] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.825 [2024-12-11 10:00:11.212742] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.825 [2024-12-11 10:00:11.212749] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.825 [2024-12-11 10:00:11.212756] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.825 [2024-12-11 10:00:11.212763] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.825 [2024-12-11 10:00:11.212771] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.825 [2024-12-11 10:00:11.212778] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.825 [2024-12-11 10:00:11.212785] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.825 [2024-12-11 10:00:11.212792] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.825 [2024-12-11 10:00:11.212799] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.825 [2024-12-11 10:00:11.212806] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7390e0 is same with the state(6) to be set 00:22:01.825 [2024-12-11 10:00:11.218407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
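For reference, each *NOTICE* pair in the runs above is the host driver printing an aborted I/O command together with its completion status. The "(00/08)" token is the completion's (status code type / status code): SCT 0x0 is the NVMe generic command status class, in which SC 0x08 is "Command Aborted due to SQ Deletion", matching the printed label. A minimal, self-contained decode of that status word (an illustrative C sketch, not SPDK source; the bit layout follows the NVMe completion status with the phase bit kept in bit 0):

#include <stdint.h>
#include <stdio.h>

/* NVMe completion status word, phase bit in bit 0:
 * [0] P (phase), [8:1] SC, [11:9] SCT, [13:12] CRD, [14] M, [15] DNR */
static void print_status(uint16_t sw)
{
    unsigned p   = sw & 0x1u;          /* phase tag */
    unsigned sc  = (sw >> 1) & 0xffu;  /* status code */
    unsigned sct = (sw >> 9) & 0x7u;   /* status code type */
    unsigned m   = (sw >> 14) & 0x1u;  /* more */
    unsigned dnr = (sw >> 15) & 0x1u;  /* do not retry */

    printf("(%02x/%02x)%s p:%u m:%u dnr:%u\n", sct, sc,
           (sct == 0x0 && sc == 0x08) ? " ABORTED - SQ DELETION" : "",
           p, m, dnr);
}

int main(void)
{
    /* SCT 0x0, SC 0x08 -- the pair printed throughout this log */
    print_status(0x08 << 1);
    return 0;
}

Note that dnr:0 in these prints means the Do Not Retry bit is clear, which is consistent with the driver's reset-and-reconnect attempts that follow.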
00:22:01.825 [2024-12-11 10:00:11.218425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.825 [2024-12-11 10:00:11.218435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.825 [2024-12-11 10:00:11.218445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.825 [2024-12-11 10:00:11.218454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.825 [2024-12-11 10:00:11.218466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.825 [2024-12-11 10:00:11.218475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.825 [2024-12-11 10:00:11.218486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.825 [2024-12-11 10:00:11.218497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.825 [2024-12-11 10:00:11.218508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.825 [2024-12-11 10:00:11.218518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.825 [2024-12-11 10:00:11.218529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.825 [2024-12-11 10:00:11.218538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.825 [2024-12-11 10:00:11.218549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.825 [2024-12-11 10:00:11.218558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.825 [2024-12-11 10:00:11.218569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.825 [2024-12-11 10:00:11.218579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.825 [2024-12-11 10:00:11.218590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.825 [2024-12-11 10:00:11.218599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.825 [2024-12-11 10:00:11.218610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.825 [2024-12-11 10:00:11.218619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.825 
[2024-12-11 10:00:11.218631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.825 [2024-12-11 10:00:11.218640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.825 [2024-12-11 10:00:11.218651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.825 [2024-12-11 10:00:11.218660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.825 [2024-12-11 10:00:11.218670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.825 [2024-12-11 10:00:11.218680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.825 [2024-12-11 10:00:11.218691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.825 [2024-12-11 10:00:11.218700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.825 [2024-12-11 10:00:11.218711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.825 [2024-12-11 10:00:11.218720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.825 [2024-12-11 10:00:11.218731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.825 [2024-12-11 10:00:11.218740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.825 [2024-12-11 10:00:11.218753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.825 [2024-12-11 10:00:11.218762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.825 [2024-12-11 10:00:11.218773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.825 [2024-12-11 10:00:11.218783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.825 [2024-12-11 10:00:11.218794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.825 [2024-12-11 10:00:11.218803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.825 [2024-12-11 10:00:11.218813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.825 [2024-12-11 10:00:11.218822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.825 [2024-12-11 
10:00:11.218834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.825 [2024-12-11 10:00:11.218843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.825 [2024-12-11 10:00:11.218853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.825 [2024-12-11 10:00:11.218863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.825 [2024-12-11 10:00:11.218874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.825 [2024-12-11 10:00:11.218883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.825 [2024-12-11 10:00:11.218894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.826 [2024-12-11 10:00:11.218904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.826 [2024-12-11 10:00:11.218915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.826 [2024-12-11 10:00:11.218924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.826 [2024-12-11 10:00:11.219205] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe17810 (9): Bad file descriptor 00:22:01.826 [2024-12-11 10:00:11.219246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1281360 (9): Bad file descriptor 00:22:01.826 [2024-12-11 10:00:11.219262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x124d8a0 (9): Bad file descriptor 00:22:01.826 [2024-12-11 10:00:11.219284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd38610 (9): Bad file descriptor 00:22:01.826 [2024-12-11 10:00:11.219316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.826 [2024-12-11 10:00:11.219328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.826 [2024-12-11 10:00:11.219339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.826 [2024-12-11 10:00:11.219349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.826 [2024-12-11 10:00:11.219362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.826 [2024-12-11 10:00:11.219371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.826 [2024-12-11 10:00:11.219381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) 
qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.826 [2024-12-11 10:00:11.219390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.826 [2024-12-11 10:00:11.219399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1281140 is same with the state(6) to be set 00:22:01.826 [2024-12-11 10:00:11.219432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.826 [2024-12-11 10:00:11.219444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.826 [2024-12-11 10:00:11.219453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.826 [2024-12-11 10:00:11.219463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.826 [2024-12-11 10:00:11.219473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.826 [2024-12-11 10:00:11.219482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.826 [2024-12-11 10:00:11.219492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.826 [2024-12-11 10:00:11.219501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.826 [2024-12-11 10:00:11.219510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1291080 is same with the state(6) to be set 00:22:01.826 [2024-12-11 10:00:11.219531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12438d0 (9): Bad file descriptor 00:22:01.826 [2024-12-11 10:00:11.219550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe17610 (9): Bad file descriptor 00:22:01.826 [2024-12-11 10:00:11.219570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe23340 (9): Bad file descriptor 00:22:01.826 [2024-12-11 10:00:11.219592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe237d0 (9): Bad file descriptor 00:22:01.826 [2024-12-11 10:00:11.222483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:22:01.826 [2024-12-11 10:00:11.222977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:22:01.826 [2024-12-11 10:00:11.223181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.826 [2024-12-11 10:00:11.223203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe17810 with addr=10.0.0.2, port=4420 00:22:01.826 [2024-12-11 10:00:11.223215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe17810 is same with the state(6) to be set 00:22:01.826 [2024-12-11 10:00:11.224377] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:01.826 [2024-12-11 10:00:11.224550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 
111 00:22:01.826 [2024-12-11 10:00:11.224570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd38610 with addr=10.0.0.2, port=4420 00:22:01.826 [2024-12-11 10:00:11.224582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd38610 is same with the state(6) to be set 00:22:01.826 [2024-12-11 10:00:11.224601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe17810 (9): Bad file descriptor 00:22:01.826 [2024-12-11 10:00:11.224665] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:01.826 [2024-12-11 10:00:11.224730] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:01.826 [2024-12-11 10:00:11.224802] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:01.826 [2024-12-11 10:00:11.224853] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:01.826 [2024-12-11 10:00:11.224904] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:01.826 [2024-12-11 10:00:11.224966] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:01.826 [2024-12-11 10:00:11.225019] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:01.826 [2024-12-11 10:00:11.225049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd38610 (9): Bad file descriptor 00:22:01.826 [2024-12-11 10:00:11.225064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:22:01.826 [2024-12-11 10:00:11.225073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:22:01.826 [2024-12-11 10:00:11.225084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:22:01.826 [2024-12-11 10:00:11.225096] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
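For reference, the reconnect attempts above fail at the socket layer before any NVMe/TCP PDU exchange: connect() to 10.0.0.2 port 4420 (the conventional NVMe/TCP listener port) returns errno 111, which is ECONNREFUSED on Linux, so controller reinitialization is abandoned and cnode2 is left in a failed state. The same check can be reproduced outside SPDK with a plain socket probe (a sketch; address and port taken from the log):

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        return 1;
    }

    struct sockaddr_in sa = {
        .sin_family = AF_INET,
        .sin_port = htons(4420), /* NVMe/TCP port used throughout this log */
    };
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
        /* With no listener on the target this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}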
00:22:01.826 [2024-12-11 10:00:11.225159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.826 [2024-12-11 10:00:11.225173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 59 further READ command / ABORTED - SQ DELETION (00/08) completion pairs, cid:1-59, lba:24704-32128, elided ...]
00:22:01.828 [2024-12-11 10:00:11.226426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.828 [2024-12-11 10:00:11.226435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.828 [2024-12-11 10:00:11.226446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.828 [2024-12-11 10:00:11.226455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.828 [2024-12-11 10:00:11.226466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.828 [2024-12-11 10:00:11.226475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.828 [2024-12-11 10:00:11.226486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.828 [2024-12-11 10:00:11.226495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.828 [2024-12-11 10:00:11.226505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1227a40 is same with the state(6) to be set
00:22:01.828 [2024-12-11 10:00:11.226664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:22:01.828 [2024-12-11 10:00:11.226677] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:22:01.828 [2024-12-11 10:00:11.226687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:22:01.828 [2024-12-11 10:00:11.226700] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:22:01.828 [2024-12-11 10:00:11.227993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:22:01.828 [2024-12-11 10:00:11.228303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:01.828 [2024-12-11 10:00:11.228322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe17610 with addr=10.0.0.2, port=4420
00:22:01.828 [2024-12-11 10:00:11.228332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe17610 is same with the state(6) to be set
00:22:01.828 [2024-12-11 10:00:11.228623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe17610 (9): Bad file descriptor
00:22:01.828 [2024-12-11 10:00:11.228680] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:22:01.828 [2024-12-11 10:00:11.228690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:22:01.828 [2024-12-11 10:00:11.228699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:22:01.828 [2024-12-11 10:00:11.228708] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
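[editor's note: the "(00/08)" in the completion lines above is the NVMe (status code type / status code) pair: SCT 0x0 (generic command status) with SC 0x08 ("Command Aborted due to SQ Deletion"), i.e. reads still queued on the old submission queue are failed back when the target tears the queue down during the controller reset; errno = 111 on the reconnect attempt is Linux ECONNREFUSED, meaning nothing was accepting connections at 10.0.0.2:4420 at that moment. A minimal, self-contained sketch of decoding that pair from a raw completion Status Field, with the bit layout taken from the NVMe base specification (illustrative only, not SPDK's own structs):]

#include <stdint.h>
#include <stdio.h>

/* Decode the NVMe CQE Status Field (CQE Dword 3 bits 31:16; bit 0 of
 * that half-word is the Phase Tag). The assumed example value 0x0010
 * encodes SCT=0x0 (generic command status) and SC=0x08 (Command Aborted
 * due to SQ Deletion), which SPDK logs as "ABORTED - SQ DELETION (00/08)". */
int main(void)
{
    uint16_t status = 0x0010;             /* assumed example value */

    uint8_t sc  = (status >> 1) & 0xff;   /* Status Code,      bits 8:1  */
    uint8_t sct = (status >> 9) & 0x7;    /* Status Code Type, bits 11:9 */
    uint8_t dnr = (status >> 15) & 0x1;   /* Do Not Retry,     bit 15    */

    printf("(%02x/%02x) dnr:%u\n", sct, sc, dnr);  /* prints "(00/08) dnr:0" */
    return 0;
}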
00:22:01.828 [2024-12-11 10:00:11.229227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1281140 (9): Bad file descriptor
00:22:01.828 [2024-12-11 10:00:11.229253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1291080 (9): Bad file descriptor
00:22:01.828 [2024-12-11 10:00:11.229379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.828 [2024-12-11 10:00:11.229391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 2 further WRITE pairs (cid:62-63, lba:32512-32640) and 61 READ pairs (cid:0-60, lba:24576-32256), all ABORTED - SQ DELETION (00/08), elided ...]
00:22:01.830 [2024-12-11 10:00:11.230498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1027730 is same with the state(6) to be set
00:22:01.830 [2024-12-11 10:00:11.231634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.830 [2024-12-11 10:00:11.231648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 63 further READ / ABORTED - SQ DELETION (00/08) pairs, cid:1-63, lba:24704-32640, elided ...]
00:22:01.831 [2024-12-11 10:00:11.232751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11faa80 is same with the state(6) to be set
00:22:01.831 [2024-12-11 10:00:11.233894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.831 [2024-12-11 10:00:11.233907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 9 further READ / ABORTED - SQ DELETION (00/08) pairs, cid:1-9, lba:16512-17536, elided ...]
00:22:01.831 [2024-12-11 10:00:11.234079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.831 [2024-12-11 10:00:11.234087] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.831 [2024-12-11 10:00:11.234097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.831 [2024-12-11 10:00:11.234104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.831 [2024-12-11 10:00:11.234114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.831 [2024-12-11 10:00:11.234122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.831 [2024-12-11 10:00:11.234131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.831 [2024-12-11 10:00:11.234139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.831 [2024-12-11 10:00:11.234148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.831 [2024-12-11 10:00:11.234156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.831 [2024-12-11 10:00:11.234165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.832 [2024-12-11 10:00:11.234172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.832 [2024-12-11 10:00:11.234182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.832 [2024-12-11 10:00:11.234189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.832 [2024-12-11 10:00:11.234199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.832 [2024-12-11 10:00:11.234206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.832 [2024-12-11 10:00:11.234215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.832 [2024-12-11 10:00:11.234228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.832 [2024-12-11 10:00:11.234237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.832 [2024-12-11 10:00:11.234245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.832 [2024-12-11 10:00:11.234254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.832 [2024-12-11 10:00:11.234262] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.832 [2024-12-11 10:00:11.234273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.832 [2024-12-11 10:00:11.234282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.832 [2024-12-11 10:00:11.234291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.832 [2024-12-11 10:00:11.234299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.832 [2024-12-11 10:00:11.234308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.832 [2024-12-11 10:00:11.234316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.832 [2024-12-11 10:00:11.234325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.832 [2024-12-11 10:00:11.234332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.832 [2024-12-11 10:00:11.234342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.832 [2024-12-11 10:00:11.234350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.832 [2024-12-11 10:00:11.234359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.832 [2024-12-11 10:00:11.234366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.832 [2024-12-11 10:00:11.234376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.832 [2024-12-11 10:00:11.234384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.832 [2024-12-11 10:00:11.234393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.832 [2024-12-11 10:00:11.234401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.832 [2024-12-11 10:00:11.234410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.832 [2024-12-11 10:00:11.234417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.832 [2024-12-11 10:00:11.234427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.832 [2024-12-11 10:00:11.234434] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.832 [2024-12-11 10:00:11.234444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.832 [2024-12-11 10:00:11.234451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.832 [2024-12-11 10:00:11.234461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.832 [2024-12-11 10:00:11.234468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.832 [2024-12-11 10:00:11.234479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.832 [2024-12-11 10:00:11.234488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.832 [2024-12-11 10:00:11.234498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.832 [2024-12-11 10:00:11.234505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.832 [2024-12-11 10:00:11.234515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.832 [2024-12-11 10:00:11.234522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.832 [2024-12-11 10:00:11.234532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.832 [2024-12-11 10:00:11.234539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.832 [2024-12-11 10:00:11.234548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.832 [2024-12-11 10:00:11.234556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.832 [2024-12-11 10:00:11.234566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.832 [2024-12-11 10:00:11.234574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.832 [2024-12-11 10:00:11.234584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.832 [2024-12-11 10:00:11.234592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.832 [2024-12-11 10:00:11.234601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.832 [2024-12-11 10:00:11.234608] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.832 [2024-12-11 10:00:11.234618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.832 [2024-12-11 10:00:11.234626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.832 [2024-12-11 10:00:11.234635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.832 [2024-12-11 10:00:11.234643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.832 [2024-12-11 10:00:11.234653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.832 [2024-12-11 10:00:11.234660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.832 [2024-12-11 10:00:11.234670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.832 [2024-12-11 10:00:11.234677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.832 [2024-12-11 10:00:11.234686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.832 [2024-12-11 10:00:11.234694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.832 [2024-12-11 10:00:11.234705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.832 [2024-12-11 10:00:11.234713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.832 [2024-12-11 10:00:11.234722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.832 [2024-12-11 10:00:11.234731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.832 [2024-12-11 10:00:11.234740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.832 [2024-12-11 10:00:11.234748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.832 [2024-12-11 10:00:11.234757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.832 [2024-12-11 10:00:11.234765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.832 [2024-12-11 10:00:11.234774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.832 [2024-12-11 10:00:11.234782] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-11 10:00:11.234791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-11 10:00:11.234799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-11 10:00:11.234808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-11 10:00:11.234816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-11 10:00:11.234825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-11 10:00:11.234833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-11 10:00:11.234842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-11 10:00:11.234849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-11 10:00:11.234859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-11 10:00:11.234866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-11 10:00:11.234875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-11 10:00:11.234883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-11 10:00:11.234893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-11 10:00:11.234901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-11 10:00:11.234910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-11 10:00:11.234919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-11 10:00:11.234929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-11 10:00:11.234936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-11 10:00:11.234946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-11 10:00:11.234953] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-11 10:00:11.234962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-11 10:00:11.234970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-11 10:00:11.234979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-11 10:00:11.234987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-11 10:00:11.234996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-11 10:00:11.235004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-11 10:00:11.235012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1228d00 is same with the state(6) to be set 00:22:01.833 [2024-12-11 10:00:11.236141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-11 10:00:11.236158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-11 10:00:11.236170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-11 10:00:11.236178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-11 10:00:11.236189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-11 10:00:11.236196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-11 10:00:11.236206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-11 10:00:11.236214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-11 10:00:11.236227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-11 10:00:11.236234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-11 10:00:11.236244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-11 10:00:11.236252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-11 10:00:11.236261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-11 10:00:11.236272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-11 10:00:11.236281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-11 10:00:11.236289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-11 10:00:11.236299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-11 10:00:11.236306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-11 10:00:11.236316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-11 10:00:11.236324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-11 10:00:11.236333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-11 10:00:11.236341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-11 10:00:11.236351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-11 10:00:11.236358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-11 10:00:11.236368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-11 10:00:11.236376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-11 10:00:11.236385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-11 10:00:11.236393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-11 10:00:11.236402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-11 10:00:11.236409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-11 10:00:11.236419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-11 10:00:11.236426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-11 10:00:11.236436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 
lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-11 10:00:11.236443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-11 10:00:11.236453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-11 10:00:11.236460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-11 10:00:11.236470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-11 10:00:11.236477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-11 10:00:11.236489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-11 10:00:11.236496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-11 10:00:11.236506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-11 10:00:11.236514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-11 10:00:11.236524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-11 10:00:11.236531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-11 10:00:11.236540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-11 10:00:11.236548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-11 10:00:11.236558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-11 10:00:11.236565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-11 10:00:11.236574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-11 10:00:11.236582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-11 10:00:11.236592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-11 10:00:11.236600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-11 10:00:11.236609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.833 [2024-12-11 10:00:11.236617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.833 [2024-12-11 10:00:11.236626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-11 10:00:11.236634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-11 10:00:11.236643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-11 10:00:11.236650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-11 10:00:11.236660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-11 10:00:11.236667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-11 10:00:11.236677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-11 10:00:11.236684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-11 10:00:11.236694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-11 10:00:11.236703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-11 10:00:11.236712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-11 10:00:11.236720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-11 10:00:11.236729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-11 10:00:11.236737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-11 10:00:11.236747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-11 10:00:11.236756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-11 10:00:11.236766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-11 10:00:11.236773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-11 10:00:11.236783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:01.834 [2024-12-11 10:00:11.236791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-11 10:00:11.236801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-11 10:00:11.236808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-11 10:00:11.236819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-11 10:00:11.236826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-11 10:00:11.236835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-11 10:00:11.236843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-11 10:00:11.236852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-11 10:00:11.236860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-11 10:00:11.236869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-11 10:00:11.236876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-11 10:00:11.236886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-11 10:00:11.236893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-11 10:00:11.236903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-11 10:00:11.236911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-11 10:00:11.236923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-11 10:00:11.236931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-11 10:00:11.236941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-11 10:00:11.236948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-11 10:00:11.236958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:01.834 [2024-12-11 10:00:11.236966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-11 10:00:11.236975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-11 10:00:11.236983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-11 10:00:11.236993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-11 10:00:11.237001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-11 10:00:11.237010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-11 10:00:11.237018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-11 10:00:11.237027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-11 10:00:11.237036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-11 10:00:11.237045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-11 10:00:11.237053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-11 10:00:11.237062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-11 10:00:11.237070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-11 10:00:11.237080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-11 10:00:11.237087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-11 10:00:11.237097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-11 10:00:11.237104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-11 10:00:11.237114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-11 10:00:11.237122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-11 10:00:11.237132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-11 
10:00:11.237141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-11 10:00:11.237151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-11 10:00:11.237158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-11 10:00:11.237168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-11 10:00:11.237176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-11 10:00:11.237185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-11 10:00:11.237193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-11 10:00:11.237202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-11 10:00:11.237209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-11 10:00:11.237222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-11 10:00:11.237230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-11 10:00:11.237240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-11 10:00:11.237247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-11 10:00:11.237257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-11 10:00:11.237266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-11 10:00:11.237275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x122a080 is same with the state(6) to be set 00:22:01.834 [2024-12-11 10:00:11.238376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-11 10:00:11.238392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-11 10:00:11.238402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-11 10:00:11.238410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-11 10:00:11.238419] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.834 [2024-12-11 10:00:11.238426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.834 [2024-12-11 10:00:11.238434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.835 [2024-12-11 10:00:11.238440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 [2024-12-11 10:00:11.238449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.835 [2024-12-11 10:00:11.238459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 [2024-12-11 10:00:11.238467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.835 [2024-12-11 10:00:11.238474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 [2024-12-11 10:00:11.238482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.835 [2024-12-11 10:00:11.238488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 [2024-12-11 10:00:11.238497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.835 [2024-12-11 10:00:11.238503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 [2024-12-11 10:00:11.238512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.835 [2024-12-11 10:00:11.238519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 [2024-12-11 10:00:11.238527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.835 [2024-12-11 10:00:11.238534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 [2024-12-11 10:00:11.238542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.835 [2024-12-11 10:00:11.238548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 [2024-12-11 10:00:11.238556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.835 [2024-12-11 10:00:11.238563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 [2024-12-11 10:00:11.238572] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.835 [2024-12-11 10:00:11.238578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 [2024-12-11 10:00:11.238586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.835 [2024-12-11 10:00:11.238593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 [2024-12-11 10:00:11.238601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.835 [2024-12-11 10:00:11.238608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 [2024-12-11 10:00:11.238616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.835 [2024-12-11 10:00:11.238623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 [2024-12-11 10:00:11.238632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.835 [2024-12-11 10:00:11.238638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 [2024-12-11 10:00:11.238648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.835 [2024-12-11 10:00:11.238655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 [2024-12-11 10:00:11.238663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.835 [2024-12-11 10:00:11.238670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 [2024-12-11 10:00:11.238678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.835 [2024-12-11 10:00:11.238686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 [2024-12-11 10:00:11.238694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.835 [2024-12-11 10:00:11.238701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 [2024-12-11 10:00:11.238709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.835 [2024-12-11 10:00:11.238716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.835 [2024-12-11 10:00:11.238724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
[log condensed: repeated NOTICE pairs from nvme_qpair.c — READ sqid:1 cid:22..63 nsid:1 lba:19200..24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:22:01.836 [2024-12-11 10:00:11.239357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10682e0 is same with the state(6) to be set
00:22:01.836 [2024-12-11 10:00:11.240298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:22:01.836 [2024-12-11 10:00:11.240318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:22:01.836 [2024-12-11 10:00:11.240329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:22:01.836 [2024-12-11 10:00:11.240339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:22:01.836 [2024-12-11 10:00:11.240398] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:22:01.836 [2024-12-11 10:00:11.240475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:22:01.836 [2024-12-11 10:00:11.240719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:01.836 [2024-12-11 10:00:11.240741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe237d0 with addr=10.0.0.2, port=4420
00:22:01.836 [2024-12-11 10:00:11.240750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe237d0 is same with the state(6) to be set
00:22:01.836 [2024-12-11 10:00:11.240946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:01.836 [2024-12-11 10:00:11.240958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe23340 with addr=10.0.0.2, port=4420
00:22:01.836 [2024-12-11 10:00:11.240965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe23340 is same with the state(6) to be set
00:22:01.836 [2024-12-11 10:00:11.241227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:01.836 [2024-12-11 10:00:11.241240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d8a0 with addr=10.0.0.2, port=4420
00:22:01.836 [2024-12-11 10:00:11.241246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x124d8a0 is same with the state(6) to be set
00:22:01.836 [2024-12-11 10:00:11.241411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:01.836 [2024-12-11 10:00:11.241422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12438d0 with addr=10.0.0.2, port=4420
00:22:01.836 [2024-12-11 10:00:11.241429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12438d0 is same with the state(6) to be set
[log condensed: 64 repeated NOTICE pairs from nvme_qpair.c — READ sqid:1 cid:0..63 nsid:1 lba:16384..24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:22:01.838 [2024-12-11 10:00:11.243315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2172520 is same with the state(6) to be set
[log condensed: 64 repeated NOTICE pairs from nvme_qpair.c — READ sqid:1 cid:0..63 nsid:1 lba:16384..24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
10:00:11.245146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.839 [2024-12-11 10:00:11.245155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.839 [2024-12-11 10:00:11.245161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.839 [2024-12-11 10:00:11.245169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.839 [2024-12-11 10:00:11.245176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.839 [2024-12-11 10:00:11.245184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.839 [2024-12-11 10:00:11.245190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.839 [2024-12-11 10:00:11.245199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.839 [2024-12-11 10:00:11.245205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.839 [2024-12-11 10:00:11.245214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.839 [2024-12-11 10:00:11.245225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.839 [2024-12-11 10:00:11.245234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.839 [2024-12-11 10:00:11.245240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.839 [2024-12-11 10:00:11.245248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.839 [2024-12-11 10:00:11.245255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.839 [2024-12-11 10:00:11.245263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1066fb0 is same with the state(6) to be set 00:22:01.839 [2024-12-11 10:00:11.246431] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:22:01.839 [2024-12-11 10:00:11.246446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:22:01.839 [2024-12-11 10:00:11.246457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:22:01.839 [2024-12-11 10:00:11.246468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:22:01.840 task offset: 28416 on job bdev=Nvme2n1 fails 00:22:01.840 00:22:01.840 Latency(us) 00:22:01.840 [2024-12-11T09:00:11.415Z] Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max
00:22:01.840 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:01.840 Job: Nvme1n1 ended in about 0.81 seconds with error
00:22:01.840 Verification LBA range: start 0x0 length 0x400
00:22:01.840 Nvme1n1 : 0.81 238.33 14.90 79.44 0.00 198915.90 15791.06 213709.78
00:22:01.840 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:01.840 Job: Nvme2n1 ended in about 0.79 seconds with error
00:22:01.840 Verification LBA range: start 0x0 length 0x400
00:22:01.840 Nvme2n1 : 0.79 241.56 15.10 80.52 0.00 192322.56 15666.22 198730.12
00:22:01.840 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:01.840 Job: Nvme3n1 ended in about 0.81 seconds with error
00:22:01.840 Verification LBA range: start 0x0 length 0x400
00:22:01.840 Nvme3n1 : 0.81 237.66 14.85 79.22 0.00 191737.05 14667.58 211712.49
00:22:01.840 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:01.840 Job: Nvme4n1 ended in about 0.80 seconds with error
00:22:01.840 Verification LBA range: start 0x0 length 0x400
00:22:01.840 Nvme4n1 : 0.80 239.43 14.96 79.81 0.00 186341.91 3229.99 201726.05
00:22:01.840 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:01.840 Job: Nvme5n1 ended in about 0.81 seconds with error
00:22:01.840 Verification LBA range: start 0x0 length 0x400
00:22:01.840 Nvme5n1 : 0.81 158.00 9.87 79.00 0.00 246171.55 28711.01 202724.69
00:22:01.840 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:01.840 Job: Nvme6n1 ended in about 0.81 seconds with error
00:22:01.840 Verification LBA range: start 0x0 length 0x400
00:22:01.840 Nvme6n1 : 0.81 157.56 9.85 78.78 0.00 241807.69 16852.11 216705.71
00:22:01.840 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:01.840 Job: Nvme7n1 ended in about 0.80 seconds with error
00:22:01.840 Verification LBA range: start 0x0 length 0x400
00:22:01.840 Nvme7n1 : 0.80 241.13 15.07 80.38 0.00 173335.41 17101.78 215707.06
00:22:01.840 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:01.840 Job: Nvme8n1 ended in about 0.82 seconds with error
00:22:01.840 Verification LBA range: start 0x0 length 0x400
00:22:01.840 Nvme8n1 : 0.82 156.41 9.78 78.20 0.00 233434.78 13793.77 248662.31
00:22:01.840 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:01.840 Job: Nvme9n1 ended in about 0.82 seconds with error
00:22:01.840 Verification LBA range: start 0x0 length 0x400
00:22:01.840 Nvme9n1 : 0.82 156.04 9.75 78.02 0.00 228981.03 25340.59 219701.64
00:22:01.840 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:01.840 Job: Nvme10n1 ended in about 0.81 seconds with error
00:22:01.840 Verification LBA range: start 0x0 length 0x400
00:22:01.840 Nvme10n1 : 0.81 157.17 9.82 78.58 0.00 221946.64 17725.93 234681.30
00:22:01.840 [2024-12-11T09:00:11.415Z] ===================================================================================================================
00:22:01.840 [2024-12-11T09:00:11.415Z] Total : 1983.29 123.96 791.96 0.00 208218.18 3229.99 248662.31
00:22:01.840 [2024-12-11 10:00:11.277912] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:22:01.840 [2024-12-11 10:00:11.277960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:22:01.840 [2024-12-11 10:00:11.278318] posix.c:1054:posix_sock_create: *ERROR*:
connect() failed, errno = 111 00:22:01.840 [2024-12-11 10:00:11.278336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1281360 with addr=10.0.0.2, port=4420 00:22:01.840 [2024-12-11 10:00:11.278347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1281360 is same with the state(6) to be set 00:22:01.840 [2024-12-11 10:00:11.278362] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe237d0 (9): Bad file descriptor 00:22:01.840 [2024-12-11 10:00:11.278373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe23340 (9): Bad file descriptor 00:22:01.840 [2024-12-11 10:00:11.278382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x124d8a0 (9): Bad file descriptor 00:22:01.840 [2024-12-11 10:00:11.278392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12438d0 (9): Bad file descriptor 00:22:01.840 [2024-12-11 10:00:11.278699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.840 [2024-12-11 10:00:11.278714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe17810 with addr=10.0.0.2, port=4420 00:22:01.840 [2024-12-11 10:00:11.278723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe17810 is same with the state(6) to be set 00:22:01.840 [2024-12-11 10:00:11.278870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.840 [2024-12-11 10:00:11.278882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd38610 with addr=10.0.0.2, port=4420 00:22:01.840 [2024-12-11 10:00:11.278889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd38610 is same with the state(6) to be set 00:22:01.840 [2024-12-11 10:00:11.279086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.840 [2024-12-11 10:00:11.279097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe17610 with addr=10.0.0.2, port=4420 00:22:01.840 [2024-12-11 10:00:11.279103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe17610 is same with the state(6) to be set 00:22:01.840 [2024-12-11 10:00:11.279299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.840 [2024-12-11 10:00:11.279310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1291080 with addr=10.0.0.2, port=4420 00:22:01.840 [2024-12-11 10:00:11.279318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1291080 is same with the state(6) to be set 00:22:01.840 [2024-12-11 10:00:11.279529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.840 [2024-12-11 10:00:11.279541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1281140 with addr=10.0.0.2, port=4420 00:22:01.840 [2024-12-11 10:00:11.279548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1281140 is same with the state(6) to be set 00:22:01.840 [2024-12-11 10:00:11.279557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1281360 (9): Bad file descriptor 00:22:01.840 [2024-12-11 10:00:11.279565] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in 
error state 00:22:01.840 [2024-12-11 10:00:11.279572] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:01.840 [2024-12-11 10:00:11.279580] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:01.840 [2024-12-11 10:00:11.279589] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:22:01.840 [2024-12-11 10:00:11.279598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:22:01.840 [2024-12-11 10:00:11.279604] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:22:01.840 [2024-12-11 10:00:11.279612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:22:01.840 [2024-12-11 10:00:11.279618] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:22:01.840 [2024-12-11 10:00:11.279625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:22:01.840 [2024-12-11 10:00:11.279631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:22:01.840 [2024-12-11 10:00:11.279637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:22:01.840 [2024-12-11 10:00:11.279643] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:22:01.840 [2024-12-11 10:00:11.279650] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:22:01.840 [2024-12-11 10:00:11.279656] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:22:01.840 [2024-12-11 10:00:11.279662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:22:01.840 [2024-12-11 10:00:11.279668] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:22:01.840 [2024-12-11 10:00:11.279717] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 
00:22:01.840 [2024-12-11 10:00:11.280252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe17810 (9): Bad file descriptor 00:22:01.840 [2024-12-11 10:00:11.280266] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd38610 (9): Bad file descriptor 00:22:01.840 [2024-12-11 10:00:11.280275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe17610 (9): Bad file descriptor 00:22:01.840 [2024-12-11 10:00:11.280283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1291080 (9): Bad file descriptor 00:22:01.840 [2024-12-11 10:00:11.280292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1281140 (9): Bad file descriptor 00:22:01.840 [2024-12-11 10:00:11.280299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:22:01.840 [2024-12-11 10:00:11.280305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:22:01.840 [2024-12-11 10:00:11.280312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:22:01.840 [2024-12-11 10:00:11.280318] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:22:01.840 [2024-12-11 10:00:11.280562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:22:01.840 [2024-12-11 10:00:11.280575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:22:01.840 [2024-12-11 10:00:11.280584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:22:01.840 [2024-12-11 10:00:11.280591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:01.840 [2024-12-11 10:00:11.280620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:22:01.840 [2024-12-11 10:00:11.280627] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:22:01.840 [2024-12-11 10:00:11.280633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:22:01.840 [2024-12-11 10:00:11.280639] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:22:01.840 [2024-12-11 10:00:11.280646] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:22:01.840 [2024-12-11 10:00:11.280652] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:22:01.840 [2024-12-11 10:00:11.280659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:22:01.841 [2024-12-11 10:00:11.280665] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 
00:22:01.841 [2024-12-11 10:00:11.280672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:22:01.841 [2024-12-11 10:00:11.280677] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:22:01.841 [2024-12-11 10:00:11.280684] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:22:01.841 [2024-12-11 10:00:11.280689] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:22:01.841 [2024-12-11 10:00:11.280696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:22:01.841 [2024-12-11 10:00:11.280702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:22:01.841 [2024-12-11 10:00:11.280711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:22:01.841 [2024-12-11 10:00:11.280717] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:22:01.841 [2024-12-11 10:00:11.280723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:22:01.841 [2024-12-11 10:00:11.280729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:22:01.841 [2024-12-11 10:00:11.280735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:22:01.841 [2024-12-11 10:00:11.280741] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
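The four-line pattern repeated above for each subsystem -- nvme_ctrlr_process_init sees the controller in error state, spdk_nvme_ctrlr_reconnect_poll_async gives up, nvme_ctrlr_fail marks it failed, and bdev_nvme_reset_ctrlr_complete logs the failed reset -- is the bdev_nvme layer abandoning reconnects once connect() keeps returning errno = 111 (ECONNREFUSED): the target in this shutdown test has already stopped listening on 10.0.0.2:4420. A minimal way to confirm that condition from the shell, assuming bash's /dev/tcp support (this probe is not part of shutdown.sh):

  # Hypothetical probe, not from the suite: a refused connect here is the
  # same ECONNREFUSED (errno 111) that posix_sock_create reports above.
  if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
    echo "10.0.0.2:4420 refused - target is gone, controller resets cannot succeed"
  fi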
00:22:01.841 [2024-12-11 10:00:11.281007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.841 [2024-12-11 10:00:11.281021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12438d0 with addr=10.0.0.2, port=4420 00:22:01.841 [2024-12-11 10:00:11.281028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12438d0 is same with the state(6) to be set 00:22:01.841 [2024-12-11 10:00:11.281167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.841 [2024-12-11 10:00:11.281177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d8a0 with addr=10.0.0.2, port=4420 00:22:01.841 [2024-12-11 10:00:11.281184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x124d8a0 is same with the state(6) to be set 00:22:01.841 [2024-12-11 10:00:11.281399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.841 [2024-12-11 10:00:11.281410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe23340 with addr=10.0.0.2, port=4420 00:22:01.841 [2024-12-11 10:00:11.281417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe23340 is same with the state(6) to be set 00:22:01.841 [2024-12-11 10:00:11.281558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.841 [2024-12-11 10:00:11.281568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe237d0 with addr=10.0.0.2, port=4420 00:22:01.841 [2024-12-11 10:00:11.281575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe237d0 is same with the state(6) to be set 00:22:01.841 [2024-12-11 10:00:11.281604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12438d0 (9): Bad file descriptor 00:22:01.841 [2024-12-11 10:00:11.281614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x124d8a0 (9): Bad file descriptor 00:22:01.841 [2024-12-11 10:00:11.281623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe23340 (9): Bad file descriptor 00:22:01.841 [2024-12-11 10:00:11.281632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe237d0 (9): Bad file descriptor 00:22:01.841 [2024-12-11 10:00:11.281656] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:22:01.841 [2024-12-11 10:00:11.281664] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:22:01.841 [2024-12-11 10:00:11.281670] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:22:01.841 [2024-12-11 10:00:11.281677] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:22:01.841 [2024-12-11 10:00:11.281684] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:22:01.841 [2024-12-11 10:00:11.281690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:22:01.841 [2024-12-11 10:00:11.281695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 
00:22:01.841 [2024-12-11 10:00:11.281701] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:22:01.841 [2024-12-11 10:00:11.281711] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:22:01.841 [2024-12-11 10:00:11.281717] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:22:01.841 [2024-12-11 10:00:11.281723] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:22:01.841 [2024-12-11 10:00:11.281729] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:22:01.841 [2024-12-11 10:00:11.281736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:01.841 [2024-12-11 10:00:11.281741] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:01.841 [2024-12-11 10:00:11.281748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:01.841 [2024-12-11 10:00:11.281753] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:22:02.100 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:22:03.037 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 142401 00:22:03.037 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:22:03.037 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 142401 00:22:03.037 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:22:03.037 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:03.037 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:22:03.037 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:03.037 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 142401 00:22:03.037 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:22:03.037 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:03.037 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:22:03.037 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:22:03.038 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:22:03.038 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:03.038 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:22:03.296 10:00:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:03.296 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:03.296 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:03.296 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:03.296 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:03.296 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:22:03.297 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:03.297 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:22:03.297 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:03.297 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:03.297 rmmod nvme_tcp 00:22:03.297 rmmod nvme_fabrics 00:22:03.297 rmmod nvme_keyring 00:22:03.297 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:03.297 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:22:03.297 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:22:03.297 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 141957 ']' 00:22:03.297 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 141957 00:22:03.297 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 141957 ']' 00:22:03.297 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 141957 00:22:03.297 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (141957) - No such process 00:22:03.297 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 141957 is not found' 00:22:03.297 Process with pid 141957 is not found 00:22:03.297 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:03.297 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:03.297 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:03.297 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:22:03.297 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:22:03.297 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:03.297 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # 
iptables-restore 00:22:03.297 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:03.297 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:03.297 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:03.297 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:03.297 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:05.202 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:05.461 00:22:05.461 real 0m7.780s 00:22:05.461 user 0m18.987s 00:22:05.461 sys 0m1.343s 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:05.462 ************************************ 00:22:05.462 END TEST nvmf_shutdown_tc3 00:22:05.462 ************************************ 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:05.462 ************************************ 00:22:05.462 START TEST nvmf_shutdown_tc4 00:22:05.462 ************************************ 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:05.462 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:05.462 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:05.462 Found net devices under 0000:af:00.0: cvl_0_0 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:05.462 Found net devices under 0000:af:00.1: cvl_0_1 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:05.462 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:05.462 10:00:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:05.463 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:05.463 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:05.463 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:05.463 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:05.463 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:05.463 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:05.463 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:05.463 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:05.463 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:05.463 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:05.463 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:05.463 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:05.463 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:05.463 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:05.463 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:05.721 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:05.721 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:05.721 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:05.721 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:05.721 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:05.721 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:05.721 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:05.721 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:05.721 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:22:05.721 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.293 ms 00:22:05.721 00:22:05.721 --- 10.0.0.2 ping statistics --- 00:22:05.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:05.721 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:22:05.721 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:05.721 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:05.721 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:22:05.721 00:22:05.721 --- 10.0.0.1 ping statistics --- 00:22:05.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:05.721 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:22:05.721 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:05.721 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:22:05.721 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:05.721 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:05.721 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:05.721 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:05.721 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:05.721 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:05.721 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:05.721 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:05.721 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:05.721 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:05.721 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:05.721 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=143798 00:22:05.721 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 143798 00:22:05.721 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:05.721 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 143798 ']' 00:22:05.721 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:05.721 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:05.721 10:00:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:05.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:05.721 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:05.721 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:05.721 [2024-12-11 10:00:15.268184] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:22:05.721 [2024-12-11 10:00:15.268241] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:05.979 [2024-12-11 10:00:15.353010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:05.979 [2024-12-11 10:00:15.392391] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:05.979 [2024-12-11 10:00:15.392429] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:05.979 [2024-12-11 10:00:15.392437] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:05.979 [2024-12-11 10:00:15.392444] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:05.979 [2024-12-11 10:00:15.392449] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:05.979 [2024-12-11 10:00:15.393895] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:05.979 [2024-12-11 10:00:15.394005] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:22:05.979 [2024-12-11 10:00:15.394113] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:05.979 [2024-12-11 10:00:15.394115] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:22:06.544 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:06.544 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:22:06.544 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:06.544 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:06.544 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:06.802 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:06.802 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:06.802 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.802 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:06.802 [2024-12-11 10:00:16.152301] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:06.802 10:00:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.802 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:06.802 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:06.802 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:06.802 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:06.802 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:06.802 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:06.802 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:06.802 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:06.802 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:06.802 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:06.802 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:06.802 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:06.802 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:06.802 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:06.802 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:06.802 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:06.802 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:06.802 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:06.802 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:06.802 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:06.802 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:06.802 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:06.802 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:06.802 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:06.802 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:06.802 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:06.802 
10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.802 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:06.802 Malloc1 00:22:06.802 [2024-12-11 10:00:16.259347] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:06.802 Malloc2 00:22:06.802 Malloc3 00:22:06.802 Malloc4 00:22:07.060 Malloc5 00:22:07.060 Malloc6 00:22:07.060 Malloc7 00:22:07.060 Malloc8 00:22:07.060 Malloc9 00:22:07.060 Malloc10 00:22:07.318 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.318 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:07.318 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:07.318 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:07.318 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=144110 00:22:07.318 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:22:07.318 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:22:07.318 [2024-12-11 10:00:16.765317] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
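The per-subsystem configuration is assembled by the cat loop above into rpcs.txt, so its RPCs are not echoed in the trace; only the transport creation (nvmf_create_transport -t tcp -o -u 8192), the resulting Malloc1 through Malloc10 bdevs, and the listener on 10.0.0.2 port 4420 are visible. A hedged sketch of what one loop iteration typically issues via SPDK's scripts/rpc.py -- the NQN, serial number, and Malloc geometry below are illustrative assumptions, not values read from this trace:

  # Illustrative reconstruction of one subsystem's setup (values assumed);
  # the transport itself was already created by the rpc_cmd call above.
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420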
00:22:12.591 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:12.591 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 143798 00:22:12.591 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 143798 ']' 00:22:12.591 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 143798 00:22:12.591 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:22:12.591 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:12.591 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 143798 00:22:12.591 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:12.591 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:12.591 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 143798' 00:22:12.591 killing process with pid 143798 00:22:12.591 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 143798 00:22:12.591 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 143798 00:22:12.591 [2024-12-11 10:00:21.765094] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a68a0 is same with the state(6) to be set 00:22:12.591 [2024-12-11 10:00:21.765142] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a68a0 is same with the state(6) to be set 00:22:12.591 [2024-12-11 10:00:21.765150] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a68a0 is same with the state(6) to be set 00:22:12.591 [2024-12-11 10:00:21.765157] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a68a0 is same with the state(6) to be set 00:22:12.591 [2024-12-11 10:00:21.765164] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a68a0 is same with the state(6) to be set 00:22:12.591 [2024-12-11 10:00:21.765171] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a68a0 is same with the state(6) to be set 00:22:12.591 [2024-12-11 10:00:21.765177] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a68a0 is same with the state(6) to be set 00:22:12.591 [2024-12-11 10:00:21.765183] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a68a0 is same with the state(6) to be set 00:22:12.591 Write completed with error (sct=0, sc=8) 00:22:12.591 Write completed with error (sct=0, sc=8) 00:22:12.591 Write completed with error (sct=0, sc=8) 00:22:12.591 starting I/O failed: -6 00:22:12.591 Write completed with error (sct=0, sc=8) 00:22:12.591 Write completed with error (sct=0, sc=8) 00:22:12.591 Write completed with error (sct=0, sc=8) 00:22:12.591 Write completed with error (sct=0, sc=8) 00:22:12.591 starting I/O failed: -6 
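In this completion storm, (sct=0, sc=8) is the NVMe generic status Command Aborted due to SQ Deletion -- the same (00/08) code spdk_nvme_print_completion spelled out during tc3 -- and "starting I/O failed: -6" is the transport-level CQ error once the just-killed target stops answering. A small lookup for the status pair this run prints (hedged helper, not part of the suite):

  # Hypothetical triage helper: decode the sct/sc pairs seen in this log.
  decode_nvme_status() {
    case "$1/$2" in
      0/0) echo "SUCCESS" ;;
      0/8) echo "ABORTED - SQ DELETION (generic status 00/08)" ;;
      *)   echo "other - see NVMe base spec Generic Command Status values" ;;
    esac
  }
  decode_nvme_status 0 8   # -> ABORTED - SQ DELETION (generic status 00/08)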
00:22:12.591 [2024-12-11 10:00:21.765799] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a6d70 is same with the state(6) to be set
00:22:12.591 [... previous message repeated 9 more times for tqpair=0x11a6d70 ...]
00:22:12.591 [2024-12-11 10:00:21.766012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:12.592 [2024-12-11 10:00:21.766528] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a7240 is same with the state(6) to be set
00:22:12.592 [... previous message repeated 5 more times for tqpair=0x11a7240 ...]
00:22:12.592 [2024-12-11 10:00:21.766950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:12.592 [2024-12-11 10:00:21.767958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:12.592 [... failed-write completions ("Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6") continue between the error lines above ...]
00:22:12.593 [2024-12-11 10:00:21.769618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:12.593 NVMe io qpair process completion error
00:22:12.593 [2024-12-11 10:00:21.769877] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a8f20 is same with the state(6) to be set
00:22:12.593 [... previous message repeated 6 more times for tqpair=0x11a8f20 ...]
00:22:12.593 [2024-12-11 10:00:21.770240] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a93f0 is same with the state(6) to be set
00:22:12.593 [... previous message repeated 7 more times for tqpair=0x11a93f0 ...]
00:22:12.593 [2024-12-11 10:00:21.770637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:12.593 [2024-12-11 10:00:21.770961] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a98c0 is same with the state(6) to be set
00:22:12.593 [... previous message repeated 7 more times for tqpair=0x11a98c0 ...]
00:22:12.593 [2024-12-11 10:00:21.771283] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a8a50 is same with the state(6) to be set
00:22:12.593 [... previous message repeated 6 more times for tqpair=0x11a8a50 ...]
00:22:12.593 [... failed-write completions continue between the error lines above ...]
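Every flooded completion line carries the same NVMe status, sct=0 (generic command status) with sc=8, which the NVMe base spec lists as "Command Aborted due to SQ Deletion"; that is consistent with queued writes being aborted while the target's qpairs are deleted mid-shutdown. When triaging a flood like this, a short filter condenses it to the distinct failures. A sketch, assuming the console output was captured to a file (build.log is a hypothetical name):

    #!/usr/bin/env bash
    # Sketch: condense a flooded nvmf shutdown log to its distinct failures.
    log=${1:-build.log}    # hypothetical capture of the console output above

    # Count the aborted write completions (sct=0, sc=8).
    grep -c 'Write completed with error (sct=0, sc=8)' "$log"

    # One line per distinct qpair-level transport error, most frequent first.
    grep -oE '\[nqn\.[^]]+\] CQ transport error [-0-9]+ \([^)]+\) on qpair id [0-9]+' "$log" |
        sort | uniq -c | sort -rn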
00:22:12.594 [2024-12-11 10:00:21.771478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:12.594 [2024-12-11 10:00:21.772484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:12.594 [2024-12-11 10:00:21.773977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:12.594 NVMe io qpair process completion error
00:22:12.594 [... failed-write completions continue between the error lines above ...]
00:22:12.595 [2024-12-11 10:00:21.774933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:12.595 [2024-12-11 10:00:21.775815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:12.596 [2024-12-11 10:00:21.776840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:12.596 [2024-12-11 10:00:21.778609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:12.596 NVMe io qpair process completion error
00:22:12.596 [... failed-write completions continue between the error lines above ...]
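On the initiator side, spdk_nvme_qpair_process_completions surfaces the poll failure as errno -6 together with its strerror text: errno 6 on Linux is ENXIO, "No such device or address", i.e. the transport connection behind the qpair no longer exists. Two quick ways to confirm that mapping on a Linux box (a sketch; errno(1) comes from moreutils and may not be installed):

    # Sketch: confirm that errno 6 is ENXIO ("No such device or address").
    errno 6                                              # moreutils, if present
    grep -w ENXIO /usr/include/asm-generic/errno-base.h  # kernel uapi header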
00:22:12.596 [2024-12-11 10:00:21.779519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:12.597 [2024-12-11 10:00:21.780436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:12.597 [... remaining failed-write completions truncated in this excerpt ...]
sc=8) 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 [2024-12-11 10:00:21.781450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 
00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.597 Write completed with error (sct=0, sc=8) 00:22:12.597 starting I/O failed: -6 00:22:12.598 [2024-12-11 10:00:21.783361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:12.598 NVMe io qpair process completion error 00:22:12.598 Write completed with error (sct=0, sc=8) 00:22:12.598 Write completed with error (sct=0, sc=8) 00:22:12.598 starting I/O failed: -6 00:22:12.598 Write completed with error (sct=0, sc=8) 
00:22:12.598 [2024-12-11 10:00:21.784370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error entries elided ...]
00:22:12.598 [2024-12-11 10:00:21.785288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-error entries elided ...]
00:22:12.598 [2024-12-11 10:00:21.786263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error entries elided ...]
00:22:12.599 [2024-12-11 10:00:21.788097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:12.599 NVMe io qpair process completion error
[... repeated write-error entries elided ...]
00:22:12.599 [2024-12-11 10:00:21.789174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error entries elided ...]
00:22:12.599 [2024-12-11 10:00:21.789995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-error entries elided ...]
00:22:12.600 [2024-12-11 10:00:21.790982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error entries elided ...]
00:22:12.600 [2024-12-11 10:00:21.793611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:12.600 NVMe io qpair process completion error
[... repeated write-error entries elided ...]
00:22:12.601 [2024-12-11 10:00:21.794499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error entries elided ...]
00:22:12.601 [2024-12-11 10:00:21.795382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-error entries elided ...]
00:22:12.601 [2024-12-11 10:00:21.796639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error entries elided ...]
00:22:12.602 [2024-12-11 10:00:21.799587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:12.602 NVMe io qpair process completion error
[... repeated write-error entries elided ...]
00:22:12.602 [2024-12-11 10:00:21.800588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error entries elided ...]
error (sct=0, sc=8) 00:22:12.602 starting I/O failed: -6 00:22:12.602 Write completed with error (sct=0, sc=8) 00:22:12.602 starting I/O failed: -6 00:22:12.602 Write completed with error (sct=0, sc=8) 00:22:12.602 Write completed with error (sct=0, sc=8) 00:22:12.602 Write completed with error (sct=0, sc=8) 00:22:12.602 starting I/O failed: -6 00:22:12.602 Write completed with error (sct=0, sc=8) 00:22:12.602 starting I/O failed: -6 00:22:12.602 Write completed with error (sct=0, sc=8) 00:22:12.602 Write completed with error (sct=0, sc=8) 00:22:12.602 Write completed with error (sct=0, sc=8) 00:22:12.602 starting I/O failed: -6 00:22:12.602 Write completed with error (sct=0, sc=8) 00:22:12.602 starting I/O failed: -6 00:22:12.602 Write completed with error (sct=0, sc=8) 00:22:12.602 Write completed with error (sct=0, sc=8) 00:22:12.602 Write completed with error (sct=0, sc=8) 00:22:12.602 starting I/O failed: -6 00:22:12.602 Write completed with error (sct=0, sc=8) 00:22:12.602 starting I/O failed: -6 00:22:12.602 Write completed with error (sct=0, sc=8) 00:22:12.602 Write completed with error (sct=0, sc=8) 00:22:12.602 Write completed with error (sct=0, sc=8) 00:22:12.602 starting I/O failed: -6 00:22:12.602 Write completed with error (sct=0, sc=8) 00:22:12.602 starting I/O failed: -6 00:22:12.602 Write completed with error (sct=0, sc=8) 00:22:12.602 Write completed with error (sct=0, sc=8) 00:22:12.602 Write completed with error (sct=0, sc=8) 00:22:12.602 starting I/O failed: -6 00:22:12.602 Write completed with error (sct=0, sc=8) 00:22:12.602 starting I/O failed: -6 00:22:12.602 Write completed with error (sct=0, sc=8) 00:22:12.602 Write completed with error (sct=0, sc=8) 00:22:12.602 Write completed with error (sct=0, sc=8) 00:22:12.602 starting I/O failed: -6 00:22:12.602 Write completed with error (sct=0, sc=8) 00:22:12.602 starting I/O failed: -6 00:22:12.602 Write completed with error (sct=0, sc=8) 00:22:12.602 Write completed with error (sct=0, sc=8) 00:22:12.602 Write completed with error (sct=0, sc=8) 00:22:12.602 starting I/O failed: -6 00:22:12.602 Write completed with error (sct=0, sc=8) 00:22:12.602 starting I/O failed: -6 00:22:12.602 Write completed with error (sct=0, sc=8) 00:22:12.602 Write completed with error (sct=0, sc=8) 00:22:12.602 Write completed with error (sct=0, sc=8) 00:22:12.602 starting I/O failed: -6 00:22:12.602 Write completed with error (sct=0, sc=8) 00:22:12.602 starting I/O failed: -6 00:22:12.602 Write completed with error (sct=0, sc=8) 00:22:12.602 Write completed with error (sct=0, sc=8) 00:22:12.602 Write completed with error (sct=0, sc=8) 00:22:12.602 starting I/O failed: -6 00:22:12.602 Write completed with error (sct=0, sc=8) 00:22:12.602 starting I/O failed: -6 00:22:12.602 Write completed with error (sct=0, sc=8) 00:22:12.602 [2024-12-11 10:00:21.801525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:12.602 Write completed with error (sct=0, sc=8) 00:22:12.602 starting I/O failed: -6 00:22:12.602 Write completed with error (sct=0, sc=8) 00:22:12.602 starting I/O failed: -6 00:22:12.602 Write completed with error (sct=0, sc=8) 00:22:12.602 starting I/O failed: -6 00:22:12.602 Write completed with error (sct=0, sc=8) 00:22:12.602 Write completed with error (sct=0, sc=8) 00:22:12.602 starting I/O failed: -6 00:22:12.602 Write completed with error (sct=0, sc=8) 00:22:12.602 starting I/O failed: -6 00:22:12.602 Write completed 
with error (sct=0, sc=8) 00:22:12.602 starting I/O failed: -6 00:22:12.602 Write completed with error (sct=0, sc=8) 00:22:12.602 Write completed with error (sct=0, sc=8) 00:22:12.602 starting I/O failed: -6 00:22:12.602 Write completed with error (sct=0, sc=8) 00:22:12.602 starting I/O failed: -6 00:22:12.602 Write completed with error (sct=0, sc=8) 00:22:12.602 starting I/O failed: -6 00:22:12.602 Write completed with error (sct=0, sc=8) 00:22:12.602 Write completed with error (sct=0, sc=8) 00:22:12.602 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 [2024-12-11 10:00:21.802517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:12.603 Write 
completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write 
completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 [2024-12-11 10:00:21.804498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:12.603 NVMe io qpair process completion error 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 Write completed with 
error (sct=0, sc=8) 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 starting I/O failed: -6 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.603 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 [2024-12-11 10:00:21.805495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 
starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 [2024-12-11 10:00:21.806428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write 
completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 [2024-12-11 10:00:21.807400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 
00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.604 Write completed with error (sct=0, sc=8) 00:22:12.604 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 [2024-12-11 10:00:21.811746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport 
error -6 (No such device or address) on qpair id 2 00:22:12.605 NVMe io qpair process completion error 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 [2024-12-11 10:00:21.812872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O 
failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 [2024-12-11 10:00:21.813740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error 
(sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.605 starting I/O failed: -6 00:22:12.605 Write completed with error (sct=0, sc=8) 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 [2024-12-11 10:00:21.814742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O 
failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O 
failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 Write completed with error (sct=0, sc=8) 00:22:12.606 starting I/O failed: -6 00:22:12.606 [2024-12-11 10:00:21.818334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:12.606 NVMe io qpair process completion error 00:22:12.606 Initializing NVMe Controllers 00:22:12.606 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:22:12.606 Controller IO queue size 128, less than required. 00:22:12.606 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:12.606 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:22:12.606 Controller IO queue size 128, less than required. 00:22:12.606 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:12.606 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:22:12.606 Controller IO queue size 128, less than required. 00:22:12.606 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:12.606 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:22:12.606 Controller IO queue size 128, less than required. 00:22:12.606 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:12.606 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:22:12.606 Controller IO queue size 128, less than required. 00:22:12.606 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:12.606 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:22:12.606 Controller IO queue size 128, less than required. 00:22:12.606 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:22:12.606 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:22:12.606 Controller IO queue size 128, less than required. 00:22:12.606 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:12.606 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:22:12.606 Controller IO queue size 128, less than required. 00:22:12.606 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:12.606 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:22:12.606 Controller IO queue size 128, less than required. 00:22:12.606 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:12.606 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:12.606 Controller IO queue size 128, less than required. 00:22:12.606 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:12.607 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:22:12.607 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:22:12.607 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:22:12.607 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:22:12.607 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:22:12.607 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:22:12.607 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:22:12.607 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:22:12.607 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:22:12.607 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:12.607 Initialization complete. Launching workers. 
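The "Controller IO queue size 128, less than required" lines above are spdk_nvme_perf noting that the requested queue depth exceeds what each fabrics controller advertises, so surplus requests sit queued in the NVMe driver. For reference, a minimal sketch of this kind of perf invocation against one of the subsystems above; the flag values here are illustrative, not the exact ones shutdown.sh passes:

# Illustrative only: -r selects the target, -q the per-namespace queue depth
# (the 128 used by the test triggered the warning above), -o the I/O size in
# bytes, -w the workload pattern, -t the run time in seconds.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -q 64 -o 4096 -w write -t 5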
00:22:12.607 ========================================================
00:22:12.607                                                          Latency(us)
00:22:12.607 Device Information                                                       :     IOPS    MiB/s   Average       min        max
00:22:12.607 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3)  NSID 1 from core 0:  2231.48    95.88  57364.86    891.22  103244.35
00:22:12.607 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6)  NSID 1 from core 0:  2174.17    93.42  58886.06    902.56  112443.16
00:22:12.607 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8)  NSID 1 from core 0:  2210.47    94.98  57931.50    678.27  111827.28
00:22:12.607 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7)  NSID 1 from core 0:  2219.38    95.36  57712.97    856.07  110333.31
00:22:12.607 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2)  NSID 1 from core 0:  2248.04    96.60  56991.33    842.56  110122.65
00:22:12.607 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9)  NSID 1 from core 0:  2246.76    96.54  57048.28    645.38  108424.25
00:22:12.607 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:  2191.36    94.16  58516.46    714.16  109714.62
00:22:12.607 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5)  NSID 1 from core 0:  2224.26    95.57  57666.99    944.89  107451.50
00:22:12.607 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4)  NSID 1 from core 0:  2212.38    95.06  58022.60    881.31  107948.06
00:22:12.607 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1)  NSID 1 from core 0:  2176.93    93.54  58338.00    708.46  105253.88
00:22:12.607 ========================================================
00:22:12.607 Total                                                                    : 22135.23   951.12  57841.51    645.38  112443.16
00:22:12.607 [2024-12-11 10:00:21.821968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a9560 is same with the state(6) to be set
00:22:12.607 [2024-12-11 10:00:21.822013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a9ef0 is same with the state(6) to be set
00:22:12.607 [2024-12-11 10:00:21.822045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aa740 is same with the state(6) to be set
00:22:12.607 [2024-12-11 10:00:21.822072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aa410 is same with the state(6) to be set
00:22:12.607 [2024-12-11 10:00:21.822099] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9abae0 is same with the state(6) to be set
00:22:12.607 [2024-12-11 10:00:21.822127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aaa70 is same with the state(6) to be set
00:22:12.607 [2024-12-11 10:00:21.822155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ab900 is same with the state(6) to be set
00:22:12.607 [2024-12-11 10:00:21.822183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a9bc0 is same with the state(6) to be set
00:22:12.607 [2024-12-11 10:00:21.822210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a9890 is same with the state(6) to be set
00:22:12.607 [2024-12-11 10:00:21.822260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ab720 is same with the state(6) to be set
00:22:12.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
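A quick consistency check on the table: the queue depth was pinned at 128 per controller, and by Little's law (outstanding I/O = IOPS x latency) the expected average latency for cnode3 is 128 / 2231.48 IOPS = 0.05736 s, about 57,360 us, which matches the reported 57,364.86 us. In other words, the roughly 57 ms averages are queueing delay from the saturated 128-deep queues, not device service time.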
00:22:12.607 10:00:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:22:13.986 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 144110
00:22:13.986 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:22:13.986 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 144110
00:22:13.986 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait
00:22:13.986 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:13.986 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:22:13.986 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:13.986 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 144110
00:22:13.986 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:22:13.986 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:22:13.986 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:22:13.986 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
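The NOT/valid_exec_arg trace above is autotest_common.sh asserting that `wait 144110` fails, which is expected here since the spdk_nvme_perf process went away with the target. A stripped-down sketch of the helper's logic, simplified rather than the verbatim autotest_common.sh code:

# Simplified: NOT succeeds only if the wrapped command fails. The real helper
# additionally treats exit codes above 128 (signal deaths) specially, which is
# what the (( es > 128 )) check in the trace is doing.
NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))   # exit 0 (success) when the command failed
}
NOT wait 144110 && echo 'wait failed, as the test expects'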
00:22:13.986 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:22:13.986 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:22:13.986 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:22:13.986 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:22:13.986 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:22:13.986 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:13.986 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:22:13.986 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:13.986 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:22:13.986 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:13.986 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:13.986 rmmod nvme_tcp
00:22:13.986 rmmod nvme_fabrics
00:22:13.986 rmmod nvme_keyring
00:22:13.986 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:13.986 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:22:13.986 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:22:13.986 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 143798 ']'
00:22:13.986 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 143798
00:22:13.986 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 143798 ']'
00:22:13.986 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 143798
00:22:13.986 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (143798) - No such process
00:22:13.986 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 143798 is not found'
00:22:13.986 Process with pid 143798 is not found
00:22:13.986 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:22:13.986 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:22:13.986 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:22:13.986 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
00:22:13.986 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save
00:22:13.986 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:22:13.986 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore
00:22:13.986 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:22:13.986 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:22:13.986 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:13.986 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:13.986 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:15.892 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:22:15.892 real 0m10.417s
00:22:15.892 user 0m27.334s
00:22:15.892 sys 0m5.412s
00:22:15.892 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:15.892 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:22:15.892 ************************************
00:22:15.892 END TEST nvmf_shutdown_tc4
00:22:15.892 ************************************
00:22:15.892 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:22:15.892 real 0m41.602s
00:22:15.892 user 1m39.624s
00:22:15.892 sys 0m14.707s
00:22:15.892 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:15.892 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
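The module teardown traced above ("set +e", "for i in {1..20}", "modprobe -v -r nvme-tcp") retries the unload because nvme-tcp can stay referenced briefly while qpairs drain. A condensed sketch of the pattern, assuming a simple retry delay; the real nvmf/common.sh loop may differ in detail:

# Simplified sketch: tolerate transient "module in use" failures while
# unloading the NVMe-oF transport modules.
set +e
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break
    sleep 1   # assumed backoff; not shown in the trace
done
modprobe -v -r nvme-fabrics
set -e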
# set +x 00:22:15.892 ************************************ 00:22:15.892 END TEST nvmf_shutdown 00:22:15.892 ************************************ 00:22:15.892 10:00:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:22:15.892 10:00:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:15.892 10:00:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:15.892 10:00:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:15.892 ************************************ 00:22:15.892 START TEST nvmf_nsid 00:22:15.892 ************************************ 00:22:15.892 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:22:16.151 * Looking for test storage... 00:22:16.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:16.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.151 --rc genhtml_branch_coverage=1 00:22:16.151 --rc genhtml_function_coverage=1 00:22:16.151 --rc genhtml_legend=1 00:22:16.151 --rc geninfo_all_blocks=1 00:22:16.151 --rc geninfo_unexecuted_blocks=1 00:22:16.151 00:22:16.151 ' 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:16.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.151 --rc genhtml_branch_coverage=1 00:22:16.151 --rc genhtml_function_coverage=1 00:22:16.151 --rc genhtml_legend=1 00:22:16.151 --rc geninfo_all_blocks=1 00:22:16.151 --rc geninfo_unexecuted_blocks=1 00:22:16.151 00:22:16.151 ' 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:16.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.151 --rc genhtml_branch_coverage=1 00:22:16.151 --rc genhtml_function_coverage=1 00:22:16.151 --rc genhtml_legend=1 00:22:16.151 --rc geninfo_all_blocks=1 00:22:16.151 --rc geninfo_unexecuted_blocks=1 00:22:16.151 00:22:16.151 ' 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:16.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.151 --rc genhtml_branch_coverage=1 00:22:16.151 --rc genhtml_function_coverage=1 00:22:16.151 --rc genhtml_legend=1 00:22:16.151 --rc geninfo_all_blocks=1 00:22:16.151 --rc geninfo_unexecuted_blocks=1 00:22:16.151 00:22:16.151 ' 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:16.151 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:16.151 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:16.152 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:16.152 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:16.152 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:16.152 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:22:16.152 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:22:16.152 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:22:16.152 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:22:16.152 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:22:16.152 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:22:16.152 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:16.152 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:16.152 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:16.152 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:16.152 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:16.152 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:16.152 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:16.152 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:16.152 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:16.152 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:16.152 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:22:16.152 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:22.719 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:22.719 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
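The xtrace above shows gather_supported_nvmf_pci_devs matching PCI vendor:device pairs against per-family arrays (e810, x722, mlx) and selecting the two E810 ports at 0000:af:00.0/.1 (8086:159b). A minimal standalone sketch of the same lookup, assuming lspci from pciutils is available; in the real harness pci_bus_cache is prebuilt from sysfs, and the ID table below is copied only from the IDs visible in this trace, not from SPDK's full list:

#!/usr/bin/env bash
# Device IDs as they appear in the trace: E810 (0x1592, 0x159b), X722 (0x37d2).
e810_ids=("8086:1592" "8086:159b")
x722_ids=("8086:37d2")

find_nics() {
  local id
  for id in "$@"; do
    # lspci -Dn: $1 = full domain:bus:dev.func address, $3 = vendor:device
    lspci -Dn | awk -v id="$id" '$3 == id {print $1}'
  done
}

e810_pci=($(find_nics "${e810_ids[@]}"))
echo "Found ${#e810_pci[@]} E810 port(s): ${e810_pci[*]}"

On this node the sketch would print the same two addresses the harness reports, after which the net devices under each address (/sys/bus/pci/devices/$pci/net/*) are collected exactly as common.sh@411 does above.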
00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:22.719 Found net devices under 0000:af:00.0: cvl_0_0 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:22.719 Found net devices under 0000:af:00.1: cvl_0_1 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:22.719 10:00:32 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:22.719 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:22.720 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:22.720 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:22:22.720 00:22:22.720 --- 10.0.0.2 ping statistics --- 00:22:22.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:22.720 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:22:22.720 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:22.979 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:22.979 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:22:22.979 00:22:22.979 --- 10.0.0.1 ping statistics --- 00:22:22.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:22.979 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:22:22.979 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:22.979 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:22:22.979 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:22.979 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:22.979 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:22.979 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:22.979 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:22.979 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:22.979 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:22.979 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:22:22.979 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:22.979 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:22.979 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:22.979 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=148948 00:22:22.979 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:22:22.979 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 148948 00:22:22.979 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 148948 ']' 00:22:22.979 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:22.979 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:22.979 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:22.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:22.979 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:22.979 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:22.979 [2024-12-11 10:00:32.397299] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
00:22:22.979 [2024-12-11 10:00:32.397346] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:22.979 [2024-12-11 10:00:32.482081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.979 [2024-12-11 10:00:32.522654] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:22.979 [2024-12-11 10:00:32.522689] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:22.979 [2024-12-11 10:00:32.522697] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:22.979 [2024-12-11 10:00:32.522703] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:22.979 [2024-12-11 10:00:32.522708] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:22.979 [2024-12-11 10:00:32.523232] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:23.238 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:23.238 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:22:23.238 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:23.238 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:23.238 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:23.238 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:23.238 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:23.238 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=149091 00:22:23.238 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:22:23.238 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:22:23.238 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:22:23.238 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:22:23.238 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:23.238 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:23.238 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:23.239 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:23.239 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:23.239 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:23.239 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:23.239 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:23.239 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 
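The 10.0.0.1 that get_main_ns_ip just echoed falls out of the namespace plumbing nvmf_tcp_init performed earlier in the trace (common.sh@265-291): port cvl_0_0 is moved into namespace cvl_0_0_ns_spdk as the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), with an iptables ACCEPT punched for the NVMe/TCP port. Collected in one place, the commands from the trace are:

# Target port lives in its own namespace; initiator port stays in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Allow NVMe/TCP into the initiator-side interface; the comment tags the
# rule so teardown can strip it with iptables-save | grep -v SPDK_NVMF.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# Sanity-check both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The mirror-image teardown appears at the end of every test in this log: iptr restores the SPDK_NVMF-filtered ruleset, ip -4 addr flush clears both interfaces, and _remove_spdk_ns deletes the namespace.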
00:22:23.239 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:22:23.239 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:22:23.239 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=0f3359a0-cb9f-4e0d-a92a-4826cf26466d 00:22:23.239 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:22:23.239 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=a766f9dc-359e-49e4-b39c-aece95164276 00:22:23.239 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:22:23.239 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=991e76d1-e022-45da-8cae-0497b8fe4283 00:22:23.239 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:22:23.239 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.239 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:23.239 null0 00:22:23.239 null1 00:22:23.239 [2024-12-11 10:00:32.715224] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:22:23.239 [2024-12-11 10:00:32.715266] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149091 ] 00:22:23.239 null2 00:22:23.239 [2024-12-11 10:00:32.723972] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:23.239 [2024-12-11 10:00:32.748180] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:23.239 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.239 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 149091 /var/tmp/tgt2.sock 00:22:23.239 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 149091 ']' 00:22:23.239 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:22:23.239 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:23.239 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:22:23.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
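The three uuidgen calls above seed ns1uuid/ns2uuid/ns3uuid, and the rpc_cmd batch at nsid.sh@63 then builds null bdevs and subsystems around them; only the batch's output (null0, null1, null2 and the listener notices on 4420/4421) is echoed into the log, not the RPC calls themselves. Purely as an illustration, assuming the standard SPDK RPC names and a made-up 100 MiB bdev size, attaching one namespace with a fixed UUID to the second target would look roughly like:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# All calls go to the second target via its private RPC socket.
$rpc -s /var/tmp/tgt2.sock nvmf_create_transport -t tcp
$rpc -s /var/tmp/tgt2.sock bdev_null_create null0 100 4096   # name, size (MiB), block size
$rpc -s /var/tmp/tgt2.sock nvmf_create_subsystem nqn.2024-10.io.spdk:cnode2 -a
# An explicit UUID makes the namespace's NGUID predictable for the checks below.
$rpc -s /var/tmp/tgt2.sock nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null0 -u "$ns1uuid"
$rpc -s /var/tmp/tgt2.sock nvmf_subsystem_add_listener nqn.2024-10.io.spdk:cnode2 -t tcp -a 10.0.0.1 -s 4421

Pinning the UUID at namespace creation is what lets the test later compare uuid2nguid output against nvme id-ns .nguid byte for byte.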
00:22:23.239 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:23.239 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:23.239 [2024-12-11 10:00:32.794692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.498 [2024-12-11 10:00:32.834928] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:23.498 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:23.498 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:22:23.498 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:22:24.065 [2024-12-11 10:00:33.360988] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:24.065 [2024-12-11 10:00:33.377073] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:22:24.065 nvme0n1 nvme0n2 00:22:24.065 nvme1n1 00:22:24.065 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:22:24.065 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:22:24.065 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 00:22:25.000 10:00:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:22:25.000 10:00:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:22:25.000 10:00:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:22:25.000 10:00:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:22:25.000 10:00:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:22:25.000 10:00:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:22:25.000 10:00:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:22:25.000 10:00:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:25.000 10:00:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:25.000 10:00:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:25.000 10:00:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:22:25.000 10:00:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:22:25.000 10:00:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:22:25.934 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:25.934 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:26.192 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:26.192 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:26.192 10:00:35 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:26.192 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 0f3359a0-cb9f-4e0d-a92a-4826cf26466d 00:22:26.192 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:26.192 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:22:26.192 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:22:26.192 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:22:26.192 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:26.192 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=0f3359a0cb9f4e0da92a4826cf26466d 00:22:26.192 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 0F3359A0CB9F4E0DA92A4826CF26466D 00:22:26.192 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 0F3359A0CB9F4E0DA92A4826CF26466D == \0\F\3\3\5\9\A\0\C\B\9\F\4\E\0\D\A\9\2\A\4\8\2\6\C\F\2\6\4\6\6\D ]] 00:22:26.192 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:22:26.192 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:26.192 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:26.192 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:22:26.192 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:26.192 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:22:26.192 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:26.192 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid a766f9dc-359e-49e4-b39c-aece95164276 00:22:26.192 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:26.192 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:22:26.192 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:22:26.192 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:22:26.192 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:26.192 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=a766f9dc359e49e4b39caece95164276 00:22:26.192 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo A766F9DC359E49E4B39CAECE95164276 00:22:26.192 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ A766F9DC359E49E4B39CAECE95164276 == \A\7\6\6\F\9\D\C\3\5\9\E\4\9\E\4\B\3\9\C\A\E\C\E\9\5\1\6\4\2\7\6 ]] 00:22:26.192 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:22:26.192 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:26.192 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:26.192 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:22:26.192 10:00:35 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:26.192 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:22:26.192 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:26.192 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 991e76d1-e022-45da-8cae-0497b8fe4283 00:22:26.192 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:26.192 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:22:26.192 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:22:26.192 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:22:26.192 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:26.192 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=991e76d1e02245da8cae0497b8fe4283 00:22:26.192 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 991E76D1E02245DA8CAE0497B8FE4283 00:22:26.192 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 991E76D1E02245DA8CAE0497B8FE4283 == \9\9\1\E\7\6\D\1\E\0\2\2\4\5\D\A\8\C\A\E\0\4\9\7\B\8\F\E\4\2\8\3 ]] 00:22:26.192 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:22:26.451 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:22:26.451 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:22:26.451 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 149091 00:22:26.451 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 149091 ']' 00:22:26.451 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 149091 00:22:26.451 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:22:26.451 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:26.451 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 149091 00:22:26.451 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:26.451 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:26.451 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 149091' 00:22:26.451 killing process with pid 149091 00:22:26.451 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 149091 00:22:26.451 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 149091 00:22:27.017 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:22:27.017 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:27.017 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:22:27.017 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:27.017 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set 
+e 00:22:27.017 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:27.017 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:27.017 rmmod nvme_tcp 00:22:27.017 rmmod nvme_fabrics 00:22:27.017 rmmod nvme_keyring 00:22:27.017 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:27.017 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:22:27.017 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:22:27.017 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 148948 ']' 00:22:27.017 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 148948 00:22:27.017 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 148948 ']' 00:22:27.017 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 148948 00:22:27.017 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:22:27.017 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:27.017 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 148948 00:22:27.017 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:27.017 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:27.017 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 148948' 00:22:27.017 killing process with pid 148948 00:22:27.017 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 148948 00:22:27.017 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 148948 00:22:27.017 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:27.017 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:27.017 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:27.017 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:22:27.017 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:22:27.017 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:27.017 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:22:27.017 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:27.017 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:27.017 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:27.017 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:27.017 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:29.551 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:29.551 00:22:29.552 real 0m13.223s 00:22:29.552 user 0m9.906s 00:22:29.552 
sys 0m6.167s 00:22:29.552 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:29.552 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:29.552 ************************************ 00:22:29.552 END TEST nvmf_nsid 00:22:29.552 ************************************ 00:22:29.552 10:00:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:22:29.552 00:22:29.552 real 12m25.276s 00:22:29.552 user 26m5.734s 00:22:29.552 sys 3m54.030s 00:22:29.552 10:00:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:29.552 10:00:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:29.552 ************************************ 00:22:29.552 END TEST nvmf_target_extra 00:22:29.552 ************************************ 00:22:29.552 10:00:38 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:29.552 10:00:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:29.552 10:00:38 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:29.552 10:00:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:29.552 ************************************ 00:22:29.552 START TEST nvmf_host 00:22:29.552 ************************************ 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:29.552 * Looking for test storage... 00:22:29.552 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:29.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:29.552 --rc genhtml_branch_coverage=1 00:22:29.552 --rc genhtml_function_coverage=1 00:22:29.552 --rc genhtml_legend=1 00:22:29.552 --rc geninfo_all_blocks=1 00:22:29.552 --rc geninfo_unexecuted_blocks=1 00:22:29.552 00:22:29.552 ' 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:29.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:29.552 --rc genhtml_branch_coverage=1 00:22:29.552 --rc genhtml_function_coverage=1 00:22:29.552 --rc genhtml_legend=1 00:22:29.552 --rc geninfo_all_blocks=1 00:22:29.552 --rc geninfo_unexecuted_blocks=1 00:22:29.552 00:22:29.552 ' 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:29.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:29.552 --rc genhtml_branch_coverage=1 00:22:29.552 --rc genhtml_function_coverage=1 00:22:29.552 --rc genhtml_legend=1 00:22:29.552 --rc geninfo_all_blocks=1 00:22:29.552 --rc geninfo_unexecuted_blocks=1 00:22:29.552 00:22:29.552 ' 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:29.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:29.552 --rc genhtml_branch_coverage=1 00:22:29.552 --rc genhtml_function_coverage=1 00:22:29.552 --rc genhtml_legend=1 00:22:29.552 --rc geninfo_all_blocks=1 00:22:29.552 --rc geninfo_unexecuted_blocks=1 00:22:29.552 00:22:29.552 ' 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:29.552 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:29.552 10:00:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:29.553 10:00:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:29.553 10:00:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:29.553 10:00:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:22:29.553 10:00:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:22:29.553 10:00:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:22:29.553 10:00:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:29.553 10:00:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:29.553 10:00:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:29.553 10:00:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.553 ************************************ 00:22:29.553 START TEST nvmf_multicontroller 00:22:29.553 ************************************ 00:22:29.553 10:00:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:29.553 * Looking for test storage... 
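
Annotation: the NVME_HOSTNQN/NVME_HOSTID lines captured in the common.sh dump above come from nvme-cli: the harness generates a fresh host NQN and derives the host ID from its trailing UUID. A condensed sketch of that derivation (assumes nvme-cli is installed; the exact extraction inside common.sh may differ):

    NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:801347e8-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # keep only the text after the last ':' -> the UUID
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    # Later tests pass the array to nvme-cli roughly like:
    #   nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn
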
00:22:29.553 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:29.553 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:29.553 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:22:29.553 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:29.812 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:29.812 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:29.812 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:29.812 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:29.812 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:22:29.812 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:22:29.812 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:22:29.812 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:22:29.812 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:22:29.812 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:22:29.812 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:22:29.812 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:29.812 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:22:29.812 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:22:29.812 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:29.812 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:29.812 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:22:29.812 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:29.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:29.813 --rc genhtml_branch_coverage=1 00:22:29.813 --rc genhtml_function_coverage=1 00:22:29.813 --rc genhtml_legend=1 00:22:29.813 --rc geninfo_all_blocks=1 00:22:29.813 --rc geninfo_unexecuted_blocks=1 00:22:29.813 00:22:29.813 ' 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:29.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:29.813 --rc genhtml_branch_coverage=1 00:22:29.813 --rc genhtml_function_coverage=1 00:22:29.813 --rc genhtml_legend=1 00:22:29.813 --rc geninfo_all_blocks=1 00:22:29.813 --rc geninfo_unexecuted_blocks=1 00:22:29.813 00:22:29.813 ' 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:29.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:29.813 --rc genhtml_branch_coverage=1 00:22:29.813 --rc genhtml_function_coverage=1 00:22:29.813 --rc genhtml_legend=1 00:22:29.813 --rc geninfo_all_blocks=1 00:22:29.813 --rc geninfo_unexecuted_blocks=1 00:22:29.813 00:22:29.813 ' 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:29.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:29.813 --rc genhtml_branch_coverage=1 00:22:29.813 --rc genhtml_function_coverage=1 00:22:29.813 --rc genhtml_legend=1 00:22:29.813 --rc geninfo_all_blocks=1 00:22:29.813 --rc geninfo_unexecuted_blocks=1 00:22:29.813 00:22:29.813 ' 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:22:29.813 10:00:39 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:29.813 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:29.813 10:00:39 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:22:29.813 10:00:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:36.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:36.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:22:36.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:36.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:36.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:36.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:36.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:36.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:22:36.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:36.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:22:36.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:22:36.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:22:36.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:22:36.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:22:36.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:22:36.487 
10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:36.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:36.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:36.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:36.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:36.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:36.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:36.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:36.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:36.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:36.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:36.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:36.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:36.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:36.488 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:36.488 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:36.488 10:00:45 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:36.488 Found net devices under 0000:af:00.0: cvl_0_0 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:36.488 Found net devices under 0000:af:00.1: cvl_0_1 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
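
Annotation: the two "Found net devices under 0000:af:00.x" lines come from gather_supported_nvmf_pci_devs walking sysfs: for each matching E810 PCI function it globs the net/ subdirectory to find the kernel interface bound to that port. The same walk can be reproduced by hand, using a device address from the output above:

    # Map an E810 PCI function to its kernel net interface via sysfs.
    pci=0000:af:00.0
    for path in /sys/bus/pci/devices/$pci/net/*; do
        [[ -e $path ]] && echo "Found net device under $pci: ${path##*/}"
    done
    # Prints: Found net device under 0000:af:00.0: cvl_0_0
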
00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:36.488 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:36.488 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.296 ms 00:22:36.488 00:22:36.488 --- 10.0.0.2 ping statistics --- 00:22:36.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:36.488 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:36.488 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:36.488 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:22:36.488 00:22:36.488 --- 10.0.0.1 ping statistics --- 00:22:36.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:36.488 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:36.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:36.488 10:00:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:22:36.488 10:00:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:36.488 10:00:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:36.488 10:00:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:36.488 10:00:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=153682 00:22:36.488 10:00:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:36.488 10:00:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 153682 00:22:36.488 10:00:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 153682 ']' 00:22:36.488 10:00:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:36.488 10:00:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:36.488 10:00:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:36.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:36.488 10:00:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:36.488 10:00:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:36.747 [2024-12-11 10:00:46.081751] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
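
Annotation: before the target starts, nvmf_tcp_init has split the two E810 ports into a point-to-point test bed: cvl_0_0 moves into a fresh network namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and the two pings above prove both directions work. Condensed from the commands traced above (the harness also flushes old addresses first and tags its iptables rule with an SPDK_NVMF comment, both omitted here):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address, inside namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP reach the initiator side
    ping -c 1 10.0.0.2                                                   # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target ns -> initiator

The nvmf_tgt process that starts next is launched through `ip netns exec cvl_0_0_ns_spdk` (NVMF_APP is prefixed with NVMF_TARGET_NS_CMD above), which is why its listeners bind 10.0.0.2.
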
00:22:36.747 [2024-12-11 10:00:46.081799] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:36.747 [2024-12-11 10:00:46.167352] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:36.747 [2024-12-11 10:00:46.209650] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:36.747 [2024-12-11 10:00:46.209684] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:36.747 [2024-12-11 10:00:46.209691] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:36.747 [2024-12-11 10:00:46.209697] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:36.747 [2024-12-11 10:00:46.209702] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:36.747 [2024-12-11 10:00:46.211109] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:36.747 [2024-12-11 10:00:46.211214] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:36.747 [2024-12-11 10:00:46.211216] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:22:37.684 10:00:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:37.684 10:00:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:22:37.684 10:00:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:37.684 10:00:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:37.684 10:00:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:37.684 10:00:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:37.684 10:00:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:37.684 10:00:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.684 10:00:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:37.684 [2024-12-11 10:00:46.952810] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:37.684 10:00:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.684 10:00:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:37.684 10:00:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.684 10:00:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:37.684 Malloc0 00:22:37.684 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.684 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:37.684 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.684 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:22:37.684 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.684 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:37.684 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.684 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:37.684 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.684 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:37.684 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.684 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:37.684 [2024-12-11 10:00:47.026801] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:37.684 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.684 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:37.684 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.684 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:37.684 [2024-12-11 10:00:47.038734] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:37.684 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.684 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:37.684 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.684 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:37.685 Malloc1 00:22:37.685 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.685 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:22:37.685 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.685 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:37.685 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.685 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:22:37.685 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.685 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:37.685 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.685 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:37.685 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.685 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:37.685 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.685 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:22:37.685 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.685 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:37.685 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.685 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=153923 00:22:37.685 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:22:37.685 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:37.685 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 153923 /var/tmp/bdevperf.sock 00:22:37.685 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 153923 ']' 00:22:37.685 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:37.685 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:37.685 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:37.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
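
Annotation: the rpc_cmd calls above provision the running target over its /var/tmp/spdk.sock JSON-RPC socket: a TCP transport, one 64 MiB malloc bdev with 512-byte blocks per subsystem, and two subsystems that each listen on ports 4420 and 4421 so one host can reach the same namespace over two paths. Assuming rpc_cmd forwards to scripts/rpc.py as usual, the equivalent standalone sequence for cnode1 would be (socket path, sizes, and flags as traced above):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
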
00:22:37.685 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:37.685 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:37.944 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:37.944 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:22:37.944 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:37.944 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.944 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:38.203 NVMe0n1 00:22:38.203 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.203 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:38.203 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:22:38.203 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.203 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:38.203 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.203 1 00:22:38.203 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:38.203 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:38.203 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:38.203 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:38.203 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:38.203 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:38.203 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:38.203 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:38.203 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.203 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:38.203 request: 00:22:38.203 { 00:22:38.203 "name": "NVMe0", 00:22:38.203 "trtype": "tcp", 00:22:38.203 "traddr": "10.0.0.2", 00:22:38.203 "adrfam": "ipv4", 00:22:38.203 "trsvcid": "4420", 00:22:38.203 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:22:38.203 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:22:38.203 "hostaddr": "10.0.0.1", 00:22:38.203 "prchk_reftag": false, 00:22:38.203 "prchk_guard": false, 00:22:38.203 "hdgst": false, 00:22:38.203 "ddgst": false, 00:22:38.203 "allow_unrecognized_csi": false, 00:22:38.203 "method": "bdev_nvme_attach_controller", 00:22:38.203 "req_id": 1 00:22:38.203 } 00:22:38.203 Got JSON-RPC error response 00:22:38.203 response: 00:22:38.203 { 00:22:38.203 "code": -114, 00:22:38.203 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:38.203 } 00:22:38.203 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:38.203 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:38.203 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:38.203 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:38.203 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:38.203 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:38.203 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:38.203 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:38.203 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:38.203 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:38.203 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:38.203 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:38.203 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:38.203 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.203 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:38.203 request: 00:22:38.203 { 00:22:38.203 "name": "NVMe0", 00:22:38.203 "trtype": "tcp", 00:22:38.203 "traddr": "10.0.0.2", 00:22:38.203 "adrfam": "ipv4", 00:22:38.203 "trsvcid": "4420", 00:22:38.203 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:38.203 "hostaddr": "10.0.0.1", 00:22:38.203 "prchk_reftag": false, 00:22:38.203 "prchk_guard": false, 00:22:38.203 "hdgst": false, 00:22:38.203 "ddgst": false, 00:22:38.203 "allow_unrecognized_csi": false, 00:22:38.203 "method": "bdev_nvme_attach_controller", 00:22:38.203 "req_id": 1 00:22:38.203 } 00:22:38.204 Got JSON-RPC error response 00:22:38.204 response: 00:22:38.204 { 00:22:38.204 "code": -114, 00:22:38.204 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:38.204 } 00:22:38.204 10:00:47 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:38.204 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:38.204 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:38.204 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:38.204 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:38.204 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:38.204 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:38.204 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:38.204 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:38.204 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:38.204 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:38.204 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:38.204 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:38.204 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.204 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:38.204 request: 00:22:38.204 { 00:22:38.204 "name": "NVMe0", 00:22:38.204 "trtype": "tcp", 00:22:38.204 "traddr": "10.0.0.2", 00:22:38.204 "adrfam": "ipv4", 00:22:38.204 "trsvcid": "4420", 00:22:38.204 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:38.204 "hostaddr": "10.0.0.1", 00:22:38.204 "prchk_reftag": false, 00:22:38.204 "prchk_guard": false, 00:22:38.204 "hdgst": false, 00:22:38.204 "ddgst": false, 00:22:38.204 "multipath": "disable", 00:22:38.204 "allow_unrecognized_csi": false, 00:22:38.204 "method": "bdev_nvme_attach_controller", 00:22:38.204 "req_id": 1 00:22:38.204 } 00:22:38.204 Got JSON-RPC error response 00:22:38.204 response: 00:22:38.204 { 00:22:38.204 "code": -114, 00:22:38.204 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:22:38.204 } 00:22:38.204 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:38.204 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:38.204 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:38.204 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:38.204 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:38.204 10:00:47 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:38.204 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:38.204 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:38.204 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:38.204 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:38.204 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:38.204 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:38.204 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:38.204 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.204 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:38.204 request: 00:22:38.204 { 00:22:38.204 "name": "NVMe0", 00:22:38.204 "trtype": "tcp", 00:22:38.204 "traddr": "10.0.0.2", 00:22:38.204 "adrfam": "ipv4", 00:22:38.204 "trsvcid": "4420", 00:22:38.204 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:38.204 "hostaddr": "10.0.0.1", 00:22:38.204 "prchk_reftag": false, 00:22:38.204 "prchk_guard": false, 00:22:38.204 "hdgst": false, 00:22:38.204 "ddgst": false, 00:22:38.204 "multipath": "failover", 00:22:38.204 "allow_unrecognized_csi": false, 00:22:38.204 "method": "bdev_nvme_attach_controller", 00:22:38.204 "req_id": 1 00:22:38.204 } 00:22:38.204 Got JSON-RPC error response 00:22:38.204 response: 00:22:38.204 { 00:22:38.204 "code": -114, 00:22:38.204 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:38.204 } 00:22:38.204 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:38.204 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:38.204 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:38.204 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:38.204 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:38.204 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:38.204 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.204 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:38.463 NVMe0n1 00:22:38.463 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
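
Annotation: each failing bdev_nvme_attach_controller call above is deliberate. Re-attaching under the existing name NVMe0 with a mismatched host NQN, a different subsystem, multipath disabled, or failover mode on an identical path is rejected with JSON-RPC error -114, while the final attach that only adds port 4421 on the same subsystem succeeds and becomes NVMe0's second path. The harness asserts the failures with its NOT wrapper, which inverts an exit status; a minimal sketch of that idea (hypothetical name `expect_fail`; the real helper is `NOT` in autotest_common.sh):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    expect_fail() { ! "$@"; }   # succeed only when the wrapped command fails
    expect_fail $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1   # rejected with -114: NVMe0 already bound to cnode1
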
00:22:38.463 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:38.463 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.463 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:38.463 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.463 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:38.463 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.463 10:00:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:38.463 00:22:38.463 10:00:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.463 10:00:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:38.463 10:00:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:22:38.463 10:00:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.463 10:00:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:38.463 10:00:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.463 10:00:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:22:38.463 10:00:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:39.841 { 00:22:39.841 "results": [ 00:22:39.841 { 00:22:39.841 "job": "NVMe0n1", 00:22:39.841 "core_mask": "0x1", 00:22:39.841 "workload": "write", 00:22:39.841 "status": "finished", 00:22:39.841 "queue_depth": 128, 00:22:39.841 "io_size": 4096, 00:22:39.841 "runtime": 1.004344, 00:22:39.841 "iops": 24996.415570760615, 00:22:39.841 "mibps": 97.64224832328365, 00:22:39.841 "io_failed": 0, 00:22:39.841 "io_timeout": 0, 00:22:39.841 "avg_latency_us": 5113.873694521106, 00:22:39.841 "min_latency_us": 1458.9561904761904, 00:22:39.841 "max_latency_us": 8862.96380952381 00:22:39.841 } 00:22:39.841 ], 00:22:39.841 "core_count": 1 00:22:39.841 } 00:22:39.841 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:22:39.841 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.841 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:39.841 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.841 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:22:39.841 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 153923 00:22:39.841 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 153923 ']' 00:22:39.841 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 153923 00:22:39.841 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:22:39.841 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:39.841 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 153923 00:22:39.841 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:39.841 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:39.841 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 153923' 00:22:39.841 killing process with pid 153923 00:22:39.841 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 153923 00:22:39.841 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 153923 00:22:39.841 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:39.841 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.841 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:39.841 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.841 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:39.841 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.841 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:39.841 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.841 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:22:39.841 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:39.841 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:22:39.841 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:22:39.841 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:22:40.100 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:22:40.100 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:40.100 [2024-12-11 10:00:47.144640] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
00:22:40.100 [2024-12-11 10:00:47.144688] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid153923 ]
00:22:40.100 [2024-12-11 10:00:47.223731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:40.100 [2024-12-11 10:00:47.265109] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:22:40.100 [2024-12-11 10:00:48.004868] bdev.c:4957:bdev_name_add: *ERROR*: Bdev name ca0c1f6d-cca6-4a6e-badb-e12c220b91df already exists
00:22:40.100 [2024-12-11 10:00:48.004897] bdev.c:8177:bdev_register: *ERROR*: Unable to add uuid:ca0c1f6d-cca6-4a6e-badb-e12c220b91df alias for bdev NVMe1n1
00:22:40.100 [2024-12-11 10:00:48.004905] bdev_nvme.c:4666:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed
00:22:40.100 Running I/O for 1 seconds...
00:22:40.100 24977.00 IOPS, 97.57 MiB/s
00:22:40.100 Latency(us)
00:22:40.100 [2024-12-11T09:00:49.675Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:40.100 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:22:40.100 NVMe0n1 : 1.00 24996.42 97.64 0.00 0.00 5113.87 1458.96 8862.96
00:22:40.100 [2024-12-11T09:00:49.675Z] ===================================================================================================================
00:22:40.100 [2024-12-11T09:00:49.675Z] Total : 24996.42 97.64 0.00 0.00 5113.87 1458.96 8862.96
00:22:40.100 Received shutdown signal, test time was about 1.000000 seconds
00:22:40.100
00:22:40.100 Latency(us)
00:22:40.100 [2024-12-11T09:00:49.675Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:40.100 [2024-12-11T09:00:49.675Z] ===================================================================================================================
00:22:40.100 [2024-12-11T09:00:49.675Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:40.100 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:22:40.100 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:22:40.100 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file
00:22:40.100 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini
00:22:40.100 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:40.100 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync
00:22:40.100 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:40.100 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e
00:22:40.100 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:40.100 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:40.100 rmmod nvme_tcp
00:22:40.100 rmmod nvme_fabrics
00:22:40.100 rmmod nvme_keyring
00:22:40.100 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:40.100 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e
00:22:40.100 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0
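Two quick sanity checks on the bdevperf summary above, worth doing whenever reading these tables. Throughput: 24996.42 IOPS x 4096 B ≈ 97.64 MiB/s, matching the MiB/s column. Latency via Little's law: with queue depth 128, average latency ≈ 128 / 24996.42 s ≈ 5120.7 us, consistent with the reported 5113.87 us average (the small gap is submission and ramp time outside the measured second). The three *ERROR* lines are also expected in this test: the NVMe1 controller attached at @87 reaches the same namespace as NVMe0, so the UUID-named bdev ca0c1f6d-... already exists and spdk_bdev_register() fails; the test only checks that bdev_nvme_get_controllers reports two controllers, which it does.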
10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 153682 ']' 00:22:40.100 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 153682 00:22:40.100 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 153682 ']' 00:22:40.100 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 153682 00:22:40.100 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:22:40.100 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:40.100 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 153682 00:22:40.100 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:40.100 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:40.100 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 153682' 00:22:40.100 killing process with pid 153682 00:22:40.100 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 153682 00:22:40.100 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 153682 00:22:40.360 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:40.360 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:40.360 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:40.360 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:22:40.360 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:22:40.360 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:40.360 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:22:40.360 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:40.360 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:40.360 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:40.360 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:40.360 10:00:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.277 10:00:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:42.277 00:22:42.277 real 0m12.861s 00:22:42.277 user 0m15.041s 00:22:42.277 sys 0m5.893s 00:22:42.277 10:00:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:42.277 10:00:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:42.277 ************************************ 00:22:42.277 END TEST nvmf_multicontroller 00:22:42.277 ************************************ 00:22:42.537 10:00:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:22:42.537 10:00:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:42.537 10:00:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:42.537 10:00:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.537 ************************************ 00:22:42.537 START TEST nvmf_aer 00:22:42.537 ************************************ 00:22:42.537 10:00:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:42.537 * Looking for test storage... 00:22:42.537 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:42.537 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:42.537 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:22:42.537 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:42.537 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:42.537 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:42.537 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:42.537 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:42.537 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:22:42.537 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:22:42.537 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:22:42.537 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:22:42.537 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:22:42.537 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:22:42.537 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:22:42.537 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:42.537 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:22:42.537 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:22:42.537 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:42.537 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:42.537 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:22:42.537 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:22:42.537 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:42.537 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:22:42.537 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:22:42.537 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:22:42.537 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:22:42.537 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:42.537 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:22:42.537 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:22:42.537 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:42.537 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:42.537 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:22:42.537 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:42.537 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:42.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:42.537 --rc genhtml_branch_coverage=1 00:22:42.537 --rc genhtml_function_coverage=1 00:22:42.537 --rc genhtml_legend=1 00:22:42.537 --rc geninfo_all_blocks=1 00:22:42.537 --rc geninfo_unexecuted_blocks=1 00:22:42.537 00:22:42.537 ' 00:22:42.537 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:42.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:42.537 --rc genhtml_branch_coverage=1 00:22:42.537 --rc genhtml_function_coverage=1 00:22:42.537 --rc genhtml_legend=1 00:22:42.537 --rc geninfo_all_blocks=1 00:22:42.537 --rc geninfo_unexecuted_blocks=1 00:22:42.537 00:22:42.537 ' 00:22:42.537 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:42.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:42.537 --rc genhtml_branch_coverage=1 00:22:42.537 --rc genhtml_function_coverage=1 00:22:42.537 --rc genhtml_legend=1 00:22:42.537 --rc geninfo_all_blocks=1 00:22:42.537 --rc geninfo_unexecuted_blocks=1 00:22:42.537 00:22:42.537 ' 00:22:42.537 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:42.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:42.537 --rc genhtml_branch_coverage=1 00:22:42.537 --rc genhtml_function_coverage=1 00:22:42.537 --rc genhtml_legend=1 00:22:42.537 --rc geninfo_all_blocks=1 00:22:42.537 --rc geninfo_unexecuted_blocks=1 00:22:42.537 00:22:42.537 ' 00:22:42.537 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:42.537 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:22:42.537 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:42.537 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:42.537 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:22:42.537 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:42.537 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:42.537 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:42.537 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:42.537 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:42.537 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:42.537 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:42.797 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:42.797 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:22:42.797 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:42.797 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:42.797 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:42.797 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:42.797 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:42.797 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:22:42.797 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:42.797 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:42.797 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:42.797 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.797 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.797 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.797 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:22:42.797 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.797 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:22:42.797 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:42.797 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:42.797 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:42.797 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:42.797 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:42.797 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:42.797 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:42.797 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:42.797 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:42.797 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:42.797 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:22:42.797 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:42.797 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:42.797 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:42.797 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:42.797 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:42.797 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.797 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:42.797 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.797 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:42.797 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:22:42.797 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:22:42.797 10:00:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:49.366 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:49.366 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:22:49.366 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:49.366 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:49.366 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:49.366 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:49.366 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:49.366 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:22:49.366 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:49.366 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:22:49.366 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:22:49.366 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:22:49.366 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:22:49.366 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:22:49.366 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:22:49.366 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:49.366 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:49.366 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:49.367 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:49.367 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:49.367 Found net devices under 0000:af:00.0: cvl_0_0 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:49.367 10:00:58 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:49.367 Found net devices under 0000:af:00.1: cvl_0_1 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:49.367 
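The block above is nvmf_tcp_init building the standard SPDK two-port TCP topology: the first e810 port (cvl_0_0) moves into a fresh network namespace as the target side at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and an iptables rule admits NVMe/TCP traffic on port 4420; the pings that follow verify the link in both directions. Condensed to its essentials (a sketch of the commands above; the cvl_0_* names are specific to this rig's NIC):

ip netns add cvl_0_0_ns_spdk                                  # target side gets its own netns
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP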
10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:22:49.367 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:49.367 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms
00:22:49.367
00:22:49.367 --- 10.0.0.2 ping statistics ---
00:22:49.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:49.367 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms
00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:49.367 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:49.367 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms
00:22:49.367
00:22:49.367 --- 10.0.0.1 ping statistics ---
00:22:49.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:49.367 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms
00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0
00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF
00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable
00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=158169
00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 158169
00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 158169 ']'
00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:49.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:49.367 10:00:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:22:49.367 [2024-12-11 10:00:58.857513] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization...
00:22:49.367 [2024-12-11 10:00:58.857561] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:49.626 [2024-12-11 10:00:58.942389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:49.627 [2024-12-11 10:00:58.983539] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:49.627 [2024-12-11 10:00:58.983575] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:49.627 [2024-12-11 10:00:58.983582] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:49.627 [2024-12-11 10:00:58.983588] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:49.627 [2024-12-11 10:00:58.983593] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:49.627 [2024-12-11 10:00:58.985104] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:49.627 [2024-12-11 10:00:58.985229] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:49.627 [2024-12-11 10:00:58.985320] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:22:49.627 [2024-12-11 10:00:58.985322] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:49.627 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:49.627 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:22:49.627 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:49.627 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:49.627 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:49.627 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:49.627 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:49.627 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.627 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:49.627 [2024-12-11 10:00:59.122573] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:49.627 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.627 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:22:49.627 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.627 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:49.627 Malloc0 00:22:49.627 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.627 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:22:49.627 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.627 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:49.627 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:22:49.627 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:49.627 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.627 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:49.627 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.627 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:49.627 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.627 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:49.627 [2024-12-11 10:00:59.186999] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:49.627 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.627 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:22:49.627 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.627 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:49.627 [ 00:22:49.627 { 00:22:49.627 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:49.627 "subtype": "Discovery", 00:22:49.627 "listen_addresses": [], 00:22:49.627 "allow_any_host": true, 00:22:49.627 "hosts": [] 00:22:49.627 }, 00:22:49.627 { 00:22:49.627 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:49.627 "subtype": "NVMe", 00:22:49.627 "listen_addresses": [ 00:22:49.627 { 00:22:49.627 "trtype": "TCP", 00:22:49.886 "adrfam": "IPv4", 00:22:49.886 "traddr": "10.0.0.2", 00:22:49.886 "trsvcid": "4420" 00:22:49.886 } 00:22:49.886 ], 00:22:49.886 "allow_any_host": true, 00:22:49.886 "hosts": [], 00:22:49.886 "serial_number": "SPDK00000000000001", 00:22:49.886 "model_number": "SPDK bdev Controller", 00:22:49.886 "max_namespaces": 2, 00:22:49.886 "min_cntlid": 1, 00:22:49.886 "max_cntlid": 65519, 00:22:49.886 "namespaces": [ 00:22:49.886 { 00:22:49.886 "nsid": 1, 00:22:49.886 "bdev_name": "Malloc0", 00:22:49.886 "name": "Malloc0", 00:22:49.886 "nguid": "406338EBE0FB460780160494ABC2A121", 00:22:49.886 "uuid": "406338eb-e0fb-4607-8016-0494abc2a121" 00:22:49.886 } 00:22:49.886 ] 00:22:49.886 } 00:22:49.886 ] 00:22:49.886 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.886 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:22:49.886 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:22:49.886 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=158201 00:22:49.886 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:22:49.886 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:22:49.886 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:22:49.886 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:22:49.886 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:22:49.886 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:22:49.886 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:22:49.886 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:49.886 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:22:49.886 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:22:49.886 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:22:49.886 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:49.886 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:49.886 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:22:49.886 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:22:49.886 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.886 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:49.886 Malloc1 00:22:49.886 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.886 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:22:49.886 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.886 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:50.146 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.146 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:22:50.146 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.146 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:50.146 Asynchronous Event Request test 00:22:50.146 Attaching to 10.0.0.2 00:22:50.146 Attached to 10.0.0.2 00:22:50.146 Registering asynchronous event callbacks... 00:22:50.146 Starting namespace attribute notice tests for all controllers... 00:22:50.146 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:22:50.146 aer_cb - Changed Namespace 00:22:50.146 Cleaning up... 
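The sequence above is the heart of aer.sh: the aer binary connects to cnode1 and parks Asynchronous Event Requests, then hot-adding Malloc1 as nsid 2 makes the target complete one with a Namespace Attribute Changed notice (aen_event_type 0x02, log page 4, the Changed Namespace List), which is exactly what the aer_cb line reports. Stripped of the test plumbing it is two RPCs against the running target (a sketch, assuming the target and cnode1 from above are still up; the subsystem listing below then shows both nsid 1 and nsid 2):

# Create a second 64 MB malloc bdev and expose it as namespace 2 of cnode1;
# any host with a pending AER receives the Changed Namespace List notice.
./scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
./scripts/rpc.py nvmf_get_subsystems   # now lists Malloc0 (nsid 1) and Malloc1 (nsid 2)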
00:22:50.146 [ 00:22:50.146 { 00:22:50.146 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:50.146 "subtype": "Discovery", 00:22:50.146 "listen_addresses": [], 00:22:50.146 "allow_any_host": true, 00:22:50.146 "hosts": [] 00:22:50.146 }, 00:22:50.146 { 00:22:50.146 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:50.146 "subtype": "NVMe", 00:22:50.146 "listen_addresses": [ 00:22:50.146 { 00:22:50.146 "trtype": "TCP", 00:22:50.146 "adrfam": "IPv4", 00:22:50.146 "traddr": "10.0.0.2", 00:22:50.146 "trsvcid": "4420" 00:22:50.146 } 00:22:50.146 ], 00:22:50.146 "allow_any_host": true, 00:22:50.146 "hosts": [], 00:22:50.146 "serial_number": "SPDK00000000000001", 00:22:50.146 "model_number": "SPDK bdev Controller", 00:22:50.146 "max_namespaces": 2, 00:22:50.146 "min_cntlid": 1, 00:22:50.146 "max_cntlid": 65519, 00:22:50.146 "namespaces": [ 00:22:50.146 { 00:22:50.146 "nsid": 1, 00:22:50.146 "bdev_name": "Malloc0", 00:22:50.146 "name": "Malloc0", 00:22:50.146 "nguid": "406338EBE0FB460780160494ABC2A121", 00:22:50.146 "uuid": "406338eb-e0fb-4607-8016-0494abc2a121" 00:22:50.146 }, 00:22:50.146 { 00:22:50.146 "nsid": 2, 00:22:50.146 "bdev_name": "Malloc1", 00:22:50.146 "name": "Malloc1", 00:22:50.146 "nguid": "CB75833A781347BCB4586DDD1136C877", 00:22:50.146 "uuid": "cb75833a-7813-47bc-b458-6ddd1136c877" 00:22:50.146 } 00:22:50.146 ] 00:22:50.146 } 00:22:50.146 ] 00:22:50.146 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.146 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 158201 00:22:50.146 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:50.146 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.146 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:50.146 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.146 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:50.146 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.146 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:50.146 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.146 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:50.146 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.146 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:50.146 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.146 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:22:50.146 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:22:50.146 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:50.146 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:22:50.146 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:50.146 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:22:50.146 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:50.146 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:50.146 rmmod 
nvme_tcp 00:22:50.146 rmmod nvme_fabrics 00:22:50.146 rmmod nvme_keyring 00:22:50.146 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:50.146 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:22:50.146 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:22:50.146 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 158169 ']' 00:22:50.146 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 158169 00:22:50.146 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 158169 ']' 00:22:50.146 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 158169 00:22:50.146 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:22:50.146 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:50.146 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 158169 00:22:50.146 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:50.146 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:50.146 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 158169' 00:22:50.146 killing process with pid 158169 00:22:50.146 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 158169 00:22:50.146 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 158169 00:22:50.406 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:50.406 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:50.406 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:50.406 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:22:50.406 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:22:50.406 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:50.406 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:22:50.406 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:50.406 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:50.406 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:50.406 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:50.406 10:00:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:52.943 10:01:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:52.943 00:22:52.943 real 0m10.003s 00:22:52.943 user 0m5.343s 00:22:52.943 sys 0m5.444s 00:22:52.943 10:01:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:52.943 10:01:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:52.943 ************************************ 00:22:52.943 END TEST nvmf_aer 00:22:52.943 ************************************ 00:22:52.943 10:01:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:52.943 10:01:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:52.943 10:01:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:52.943 10:01:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.943 ************************************ 00:22:52.943 START TEST nvmf_async_init 00:22:52.943 ************************************ 00:22:52.943 10:01:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:52.943 * Looking for test storage... 00:22:52.943 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:52.943 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:52.943 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:22:52.943 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:52.943 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:52.943 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:52.943 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:52.943 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:52.943 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:22:52.943 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:22:52.943 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:22:52.943 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:22:52.943 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:22:52.943 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:22:52.943 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:22:52.943 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:52.943 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:22:52.943 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:22:52.943 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:52.943 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:52.943 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:22:52.943 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:22:52.943 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:52.943 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:22:52.943 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:22:52.943 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:22:52.943 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:22:52.943 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:52.943 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:22:52.943 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:22:52.943 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:52.943 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:52.943 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:22:52.943 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:52.943 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:52.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.943 --rc genhtml_branch_coverage=1 00:22:52.943 --rc genhtml_function_coverage=1 00:22:52.943 --rc genhtml_legend=1 00:22:52.943 --rc geninfo_all_blocks=1 00:22:52.943 --rc geninfo_unexecuted_blocks=1 00:22:52.943 00:22:52.943 ' 00:22:52.943 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:52.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.943 --rc genhtml_branch_coverage=1 00:22:52.943 --rc genhtml_function_coverage=1 00:22:52.943 --rc genhtml_legend=1 00:22:52.943 --rc geninfo_all_blocks=1 00:22:52.943 --rc geninfo_unexecuted_blocks=1 00:22:52.943 00:22:52.943 ' 00:22:52.943 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:52.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.943 --rc genhtml_branch_coverage=1 00:22:52.943 --rc genhtml_function_coverage=1 00:22:52.943 --rc genhtml_legend=1 00:22:52.943 --rc geninfo_all_blocks=1 00:22:52.943 --rc geninfo_unexecuted_blocks=1 00:22:52.943 00:22:52.943 ' 00:22:52.943 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:52.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.943 --rc genhtml_branch_coverage=1 00:22:52.943 --rc genhtml_function_coverage=1 00:22:52.943 --rc genhtml_legend=1 00:22:52.943 --rc geninfo_all_blocks=1 00:22:52.943 --rc geninfo_unexecuted_blocks=1 00:22:52.943 00:22:52.943 ' 00:22:52.943 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:52.943 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:22:52.943 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:52.943 10:01:02 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:52.943 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:52.943 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:52.943 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:52.943 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:52.943 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:52.943 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:52.943 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:52.943 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:52.943 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:52.943 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:22:52.943 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:52.943 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:52.943 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:52.943 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:52.943 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:52.943 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:22:52.943 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:52.944 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:52.944 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:52.944 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.944 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.944 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.944 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:22:52.944 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.944 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:22:52.944 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:52.944 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:52.944 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:52.944 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:52.944 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:52.944 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:52.944 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:52.944 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:52.944 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:52.944 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:52.944 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:52.944 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:22:52.944 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:22:52.944 10:01:02 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:52.944 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:22:52.944 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:22:52.944 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=543342dee0484cd4965ba77bbf2bcd09 00:22:52.944 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:22:52.944 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:52.944 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:52.944 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:52.944 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:52.944 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:52.944 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:52.944 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:52.944 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:52.944 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:52.944 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:52.944 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:22:52.944 10:01:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:59.515 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:59.515 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:59.515 Found net devices under 0000:af:00.0: cvl_0_0 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:59.515 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:59.516 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:59.516 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:59.516 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:59.516 Found net devices under 0000:af:00.1: cvl_0_1 00:22:59.516 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:59.516 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:59.516 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:22:59.516 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:59.516 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:59.516 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:59.516 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:59.516 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:59.516 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:59.516 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:59.516 10:01:08 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:59.516 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:59.516 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:59.516 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:59.516 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:59.516 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:59.516 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:59.516 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:59.516 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:59.516 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:59.516 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:59.516 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:59.516 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:59.516 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:59.516 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:59.516 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:59.516 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:59.516 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:59.516 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:59.516 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:59.516 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.293 ms 00:22:59.516 00:22:59.516 --- 10.0.0.2 ping statistics --- 00:22:59.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.516 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:22:59.516 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:59.516 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:59.516 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:22:59.516 00:22:59.516 --- 10.0.0.1 ping statistics --- 00:22:59.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.516 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:22:59.516 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:59.516 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:22:59.516 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:59.516 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:59.516 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:59.516 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:59.516 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:59.516 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:59.516 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:59.516 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:59.516 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:59.516 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:59.516 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:59.516 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=162201 00:22:59.516 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 162201 00:22:59.516 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:59.516 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 162201 ']' 00:22:59.516 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:59.516 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:59.516 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:59.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:59.516 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:59.516 10:01:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:59.516 [2024-12-11 10:01:09.021109] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
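The target is now up inside the cvl_0_0_ns_spdk namespace, and the rpc_cmd lines that follow provision it for the async_init test. A minimal sketch of that sequence, assuming SPDK's scripts/rpc.py is on PATH and talking to the default /var/tmp/spdk.sock (rpc_cmd in the log is the autotest wrapper around it), with the NGUID and addresses this run uses:

# Start the target in the test namespace; waitforlisten is the autotest
# helper that polls until the RPC socket accepts connections.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
waitforlisten $! /var/tmp/spdk.sock

# Provision: TCP transport, a 1024 MiB null bdev with 512-byte blocks,
# a subsystem whose namespace carries a fixed NGUID (so the host-visible
# UUID can be checked), a TCP listener, and a host-side controller attach.
rpc.py nvmf_create_transport -t tcp -o
rpc.py bdev_null_create null0 1024 512
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 543342dee0484cd4965ba77bbf2bcd09
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0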
00:22:59.516 [2024-12-11 10:01:09.021158] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:59.776 [2024-12-11 10:01:09.105618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.776 [2024-12-11 10:01:09.144957] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:59.776 [2024-12-11 10:01:09.144992] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:59.776 [2024-12-11 10:01:09.144999] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:59.776 [2024-12-11 10:01:09.145005] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:59.776 [2024-12-11 10:01:09.145010] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:59.776 [2024-12-11 10:01:09.145572] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:59.776 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:59.776 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:22:59.776 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:59.776 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:59.776 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:59.776 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:59.776 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:59.776 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.776 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:59.776 [2024-12-11 10:01:09.277656] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:59.776 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.776 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:59.776 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.776 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:59.776 null0 00:22:59.776 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.776 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:59.776 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.776 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:59.776 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.776 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:59.776 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:59.776 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:59.776 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.776 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 543342dee0484cd4965ba77bbf2bcd09 00:22:59.776 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.776 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:59.776 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.776 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:59.776 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.776 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:59.776 [2024-12-11 10:01:09.321892] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:59.776 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.776 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:59.776 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.776 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:00.034 nvme0n1 00:23:00.034 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.034 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:00.034 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.034 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:00.034 [ 00:23:00.034 { 00:23:00.034 "name": "nvme0n1", 00:23:00.034 "aliases": [ 00:23:00.034 "543342de-e048-4cd4-965b-a77bbf2bcd09" 00:23:00.034 ], 00:23:00.034 "product_name": "NVMe disk", 00:23:00.034 "block_size": 512, 00:23:00.034 "num_blocks": 2097152, 00:23:00.034 "uuid": "543342de-e048-4cd4-965b-a77bbf2bcd09", 00:23:00.034 "numa_id": 1, 00:23:00.034 "assigned_rate_limits": { 00:23:00.034 "rw_ios_per_sec": 0, 00:23:00.034 "rw_mbytes_per_sec": 0, 00:23:00.034 "r_mbytes_per_sec": 0, 00:23:00.034 "w_mbytes_per_sec": 0 00:23:00.034 }, 00:23:00.034 "claimed": false, 00:23:00.034 "zoned": false, 00:23:00.034 "supported_io_types": { 00:23:00.034 "read": true, 00:23:00.034 "write": true, 00:23:00.034 "unmap": false, 00:23:00.034 "flush": true, 00:23:00.034 "reset": true, 00:23:00.034 "nvme_admin": true, 00:23:00.034 "nvme_io": true, 00:23:00.034 "nvme_io_md": false, 00:23:00.034 "write_zeroes": true, 00:23:00.034 "zcopy": false, 00:23:00.034 "get_zone_info": false, 00:23:00.034 "zone_management": false, 00:23:00.034 "zone_append": false, 00:23:00.034 "compare": true, 00:23:00.034 "compare_and_write": true, 00:23:00.034 "abort": true, 00:23:00.034 "seek_hole": false, 00:23:00.034 "seek_data": false, 00:23:00.034 "copy": true, 00:23:00.034 "nvme_iov_md": false 00:23:00.034 }, 00:23:00.034 
"memory_domains": [ 00:23:00.034 { 00:23:00.034 "dma_device_id": "system", 00:23:00.034 "dma_device_type": 1 00:23:00.034 } 00:23:00.034 ], 00:23:00.034 "driver_specific": { 00:23:00.034 "nvme": [ 00:23:00.034 { 00:23:00.034 "trid": { 00:23:00.034 "trtype": "TCP", 00:23:00.034 "adrfam": "IPv4", 00:23:00.034 "traddr": "10.0.0.2", 00:23:00.034 "trsvcid": "4420", 00:23:00.034 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:00.034 }, 00:23:00.034 "ctrlr_data": { 00:23:00.034 "cntlid": 1, 00:23:00.034 "vendor_id": "0x8086", 00:23:00.034 "model_number": "SPDK bdev Controller", 00:23:00.034 "serial_number": "00000000000000000000", 00:23:00.034 "firmware_revision": "25.01", 00:23:00.034 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:00.034 "oacs": { 00:23:00.034 "security": 0, 00:23:00.034 "format": 0, 00:23:00.034 "firmware": 0, 00:23:00.034 "ns_manage": 0 00:23:00.034 }, 00:23:00.034 "multi_ctrlr": true, 00:23:00.034 "ana_reporting": false 00:23:00.034 }, 00:23:00.034 "vs": { 00:23:00.034 "nvme_version": "1.3" 00:23:00.035 }, 00:23:00.035 "ns_data": { 00:23:00.035 "id": 1, 00:23:00.035 "can_share": true 00:23:00.035 } 00:23:00.035 } 00:23:00.035 ], 00:23:00.035 "mp_policy": "active_passive" 00:23:00.035 } 00:23:00.035 } 00:23:00.035 ] 00:23:00.035 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.035 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:00.035 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.035 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:00.035 [2024-12-11 10:01:09.587406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:00.035 [2024-12-11 10:01:09.587461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b18f0 (9): Bad file descriptor 00:23:00.294 [2024-12-11 10:01:09.719299] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:23:00.294 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.294 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:00.294 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.294 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:00.294 [ 00:23:00.294 { 00:23:00.294 "name": "nvme0n1", 00:23:00.294 "aliases": [ 00:23:00.294 "543342de-e048-4cd4-965b-a77bbf2bcd09" 00:23:00.294 ], 00:23:00.294 "product_name": "NVMe disk", 00:23:00.294 "block_size": 512, 00:23:00.294 "num_blocks": 2097152, 00:23:00.294 "uuid": "543342de-e048-4cd4-965b-a77bbf2bcd09", 00:23:00.294 "numa_id": 1, 00:23:00.294 "assigned_rate_limits": { 00:23:00.294 "rw_ios_per_sec": 0, 00:23:00.294 "rw_mbytes_per_sec": 0, 00:23:00.294 "r_mbytes_per_sec": 0, 00:23:00.294 "w_mbytes_per_sec": 0 00:23:00.294 }, 00:23:00.294 "claimed": false, 00:23:00.294 "zoned": false, 00:23:00.294 "supported_io_types": { 00:23:00.294 "read": true, 00:23:00.294 "write": true, 00:23:00.294 "unmap": false, 00:23:00.294 "flush": true, 00:23:00.294 "reset": true, 00:23:00.294 "nvme_admin": true, 00:23:00.294 "nvme_io": true, 00:23:00.294 "nvme_io_md": false, 00:23:00.294 "write_zeroes": true, 00:23:00.294 "zcopy": false, 00:23:00.294 "get_zone_info": false, 00:23:00.294 "zone_management": false, 00:23:00.294 "zone_append": false, 00:23:00.294 "compare": true, 00:23:00.294 "compare_and_write": true, 00:23:00.294 "abort": true, 00:23:00.294 "seek_hole": false, 00:23:00.294 "seek_data": false, 00:23:00.294 "copy": true, 00:23:00.294 "nvme_iov_md": false 00:23:00.294 }, 00:23:00.294 "memory_domains": [ 00:23:00.294 { 00:23:00.294 "dma_device_id": "system", 00:23:00.294 "dma_device_type": 1 00:23:00.294 } 00:23:00.294 ], 00:23:00.294 "driver_specific": { 00:23:00.294 "nvme": [ 00:23:00.294 { 00:23:00.294 "trid": { 00:23:00.294 "trtype": "TCP", 00:23:00.294 "adrfam": "IPv4", 00:23:00.294 "traddr": "10.0.0.2", 00:23:00.294 "trsvcid": "4420", 00:23:00.294 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:00.294 }, 00:23:00.294 "ctrlr_data": { 00:23:00.294 "cntlid": 2, 00:23:00.294 "vendor_id": "0x8086", 00:23:00.294 "model_number": "SPDK bdev Controller", 00:23:00.294 "serial_number": "00000000000000000000", 00:23:00.294 "firmware_revision": "25.01", 00:23:00.294 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:00.294 "oacs": { 00:23:00.294 "security": 0, 00:23:00.294 "format": 0, 00:23:00.294 "firmware": 0, 00:23:00.294 "ns_manage": 0 00:23:00.294 }, 00:23:00.294 "multi_ctrlr": true, 00:23:00.294 "ana_reporting": false 00:23:00.294 }, 00:23:00.294 "vs": { 00:23:00.294 "nvme_version": "1.3" 00:23:00.294 }, 00:23:00.294 "ns_data": { 00:23:00.294 "id": 1, 00:23:00.294 "can_share": true 00:23:00.294 } 00:23:00.294 } 00:23:00.294 ], 00:23:00.294 "mp_policy": "active_passive" 00:23:00.294 } 00:23:00.294 } 00:23:00.294 ] 00:23:00.294 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.294 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:00.294 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.294 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:00.294 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
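With the plaintext path verified, the test detaches the controller so the same subsystem can be re-attached through a TLS-secured listener; a condensed sketch of that TLS leg follows the next step.

# Drop the host-side controller; nvme0n1 disappears until the next attach.
rpc.py bdev_nvme_detach_controller nvme0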
00:23:00.294 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:00.294 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.R9zmwgHUl0 00:23:00.294 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:00.294 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.R9zmwgHUl0 00:23:00.294 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.R9zmwgHUl0 00:23:00.294 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.294 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:00.294 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.294 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:00.294 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.294 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:00.294 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.294 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:00.294 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.294 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:00.294 [2024-12-11 10:01:09.792016] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:00.294 [2024-12-11 10:01:09.799933] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:00.294 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.294 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:23:00.294 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.294 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:00.294 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.294 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:00.294 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.294 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:00.294 [2024-12-11 10:01:09.812111] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:00.554 nvme0n1 00:23:00.554 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.554 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:23:00.554 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.554 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:00.554 [ 00:23:00.554 { 00:23:00.554 "name": "nvme0n1", 00:23:00.554 "aliases": [ 00:23:00.554 "543342de-e048-4cd4-965b-a77bbf2bcd09" 00:23:00.554 ], 00:23:00.554 "product_name": "NVMe disk", 00:23:00.554 "block_size": 512, 00:23:00.554 "num_blocks": 2097152, 00:23:00.554 "uuid": "543342de-e048-4cd4-965b-a77bbf2bcd09", 00:23:00.554 "numa_id": 1, 00:23:00.554 "assigned_rate_limits": { 00:23:00.554 "rw_ios_per_sec": 0, 00:23:00.554 "rw_mbytes_per_sec": 0, 00:23:00.554 "r_mbytes_per_sec": 0, 00:23:00.554 "w_mbytes_per_sec": 0 00:23:00.554 }, 00:23:00.554 "claimed": false, 00:23:00.554 "zoned": false, 00:23:00.554 "supported_io_types": { 00:23:00.554 "read": true, 00:23:00.554 "write": true, 00:23:00.554 "unmap": false, 00:23:00.554 "flush": true, 00:23:00.554 "reset": true, 00:23:00.554 "nvme_admin": true, 00:23:00.554 "nvme_io": true, 00:23:00.554 "nvme_io_md": false, 00:23:00.554 "write_zeroes": true, 00:23:00.554 "zcopy": false, 00:23:00.554 "get_zone_info": false, 00:23:00.554 "zone_management": false, 00:23:00.554 "zone_append": false, 00:23:00.554 "compare": true, 00:23:00.554 "compare_and_write": true, 00:23:00.554 "abort": true, 00:23:00.554 "seek_hole": false, 00:23:00.554 "seek_data": false, 00:23:00.554 "copy": true, 00:23:00.554 "nvme_iov_md": false 00:23:00.554 }, 00:23:00.554 "memory_domains": [ 00:23:00.554 { 00:23:00.554 "dma_device_id": "system", 00:23:00.554 "dma_device_type": 1 00:23:00.554 } 00:23:00.554 ], 00:23:00.554 "driver_specific": { 00:23:00.554 "nvme": [ 00:23:00.554 { 00:23:00.554 "trid": { 00:23:00.554 "trtype": "TCP", 00:23:00.554 "adrfam": "IPv4", 00:23:00.554 "traddr": "10.0.0.2", 00:23:00.554 "trsvcid": "4421", 00:23:00.554 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:00.554 }, 00:23:00.554 "ctrlr_data": { 00:23:00.554 "cntlid": 3, 00:23:00.554 "vendor_id": "0x8086", 00:23:00.554 "model_number": "SPDK bdev Controller", 00:23:00.554 "serial_number": "00000000000000000000", 00:23:00.554 "firmware_revision": "25.01", 00:23:00.554 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:00.554 "oacs": { 00:23:00.554 "security": 0, 00:23:00.554 "format": 0, 00:23:00.554 "firmware": 0, 00:23:00.554 "ns_manage": 0 00:23:00.554 }, 00:23:00.554 "multi_ctrlr": true, 00:23:00.554 "ana_reporting": false 00:23:00.554 }, 00:23:00.554 "vs": { 00:23:00.554 "nvme_version": "1.3" 00:23:00.554 }, 00:23:00.554 "ns_data": { 00:23:00.554 "id": 1, 00:23:00.554 "can_share": true 00:23:00.554 } 00:23:00.554 } 00:23:00.554 ], 00:23:00.554 "mp_policy": "active_passive" 00:23:00.554 } 00:23:00.554 } 00:23:00.554 ] 00:23:00.554 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.554 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:00.554 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.554 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:00.554 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.554 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.R9zmwgHUl0 00:23:00.554 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
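The TLS leg just exercised, condensed into one place using the exact values from this run (the PSK interchange string is the test's sample material, not a secret, and /tmp/psk.key stands in for the mktemp-generated path):

# Stage the PSK with restrictive permissions and load it into the keyring.
echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > /tmp/psk.key
chmod 0600 /tmp/psk.key
rpc.py keyring_file_add_key key0 /tmp/psk.key

# Lock the subsystem to a known host and open a secured listener on 4421.
rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0

# Re-attach over the secured port; cntlid advances again (3 in this run).
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0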
00:23:00.554 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:23:00.554 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:00.554 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:23:00.554 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:00.554 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:23:00.554 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:00.554 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:00.554 rmmod nvme_tcp 00:23:00.554 rmmod nvme_fabrics 00:23:00.554 rmmod nvme_keyring 00:23:00.554 10:01:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:00.554 10:01:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:23:00.554 10:01:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:23:00.554 10:01:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 162201 ']' 00:23:00.554 10:01:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 162201 00:23:00.554 10:01:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 162201 ']' 00:23:00.554 10:01:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 162201 00:23:00.554 10:01:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:23:00.554 10:01:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:00.554 10:01:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 162201 00:23:00.554 10:01:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:00.554 10:01:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:00.554 10:01:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 162201' 00:23:00.554 killing process with pid 162201 00:23:00.554 10:01:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 162201 00:23:00.554 10:01:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 162201 00:23:00.814 10:01:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:00.814 10:01:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:00.814 10:01:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:00.814 10:01:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:23:00.814 10:01:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:23:00.814 10:01:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:00.814 10:01:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:23:00.814 10:01:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:00.814 10:01:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:00.814 10:01:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:00.814 
10:01:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:00.814 10:01:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:02.718 10:01:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:02.718 00:23:02.718 real 0m10.288s 00:23:02.718 user 0m3.328s 00:23:02.718 sys 0m5.363s 00:23:02.718 10:01:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:02.718 10:01:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:02.718 ************************************ 00:23:02.718 END TEST nvmf_async_init 00:23:02.718 ************************************ 00:23:02.976 10:01:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:02.976 10:01:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:02.976 10:01:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:02.976 10:01:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.976 ************************************ 00:23:02.976 START TEST dma 00:23:02.976 ************************************ 00:23:02.976 10:01:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:02.976 * Looking for test storage... 00:23:02.976 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:02.976 10:01:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:02.976 10:01:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:23:02.976 10:01:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:02.976 10:01:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:02.976 10:01:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:02.976 10:01:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:02.976 10:01:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:02.976 10:01:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:23:02.976 10:01:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:23:02.976 10:01:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:23:02.976 10:01:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:23:02.976 10:01:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:23:02.976 10:01:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:23:02.976 10:01:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:23:02.976 10:01:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:02.976 10:01:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:23:02.976 10:01:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:23:02.976 10:01:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:02.976 10:01:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:02.976 10:01:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:23:02.976 10:01:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:23:02.976 10:01:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:02.976 10:01:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:23:02.976 10:01:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:23:02.976 10:01:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:23:02.976 10:01:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:23:02.976 10:01:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:02.976 10:01:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:23:02.976 10:01:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:23:02.976 10:01:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:02.976 10:01:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:02.976 10:01:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:23:02.976 10:01:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:02.976 10:01:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:02.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.976 --rc genhtml_branch_coverage=1 00:23:02.976 --rc genhtml_function_coverage=1 00:23:02.976 --rc genhtml_legend=1 00:23:02.976 --rc geninfo_all_blocks=1 00:23:02.976 --rc geninfo_unexecuted_blocks=1 00:23:02.976 00:23:02.976 ' 00:23:02.976 10:01:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:02.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.976 --rc genhtml_branch_coverage=1 00:23:02.976 --rc genhtml_function_coverage=1 00:23:02.976 --rc genhtml_legend=1 00:23:02.976 --rc geninfo_all_blocks=1 00:23:02.976 --rc geninfo_unexecuted_blocks=1 00:23:02.976 00:23:02.976 ' 00:23:02.976 10:01:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:02.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.976 --rc genhtml_branch_coverage=1 00:23:02.976 --rc genhtml_function_coverage=1 00:23:02.976 --rc genhtml_legend=1 00:23:02.976 --rc geninfo_all_blocks=1 00:23:02.976 --rc geninfo_unexecuted_blocks=1 00:23:02.976 00:23:02.976 ' 00:23:02.976 10:01:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:02.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.976 --rc genhtml_branch_coverage=1 00:23:02.976 --rc genhtml_function_coverage=1 00:23:02.976 --rc genhtml_legend=1 00:23:02.976 --rc geninfo_all_blocks=1 00:23:02.976 --rc geninfo_unexecuted_blocks=1 00:23:02.976 00:23:02.976 ' 00:23:02.976 10:01:12 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:02.976 10:01:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:23:02.976 10:01:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:02.976 10:01:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:02.976 10:01:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:02.976 10:01:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:02.976 
10:01:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:02.976 10:01:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:02.976 10:01:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:02.976 10:01:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:02.976 10:01:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:03.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:23:03.235 00:23:03.235 real 0m0.216s 00:23:03.235 user 0m0.132s 00:23:03.235 sys 0m0.097s 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:03.235 ************************************ 00:23:03.235 END TEST dma 00:23:03.235 ************************************ 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.235 ************************************ 00:23:03.235 START TEST nvmf_identify 00:23:03.235 
************************************ 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:03.235 * Looking for test storage... 00:23:03.235 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:03.235 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:23:03.236 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:23:03.236 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:23:03.236 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:23:03.236 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:03.236 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:23:03.236 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:23:03.236 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:03.236 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:03.236 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:23:03.236 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:03.236 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:03.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:03.236 --rc genhtml_branch_coverage=1 00:23:03.236 --rc genhtml_function_coverage=1 00:23:03.236 --rc genhtml_legend=1 00:23:03.236 --rc geninfo_all_blocks=1 00:23:03.236 --rc geninfo_unexecuted_blocks=1 00:23:03.236 00:23:03.236 ' 00:23:03.496 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:03.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:03.496 --rc genhtml_branch_coverage=1 00:23:03.496 --rc genhtml_function_coverage=1 00:23:03.496 --rc genhtml_legend=1 00:23:03.496 --rc geninfo_all_blocks=1 00:23:03.496 --rc geninfo_unexecuted_blocks=1 00:23:03.496 00:23:03.496 ' 00:23:03.496 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:03.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:03.496 --rc genhtml_branch_coverage=1 00:23:03.496 --rc genhtml_function_coverage=1 00:23:03.496 --rc genhtml_legend=1 00:23:03.496 --rc geninfo_all_blocks=1 00:23:03.496 --rc geninfo_unexecuted_blocks=1 00:23:03.496 00:23:03.496 ' 00:23:03.496 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:03.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:03.496 --rc genhtml_branch_coverage=1 00:23:03.496 --rc genhtml_function_coverage=1 00:23:03.496 --rc genhtml_legend=1 00:23:03.496 --rc geninfo_all_blocks=1 00:23:03.496 --rc geninfo_unexecuted_blocks=1 00:23:03.496 00:23:03.496 ' 00:23:03.496 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:03.496 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:03.496 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:03.496 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:03.496 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:03.496 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:03.496 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:03.496 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:03.496 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:03.496 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:03.496 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:03.496 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:03.496 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:23:03.496 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:23:03.496 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:03.496 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:03.496 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:03.496 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:03.496 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:03.496 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:23:03.496 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:03.496 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:03.496 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:03.496 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.496 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.496 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.496 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:23:03.497 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.497 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:23:03.497 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:03.497 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:03.497 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:03.497 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:03.497 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:03.497 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:03.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:03.497 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:03.497 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:03.497 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:03.497 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:03.497 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:03.497 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:03.497 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:03.497 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:03.497 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:03.497 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:03.497 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:03.497 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:03.497 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:03.497 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:03.497 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:03.497 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:03.497 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:23:03.497 10:01:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:10.063 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:10.063 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:23:10.063 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:10.063 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:10.063 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:10.063 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:10.063 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:10.063 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:23:10.063 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:10.063 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:23:10.063 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:23:10.063 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:23:10.063 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:23:10.063 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:23:10.063 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:23:10.063 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:10.064 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:10.064 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
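The xtrace surrounding this point is nvmf/common.sh discovering usable NICs: it seeds the e810/x722/mlx arrays with known PCI device IDs, matches them against the host's PCI bus, and resolves each hit to its kernel net device through sysfs. A minimal standalone sketch of that discovery pattern, assuming the 0x8086 vendor and 0x159b (E810) IDs seen in this run — illustrative only, not the actual common.sh implementation:

shopt -s nullglob
intel=0x8086            # vendor ID matched in the trace
want=0x159b             # Intel E810 device ID reported in this run
net_devs=()
for pci in /sys/bus/pci/devices/*; do
    [[ $(cat "$pci/vendor") == "$intel" && $(cat "$pci/device") == "$want" ]] || continue
    # each matching PCI function exposes its netdev name(s) under .../net/
    for dev in "$pci"/net/*; do
        net_devs+=("${dev##*/}")    # e.g. cvl_0_0, cvl_0_1
    done
done
echo "Found net devices: ${net_devs[*]}"

On this node that pattern yields the two cvl_0_* ports the trace reports just below.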
00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:10.064 Found net devices under 0000:af:00.0: cvl_0_0 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:10.064 Found net devices under 0000:af:00.1: cvl_0_1 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:10.064 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:10.064 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:23:10.064 00:23:10.064 --- 10.0.0.2 ping statistics --- 00:23:10.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:10.064 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:10.064 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:10.064 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:23:10.064 00:23:10.064 --- 10.0.0.1 ping statistics --- 00:23:10.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:10.064 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=166393 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 166393 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 166393 ']' 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:10.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:10.064 10:01:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:10.324 [2024-12-11 10:01:19.655666] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
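At this point the namespace topology built above has been verified by the two pings, and the target app is starting inside it (note the ip netns exec prefix on nvmf_tgt). Condensed from the ip(8)/iptables(8) calls in the trace (run as root, as the harness does; interface and namespace names are specific to this run), nvmf_tcp_init amounts to:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                     # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target ns -> initiator

Isolating one port of the same physical NIC pair in a namespace forces initiator (10.0.0.1) and target (10.0.0.2) traffic over a real link on a single host; the harness's ipts wrapper additionally tags the iptables rule with an SPDK_NVMF comment for later cleanup.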
00:23:10.324 [2024-12-11 10:01:19.655711] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:10.324 [2024-12-11 10:01:19.741701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:10.324 [2024-12-11 10:01:19.784433] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:10.324 [2024-12-11 10:01:19.784473] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:10.324 [2024-12-11 10:01:19.784480] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:10.324 [2024-12-11 10:01:19.784486] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:10.324 [2024-12-11 10:01:19.784491] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:10.324 [2024-12-11 10:01:19.785886] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:10.324 [2024-12-11 10:01:19.786000] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:10.324 [2024-12-11 10:01:19.786106] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:10.324 [2024-12-11 10:01:19.786107] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:23:11.263 10:01:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:11.263 10:01:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:23:11.263 10:01:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:11.263 10:01:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.263 10:01:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:11.263 [2024-12-11 10:01:20.505219] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:11.263 10:01:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.263 10:01:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:11.263 10:01:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:11.263 10:01:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:11.263 10:01:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:11.263 10:01:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.263 10:01:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:11.263 Malloc0 00:23:11.263 10:01:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.263 10:01:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:11.263 10:01:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.263 10:01:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:11.263 10:01:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.263 10:01:20 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:11.263 10:01:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.263 10:01:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:11.263 10:01:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.263 10:01:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:11.263 10:01:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.263 10:01:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:11.263 [2024-12-11 10:01:20.614991] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:11.263 10:01:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.263 10:01:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:11.263 10:01:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.263 10:01:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:11.263 10:01:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.263 10:01:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:11.263 10:01:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.263 10:01:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:11.263 [ 00:23:11.263 { 00:23:11.263 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:11.263 "subtype": "Discovery", 00:23:11.263 "listen_addresses": [ 00:23:11.263 { 00:23:11.263 "trtype": "TCP", 00:23:11.263 "adrfam": "IPv4", 00:23:11.263 "traddr": "10.0.0.2", 00:23:11.263 "trsvcid": "4420" 00:23:11.263 } 00:23:11.263 ], 00:23:11.263 "allow_any_host": true, 00:23:11.263 "hosts": [] 00:23:11.263 }, 00:23:11.263 { 00:23:11.263 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:11.263 "subtype": "NVMe", 00:23:11.263 "listen_addresses": [ 00:23:11.263 { 00:23:11.263 "trtype": "TCP", 00:23:11.263 "adrfam": "IPv4", 00:23:11.263 "traddr": "10.0.0.2", 00:23:11.263 "trsvcid": "4420" 00:23:11.263 } 00:23:11.263 ], 00:23:11.263 "allow_any_host": true, 00:23:11.263 "hosts": [], 00:23:11.263 "serial_number": "SPDK00000000000001", 00:23:11.263 "model_number": "SPDK bdev Controller", 00:23:11.263 "max_namespaces": 32, 00:23:11.263 "min_cntlid": 1, 00:23:11.263 "max_cntlid": 65519, 00:23:11.263 "namespaces": [ 00:23:11.263 { 00:23:11.263 "nsid": 1, 00:23:11.263 "bdev_name": "Malloc0", 00:23:11.263 "name": "Malloc0", 00:23:11.263 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:11.263 "eui64": "ABCDEF0123456789", 00:23:11.263 "uuid": "efdc8e19-2978-4aea-ba80-20b295bae548" 00:23:11.263 } 00:23:11.263 ] 00:23:11.263 } 00:23:11.263 ] 00:23:11.263 10:01:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.263 10:01:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:11.263 [2024-12-11 10:01:20.672790] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:23:11.263 [2024-12-11 10:01:20.672840] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166532 ] 00:23:11.263 [2024-12-11 10:01:20.714260] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:23:11.263 [2024-12-11 10:01:20.714302] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:11.264 [2024-12-11 10:01:20.714307] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:11.264 [2024-12-11 10:01:20.714318] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:11.264 [2024-12-11 10:01:20.714327] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:11.264 [2024-12-11 10:01:20.718450] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:23:11.264 [2024-12-11 10:01:20.718485] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x232b690 0 00:23:11.264 [2024-12-11 10:01:20.725229] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:11.264 [2024-12-11 10:01:20.725244] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:11.264 [2024-12-11 10:01:20.725249] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:11.264 [2024-12-11 10:01:20.725252] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:11.264 [2024-12-11 10:01:20.725279] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.264 [2024-12-11 10:01:20.725285] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.264 [2024-12-11 10:01:20.725288] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x232b690) 00:23:11.264 [2024-12-11 10:01:20.725301] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:11.264 [2024-12-11 10:01:20.725317] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238d100, cid 0, qid 0 00:23:11.264 [2024-12-11 10:01:20.732225] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.264 [2024-12-11 10:01:20.732235] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.264 [2024-12-11 10:01:20.732239] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.264 [2024-12-11 10:01:20.732243] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238d100) on tqpair=0x232b690 00:23:11.264 [2024-12-11 10:01:20.732258] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:11.264 [2024-12-11 10:01:20.732265] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:23:11.264 [2024-12-11 10:01:20.732270] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:23:11.264 [2024-12-11 10:01:20.732280] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.264 [2024-12-11 10:01:20.732283] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.264 [2024-12-11 10:01:20.732287] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x232b690) 00:23:11.264 [2024-12-11 10:01:20.732294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.264 [2024-12-11 10:01:20.732306] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238d100, cid 0, qid 0 00:23:11.264 [2024-12-11 10:01:20.732520] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.264 [2024-12-11 10:01:20.732527] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.264 [2024-12-11 10:01:20.732530] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.264 [2024-12-11 10:01:20.732533] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238d100) on tqpair=0x232b690 00:23:11.264 [2024-12-11 10:01:20.732538] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:23:11.264 [2024-12-11 10:01:20.732544] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:23:11.264 [2024-12-11 10:01:20.732550] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.264 [2024-12-11 10:01:20.732554] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.264 [2024-12-11 10:01:20.732557] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x232b690) 00:23:11.264 [2024-12-11 10:01:20.732562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.264 [2024-12-11 10:01:20.732572] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238d100, cid 0, qid 0 00:23:11.264 [2024-12-11 10:01:20.732667] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.264 [2024-12-11 10:01:20.732672] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.264 [2024-12-11 10:01:20.732675] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.264 [2024-12-11 10:01:20.732679] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238d100) on tqpair=0x232b690 00:23:11.264 [2024-12-11 10:01:20.732683] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:23:11.264 [2024-12-11 10:01:20.732690] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:23:11.264 [2024-12-11 10:01:20.732696] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.264 [2024-12-11 10:01:20.732700] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.264 [2024-12-11 10:01:20.732705] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x232b690) 00:23:11.264 [2024-12-11 10:01:20.732711] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.264 [2024-12-11 10:01:20.732720] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238d100, cid 0, qid 0 
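The DEBUG entries around here show spdk_nvme_identify walking the fabric init state machine on the admin queue: FABRIC CONNECT, property GETs for VS/CAP/CC, disable and wait for CSTS.RDY = 0, write CC.EN = 1, poll CSTS.RDY = 1, then IDENTIFY. The subsystem it is interrogating was assembled a few entries earlier with rpc_cmd, which effectively wraps scripts/rpc.py against the app's RPC socket (/var/tmp/spdk.sock, the default that waitforlisten polls above). A sketch of the same configuration as plain rpc.py calls, with flags copied from the trace:

RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC nvmf_create_transport -t tcp -o -u 8192                 # TCP transport, 8192 B IO unit
$RPC bdev_malloc_create 64 512 -b Malloc0                    # 64 MiB RAM bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_get_subsystems                                     # prints the JSON shown above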
00:23:11.264 [2024-12-11 10:01:20.732780] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.264 [2024-12-11 10:01:20.732786] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.264 [2024-12-11 10:01:20.732789] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.264 [2024-12-11 10:01:20.732792] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238d100) on tqpair=0x232b690 00:23:11.264 [2024-12-11 10:01:20.732799] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:11.264 [2024-12-11 10:01:20.732807] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.264 [2024-12-11 10:01:20.732810] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.264 [2024-12-11 10:01:20.732814] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x232b690) 00:23:11.264 [2024-12-11 10:01:20.732819] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.264 [2024-12-11 10:01:20.732828] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238d100, cid 0, qid 0 00:23:11.264 [2024-12-11 10:01:20.732918] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.264 [2024-12-11 10:01:20.732924] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.264 [2024-12-11 10:01:20.732927] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.264 [2024-12-11 10:01:20.732930] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238d100) on tqpair=0x232b690 00:23:11.264 [2024-12-11 10:01:20.732934] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:23:11.264 [2024-12-11 10:01:20.732938] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:23:11.264 [2024-12-11 10:01:20.732945] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:11.264 [2024-12-11 10:01:20.733052] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:23:11.264 [2024-12-11 10:01:20.733057] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:11.264 [2024-12-11 10:01:20.733064] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.264 [2024-12-11 10:01:20.733068] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.264 [2024-12-11 10:01:20.733070] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x232b690) 00:23:11.264 [2024-12-11 10:01:20.733076] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.264 [2024-12-11 10:01:20.733086] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238d100, cid 0, qid 0 00:23:11.264 [2024-12-11 10:01:20.733151] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.264 [2024-12-11 10:01:20.733157] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.264 [2024-12-11 10:01:20.733160] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.264 [2024-12-11 10:01:20.733163] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238d100) on tqpair=0x232b690 00:23:11.264 [2024-12-11 10:01:20.733167] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:11.264 [2024-12-11 10:01:20.733174] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.264 [2024-12-11 10:01:20.733178] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.264 [2024-12-11 10:01:20.733181] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x232b690) 00:23:11.264 [2024-12-11 10:01:20.733187] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.264 [2024-12-11 10:01:20.733196] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238d100, cid 0, qid 0 00:23:11.264 [2024-12-11 10:01:20.733302] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.264 [2024-12-11 10:01:20.733308] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.264 [2024-12-11 10:01:20.733313] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.264 [2024-12-11 10:01:20.733317] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238d100) on tqpair=0x232b690 00:23:11.264 [2024-12-11 10:01:20.733320] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:11.264 [2024-12-11 10:01:20.733325] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:23:11.264 [2024-12-11 10:01:20.733331] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:23:11.264 [2024-12-11 10:01:20.733338] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:23:11.264 [2024-12-11 10:01:20.733345] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.264 [2024-12-11 10:01:20.733349] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x232b690) 00:23:11.264 [2024-12-11 10:01:20.733354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.264 [2024-12-11 10:01:20.733365] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238d100, cid 0, qid 0 00:23:11.264 [2024-12-11 10:01:20.733449] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:11.264 [2024-12-11 10:01:20.733455] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:11.264 [2024-12-11 10:01:20.733458] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:11.264 [2024-12-11 10:01:20.733461] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x232b690): datao=0, datal=4096, cccid=0 00:23:11.264 [2024-12-11 10:01:20.733465] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x238d100) on tqpair(0x232b690): expected_datao=0, payload_size=4096 00:23:11.265 [2024-12-11 10:01:20.733469] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.265 [2024-12-11 10:01:20.733487] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:11.265 [2024-12-11 10:01:20.733492] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:11.265 [2024-12-11 10:01:20.733553] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.265 [2024-12-11 10:01:20.733558] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.265 [2024-12-11 10:01:20.733562] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.265 [2024-12-11 10:01:20.733566] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238d100) on tqpair=0x232b690 00:23:11.265 [2024-12-11 10:01:20.733572] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:23:11.265 [2024-12-11 10:01:20.733577] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:23:11.265 [2024-12-11 10:01:20.733581] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:23:11.265 [2024-12-11 10:01:20.733585] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:23:11.265 [2024-12-11 10:01:20.733589] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:23:11.265 [2024-12-11 10:01:20.733594] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:23:11.265 [2024-12-11 10:01:20.733603] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:23:11.265 [2024-12-11 10:01:20.733610] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.265 [2024-12-11 10:01:20.733614] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.265 [2024-12-11 10:01:20.733617] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x232b690) 00:23:11.265 [2024-12-11 10:01:20.733627] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:11.265 [2024-12-11 10:01:20.733637] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238d100, cid 0, qid 0 00:23:11.265 [2024-12-11 10:01:20.733703] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.265 [2024-12-11 10:01:20.733709] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.265 [2024-12-11 10:01:20.733712] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.265 [2024-12-11 10:01:20.733715] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238d100) on tqpair=0x232b690 00:23:11.265 [2024-12-11 10:01:20.733722] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.265 [2024-12-11 10:01:20.733725] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.265 [2024-12-11 10:01:20.733728] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x232b690) 00:23:11.265 
[2024-12-11 10:01:20.733733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.265 [2024-12-11 10:01:20.733738] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.265 [2024-12-11 10:01:20.733742] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.265 [2024-12-11 10:01:20.733744] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x232b690) 00:23:11.265 [2024-12-11 10:01:20.733749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.265 [2024-12-11 10:01:20.733754] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.265 [2024-12-11 10:01:20.733757] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.265 [2024-12-11 10:01:20.733760] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x232b690) 00:23:11.265 [2024-12-11 10:01:20.733765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.265 [2024-12-11 10:01:20.733770] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.265 [2024-12-11 10:01:20.733773] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.265 [2024-12-11 10:01:20.733776] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x232b690) 00:23:11.265 [2024-12-11 10:01:20.733781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.265 [2024-12-11 10:01:20.733785] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:11.265 [2024-12-11 10:01:20.733796] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:11.265 [2024-12-11 10:01:20.733801] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.265 [2024-12-11 10:01:20.733804] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x232b690) 00:23:11.265 [2024-12-11 10:01:20.733810] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.265 [2024-12-11 10:01:20.733821] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238d100, cid 0, qid 0 00:23:11.265 [2024-12-11 10:01:20.733825] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238d280, cid 1, qid 0 00:23:11.265 [2024-12-11 10:01:20.733829] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238d400, cid 2, qid 0 00:23:11.265 [2024-12-11 10:01:20.733834] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238d580, cid 3, qid 0 00:23:11.265 [2024-12-11 10:01:20.733838] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238d700, cid 4, qid 0 00:23:11.265 [2024-12-11 10:01:20.733958] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.265 [2024-12-11 10:01:20.733966] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.265 [2024-12-11 10:01:20.733969] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:23:11.265 [2024-12-11 10:01:20.733973] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238d700) on tqpair=0x232b690 00:23:11.265 [2024-12-11 10:01:20.733977] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:23:11.265 [2024-12-11 10:01:20.733981] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:23:11.265 [2024-12-11 10:01:20.733990] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.265 [2024-12-11 10:01:20.733993] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x232b690) 00:23:11.265 [2024-12-11 10:01:20.733999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.265 [2024-12-11 10:01:20.734008] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238d700, cid 4, qid 0 00:23:11.265 [2024-12-11 10:01:20.734084] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:11.265 [2024-12-11 10:01:20.734091] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:11.265 [2024-12-11 10:01:20.734093] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:11.265 [2024-12-11 10:01:20.734096] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x232b690): datao=0, datal=4096, cccid=4 00:23:11.265 [2024-12-11 10:01:20.734100] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x238d700) on tqpair(0x232b690): expected_datao=0, payload_size=4096 00:23:11.265 [2024-12-11 10:01:20.734104] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.265 [2024-12-11 10:01:20.734110] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:11.265 [2024-12-11 10:01:20.734113] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:11.265 [2024-12-11 10:01:20.734158] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.265 [2024-12-11 10:01:20.734163] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.265 [2024-12-11 10:01:20.734166] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.265 [2024-12-11 10:01:20.734170] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238d700) on tqpair=0x232b690 00:23:11.265 [2024-12-11 10:01:20.734179] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:23:11.265 [2024-12-11 10:01:20.734200] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.265 [2024-12-11 10:01:20.734204] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x232b690) 00:23:11.265 [2024-12-11 10:01:20.734209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.265 [2024-12-11 10:01:20.734215] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.265 [2024-12-11 10:01:20.734223] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.265 [2024-12-11 10:01:20.734226] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x232b690) 00:23:11.265 [2024-12-11 10:01:20.734231] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.265 [2024-12-11 10:01:20.734245] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238d700, cid 4, qid 0 00:23:11.265 [2024-12-11 10:01:20.734249] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238d880, cid 5, qid 0 00:23:11.265 [2024-12-11 10:01:20.734366] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:11.265 [2024-12-11 10:01:20.734373] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:11.265 [2024-12-11 10:01:20.734375] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:11.265 [2024-12-11 10:01:20.734378] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x232b690): datao=0, datal=1024, cccid=4 00:23:11.265 [2024-12-11 10:01:20.734385] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x238d700) on tqpair(0x232b690): expected_datao=0, payload_size=1024 00:23:11.265 [2024-12-11 10:01:20.734388] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.265 [2024-12-11 10:01:20.734394] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:11.265 [2024-12-11 10:01:20.734397] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:11.265 [2024-12-11 10:01:20.734402] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.265 [2024-12-11 10:01:20.734406] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.265 [2024-12-11 10:01:20.734409] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.265 [2024-12-11 10:01:20.734413] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238d880) on tqpair=0x232b690 00:23:11.265 [2024-12-11 10:01:20.776394] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.265 [2024-12-11 10:01:20.776405] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.265 [2024-12-11 10:01:20.776409] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.265 [2024-12-11 10:01:20.776413] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238d700) on tqpair=0x232b690 00:23:11.265 [2024-12-11 10:01:20.776425] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.265 [2024-12-11 10:01:20.776428] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x232b690) 00:23:11.265 [2024-12-11 10:01:20.776436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.265 [2024-12-11 10:01:20.776451] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238d700, cid 4, qid 0 00:23:11.265 [2024-12-11 10:01:20.776523] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:11.265 [2024-12-11 10:01:20.776528] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:11.265 [2024-12-11 10:01:20.776531] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:11.266 [2024-12-11 10:01:20.776535] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x232b690): datao=0, datal=3072, cccid=4 00:23:11.266 [2024-12-11 10:01:20.776539] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x238d700) on tqpair(0x232b690): expected_datao=0, payload_size=3072 00:23:11.266 [2024-12-11 10:01:20.776542] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.266 [2024-12-11 10:01:20.776567] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:11.266 [2024-12-11 10:01:20.776571] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:11.266 [2024-12-11 10:01:20.776648] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.266 [2024-12-11 10:01:20.776653] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.266 [2024-12-11 10:01:20.776656] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.266 [2024-12-11 10:01:20.776660] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238d700) on tqpair=0x232b690 00:23:11.266 [2024-12-11 10:01:20.776667] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.266 [2024-12-11 10:01:20.776671] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x232b690) 00:23:11.266 [2024-12-11 10:01:20.776676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.266 [2024-12-11 10:01:20.776691] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238d700, cid 4, qid 0 00:23:11.266 [2024-12-11 10:01:20.776760] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:11.266 [2024-12-11 10:01:20.776766] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:11.266 [2024-12-11 10:01:20.776769] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:11.266 [2024-12-11 10:01:20.776772] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x232b690): datao=0, datal=8, cccid=4 00:23:11.266 [2024-12-11 10:01:20.776779] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x238d700) on tqpair(0x232b690): expected_datao=0, payload_size=8 00:23:11.266 [2024-12-11 10:01:20.776783] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.266 [2024-12-11 10:01:20.776788] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:11.266 [2024-12-11 10:01:20.776791] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:11.266 [2024-12-11 10:01:20.818412] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.266 [2024-12-11 10:01:20.818424] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.266 [2024-12-11 10:01:20.818427] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.266 [2024-12-11 10:01:20.818431] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238d700) on tqpair=0x232b690 00:23:11.266 ===================================================== 00:23:11.266 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:11.266 ===================================================== 00:23:11.266 Controller Capabilities/Features 00:23:11.266 ================================ 00:23:11.266 Vendor ID: 0000 00:23:11.266 Subsystem Vendor ID: 0000 00:23:11.266 Serial Number: .................... 00:23:11.266 Model Number: ........................................ 
00:23:11.266 Firmware Version: 25.01 00:23:11.266 Recommended Arb Burst: 0 00:23:11.266 IEEE OUI Identifier: 00 00 00 00:23:11.266 Multi-path I/O 00:23:11.266 May have multiple subsystem ports: No 00:23:11.266 May have multiple controllers: No 00:23:11.266 Associated with SR-IOV VF: No 00:23:11.266 Max Data Transfer Size: 131072 00:23:11.266 Max Number of Namespaces: 0 00:23:11.266 Max Number of I/O Queues: 1024 00:23:11.266 NVMe Specification Version (VS): 1.3 00:23:11.266 NVMe Specification Version (Identify): 1.3 00:23:11.266 Maximum Queue Entries: 128 00:23:11.266 Contiguous Queues Required: Yes 00:23:11.266 Arbitration Mechanisms Supported 00:23:11.266 Weighted Round Robin: Not Supported 00:23:11.266 Vendor Specific: Not Supported 00:23:11.266 Reset Timeout: 15000 ms 00:23:11.266 Doorbell Stride: 4 bytes 00:23:11.266 NVM Subsystem Reset: Not Supported 00:23:11.266 Command Sets Supported 00:23:11.266 NVM Command Set: Supported 00:23:11.266 Boot Partition: Not Supported 00:23:11.266 Memory Page Size Minimum: 4096 bytes 00:23:11.266 Memory Page Size Maximum: 4096 bytes 00:23:11.266 Persistent Memory Region: Not Supported 00:23:11.266 Optional Asynchronous Events Supported 00:23:11.266 Namespace Attribute Notices: Not Supported 00:23:11.266 Firmware Activation Notices: Not Supported 00:23:11.266 ANA Change Notices: Not Supported 00:23:11.266 PLE Aggregate Log Change Notices: Not Supported 00:23:11.266 LBA Status Info Alert Notices: Not Supported 00:23:11.266 EGE Aggregate Log Change Notices: Not Supported 00:23:11.266 Normal NVM Subsystem Shutdown event: Not Supported 00:23:11.266 Zone Descriptor Change Notices: Not Supported 00:23:11.266 Discovery Log Change Notices: Supported 00:23:11.266 Controller Attributes 00:23:11.266 128-bit Host Identifier: Not Supported 00:23:11.266 Non-Operational Permissive Mode: Not Supported 00:23:11.266 NVM Sets: Not Supported 00:23:11.266 Read Recovery Levels: Not Supported 00:23:11.266 Endurance Groups: Not Supported 00:23:11.266 Predictable Latency Mode: Not Supported 00:23:11.266 Traffic Based Keep ALive: Not Supported 00:23:11.266 Namespace Granularity: Not Supported 00:23:11.266 SQ Associations: Not Supported 00:23:11.266 UUID List: Not Supported 00:23:11.266 Multi-Domain Subsystem: Not Supported 00:23:11.266 Fixed Capacity Management: Not Supported 00:23:11.266 Variable Capacity Management: Not Supported 00:23:11.266 Delete Endurance Group: Not Supported 00:23:11.266 Delete NVM Set: Not Supported 00:23:11.266 Extended LBA Formats Supported: Not Supported 00:23:11.266 Flexible Data Placement Supported: Not Supported 00:23:11.266 00:23:11.266 Controller Memory Buffer Support 00:23:11.266 ================================ 00:23:11.266 Supported: No 00:23:11.266 00:23:11.266 Persistent Memory Region Support 00:23:11.266 ================================ 00:23:11.266 Supported: No 00:23:11.266 00:23:11.266 Admin Command Set Attributes 00:23:11.266 ============================ 00:23:11.266 Security Send/Receive: Not Supported 00:23:11.266 Format NVM: Not Supported 00:23:11.266 Firmware Activate/Download: Not Supported 00:23:11.266 Namespace Management: Not Supported 00:23:11.266 Device Self-Test: Not Supported 00:23:11.266 Directives: Not Supported 00:23:11.266 NVMe-MI: Not Supported 00:23:11.266 Virtualization Management: Not Supported 00:23:11.266 Doorbell Buffer Config: Not Supported 00:23:11.266 Get LBA Status Capability: Not Supported 00:23:11.266 Command & Feature Lockdown Capability: Not Supported 00:23:11.266 Abort Command Limit: 1 00:23:11.266 Async 
Event Request Limit: 4 00:23:11.266 Number of Firmware Slots: N/A 00:23:11.266 Firmware Slot 1 Read-Only: N/A 00:23:11.266 Firmware Activation Without Reset: N/A 00:23:11.266 Multiple Update Detection Support: N/A 00:23:11.266 Firmware Update Granularity: No Information Provided 00:23:11.266 Per-Namespace SMART Log: No 00:23:11.266 Asymmetric Namespace Access Log Page: Not Supported 00:23:11.266 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:11.266 Command Effects Log Page: Not Supported 00:23:11.266 Get Log Page Extended Data: Supported 00:23:11.266 Telemetry Log Pages: Not Supported 00:23:11.266 Persistent Event Log Pages: Not Supported 00:23:11.266 Supported Log Pages Log Page: May Support 00:23:11.266 Commands Supported & Effects Log Page: Not Supported 00:23:11.266 Feature Identifiers & Effects Log Page:May Support 00:23:11.266 NVMe-MI Commands & Effects Log Page: May Support 00:23:11.266 Data Area 4 for Telemetry Log: Not Supported 00:23:11.266 Error Log Page Entries Supported: 128 00:23:11.266 Keep Alive: Not Supported 00:23:11.266 00:23:11.266 NVM Command Set Attributes 00:23:11.266 ========================== 00:23:11.266 Submission Queue Entry Size 00:23:11.266 Max: 1 00:23:11.266 Min: 1 00:23:11.266 Completion Queue Entry Size 00:23:11.266 Max: 1 00:23:11.266 Min: 1 00:23:11.266 Number of Namespaces: 0 00:23:11.266 Compare Command: Not Supported 00:23:11.266 Write Uncorrectable Command: Not Supported 00:23:11.266 Dataset Management Command: Not Supported 00:23:11.266 Write Zeroes Command: Not Supported 00:23:11.266 Set Features Save Field: Not Supported 00:23:11.266 Reservations: Not Supported 00:23:11.266 Timestamp: Not Supported 00:23:11.266 Copy: Not Supported 00:23:11.266 Volatile Write Cache: Not Present 00:23:11.266 Atomic Write Unit (Normal): 1 00:23:11.266 Atomic Write Unit (PFail): 1 00:23:11.266 Atomic Compare & Write Unit: 1 00:23:11.266 Fused Compare & Write: Supported 00:23:11.266 Scatter-Gather List 00:23:11.266 SGL Command Set: Supported 00:23:11.266 SGL Keyed: Supported 00:23:11.266 SGL Bit Bucket Descriptor: Not Supported 00:23:11.266 SGL Metadata Pointer: Not Supported 00:23:11.266 Oversized SGL: Not Supported 00:23:11.266 SGL Metadata Address: Not Supported 00:23:11.266 SGL Offset: Supported 00:23:11.266 Transport SGL Data Block: Not Supported 00:23:11.266 Replay Protected Memory Block: Not Supported 00:23:11.266 00:23:11.266 Firmware Slot Information 00:23:11.266 ========================= 00:23:11.266 Active slot: 0 00:23:11.266 00:23:11.266 00:23:11.266 Error Log 00:23:11.266 ========= 00:23:11.266 00:23:11.266 Active Namespaces 00:23:11.266 ================= 00:23:11.267 Discovery Log Page 00:23:11.267 ================== 00:23:11.267 Generation Counter: 2 00:23:11.267 Number of Records: 2 00:23:11.267 Record Format: 0 00:23:11.267 00:23:11.267 Discovery Log Entry 0 00:23:11.267 ---------------------- 00:23:11.267 Transport Type: 3 (TCP) 00:23:11.267 Address Family: 1 (IPv4) 00:23:11.267 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:11.267 Entry Flags: 00:23:11.267 Duplicate Returned Information: 1 00:23:11.267 Explicit Persistent Connection Support for Discovery: 1 00:23:11.267 Transport Requirements: 00:23:11.267 Secure Channel: Not Required 00:23:11.267 Port ID: 0 (0x0000) 00:23:11.267 Controller ID: 65535 (0xffff) 00:23:11.267 Admin Max SQ Size: 128 00:23:11.267 Transport Service Identifier: 4420 00:23:11.267 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:11.267 Transport Address: 10.0.0.2 00:23:11.267 
Discovery Log Entry 1 00:23:11.267 ---------------------- 00:23:11.267 Transport Type: 3 (TCP) 00:23:11.267 Address Family: 1 (IPv4) 00:23:11.267 Subsystem Type: 2 (NVM Subsystem) 00:23:11.267 Entry Flags: 00:23:11.267 Duplicate Returned Information: 0 00:23:11.267 Explicit Persistent Connection Support for Discovery: 0 00:23:11.267 Transport Requirements: 00:23:11.267 Secure Channel: Not Required 00:23:11.267 Port ID: 0 (0x0000) 00:23:11.267 Controller ID: 65535 (0xffff) 00:23:11.267 Admin Max SQ Size: 128 00:23:11.267 Transport Service Identifier: 4420 00:23:11.267 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:23:11.267 Transport Address: 10.0.0.2 [2024-12-11 10:01:20.818508] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:23:11.267 [2024-12-11 10:01:20.818519] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238d100) on tqpair=0x232b690 00:23:11.267 [2024-12-11 10:01:20.818524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.267 [2024-12-11 10:01:20.818529] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238d280) on tqpair=0x232b690 00:23:11.267 [2024-12-11 10:01:20.818533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.267 [2024-12-11 10:01:20.818537] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238d400) on tqpair=0x232b690 00:23:11.267 [2024-12-11 10:01:20.818541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.267 [2024-12-11 10:01:20.818546] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238d580) on tqpair=0x232b690 00:23:11.267 [2024-12-11 10:01:20.818550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.267 [2024-12-11 10:01:20.818557] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.267 [2024-12-11 10:01:20.818560] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.267 [2024-12-11 10:01:20.818564] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x232b690) 00:23:11.267 [2024-12-11 10:01:20.818570] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.267 [2024-12-11 10:01:20.818584] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238d580, cid 3, qid 0 00:23:11.267 [2024-12-11 10:01:20.818718] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.267 [2024-12-11 10:01:20.818723] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.267 [2024-12-11 10:01:20.818727] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.267 [2024-12-11 10:01:20.818730] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238d580) on tqpair=0x232b690 00:23:11.267 [2024-12-11 10:01:20.818735] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.267 [2024-12-11 10:01:20.818739] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.267 [2024-12-11 10:01:20.818742] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x232b690) 00:23:11.267 [2024-12-11 
10:01:20.818747] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.267 [2024-12-11 10:01:20.818761] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238d580, cid 3, qid 0 00:23:11.267 [2024-12-11 10:01:20.818852] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.267 [2024-12-11 10:01:20.818858] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.267 [2024-12-11 10:01:20.818861] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.267 [2024-12-11 10:01:20.818864] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238d580) on tqpair=0x232b690 00:23:11.267 [2024-12-11 10:01:20.818870] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:23:11.267 [2024-12-11 10:01:20.818874] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:23:11.267 [2024-12-11 10:01:20.818882] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.267 [2024-12-11 10:01:20.818886] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.267 [2024-12-11 10:01:20.818889] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x232b690) 00:23:11.267 [2024-12-11 10:01:20.818894] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.267 [2024-12-11 10:01:20.818903] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238d580, cid 3, qid 0 00:23:11.267 [2024-12-11 10:01:20.818963] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.267 [2024-12-11 10:01:20.818969] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.267 [2024-12-11 10:01:20.818972] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.267 [2024-12-11 10:01:20.818975] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238d580) on tqpair=0x232b690 00:23:11.267 [2024-12-11 10:01:20.818984] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.267 [2024-12-11 10:01:20.818987] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.267 [2024-12-11 10:01:20.818990] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x232b690) 00:23:11.267 [2024-12-11 10:01:20.818996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.267 [2024-12-11 10:01:20.819005] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238d580, cid 3, qid 0 00:23:11.267 [2024-12-11 10:01:20.819102] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.267 [2024-12-11 10:01:20.819108] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.267 [2024-12-11 10:01:20.819111] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.267 [2024-12-11 10:01:20.819114] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238d580) on tqpair=0x232b690 00:23:11.267 [2024-12-11 10:01:20.819121] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.267 [2024-12-11 10:01:20.819125] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.267 [2024-12-11 10:01:20.819128] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x232b690) 00:23:11.267 [2024-12-11 10:01:20.819134] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.267 [2024-12-11 10:01:20.819143] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238d580, cid 3, qid 0 00:23:11.267 [2024-12-11 10:01:20.819205] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.267 [2024-12-11 10:01:20.819211] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.267 [2024-12-11 10:01:20.819214] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.267 [2024-12-11 10:01:20.823223] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238d580) on tqpair=0x232b690 00:23:11.267 [2024-12-11 10:01:20.823235] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.267 [2024-12-11 10:01:20.823239] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.267 [2024-12-11 10:01:20.823242] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x232b690) 00:23:11.267 [2024-12-11 10:01:20.823247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.267 [2024-12-11 10:01:20.823259] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x238d580, cid 3, qid 0 00:23:11.267 [2024-12-11 10:01:20.823443] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.267 [2024-12-11 10:01:20.823449] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.267 [2024-12-11 10:01:20.823455] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.267 [2024-12-11 10:01:20.823459] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x238d580) on tqpair=0x232b690 00:23:11.267 [2024-12-11 10:01:20.823465] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 4 milliseconds 00:23:11.267 00:23:11.530 10:01:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:11.530 [2024-12-11 10:01:20.861518] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
00:23:11.530 [2024-12-11 10:01:20.861559] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166548 ] 00:23:11.530 [2024-12-11 10:01:20.901429] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:23:11.530 [2024-12-11 10:01:20.901464] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:11.530 [2024-12-11 10:01:20.901469] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:11.530 [2024-12-11 10:01:20.901479] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:11.530 [2024-12-11 10:01:20.901487] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:11.530 [2024-12-11 10:01:20.905369] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:23:11.530 [2024-12-11 10:01:20.905396] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x132b690 0 00:23:11.530 [2024-12-11 10:01:20.913230] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:11.530 [2024-12-11 10:01:20.913246] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:11.530 [2024-12-11 10:01:20.913250] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:11.530 [2024-12-11 10:01:20.913253] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:11.530 [2024-12-11 10:01:20.913278] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.530 [2024-12-11 10:01:20.913283] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.530 [2024-12-11 10:01:20.913286] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x132b690) 00:23:11.530 [2024-12-11 10:01:20.913297] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:11.530 [2024-12-11 10:01:20.913314] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138d100, cid 0, qid 0 00:23:11.530 [2024-12-11 10:01:20.920227] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.530 [2024-12-11 10:01:20.920236] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.530 [2024-12-11 10:01:20.920239] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.530 [2024-12-11 10:01:20.920243] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138d100) on tqpair=0x132b690 00:23:11.530 [2024-12-11 10:01:20.920251] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:11.530 [2024-12-11 10:01:20.920257] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:23:11.530 [2024-12-11 10:01:20.920262] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:23:11.530 [2024-12-11 10:01:20.920271] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.530 [2024-12-11 10:01:20.920276] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.531 [2024-12-11 10:01:20.920279] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x132b690) 00:23:11.531 [2024-12-11 10:01:20.920286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.531 [2024-12-11 10:01:20.920299] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138d100, cid 0, qid 0 00:23:11.531 [2024-12-11 10:01:20.920432] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.531 [2024-12-11 10:01:20.920437] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.531 [2024-12-11 10:01:20.920441] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.531 [2024-12-11 10:01:20.920444] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138d100) on tqpair=0x132b690 00:23:11.531 [2024-12-11 10:01:20.920448] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:23:11.531 [2024-12-11 10:01:20.920454] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:23:11.531 [2024-12-11 10:01:20.920461] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.531 [2024-12-11 10:01:20.920464] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.531 [2024-12-11 10:01:20.920467] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x132b690) 00:23:11.531 [2024-12-11 10:01:20.920473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.531 [2024-12-11 10:01:20.920483] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138d100, cid 0, qid 0 00:23:11.531 [2024-12-11 10:01:20.920547] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.531 [2024-12-11 10:01:20.920553] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.531 [2024-12-11 10:01:20.920556] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.531 [2024-12-11 10:01:20.920559] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138d100) on tqpair=0x132b690 00:23:11.531 [2024-12-11 10:01:20.920564] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:23:11.531 [2024-12-11 10:01:20.920570] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:23:11.531 [2024-12-11 10:01:20.920576] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.531 [2024-12-11 10:01:20.920579] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.531 [2024-12-11 10:01:20.920583] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x132b690) 00:23:11.531 [2024-12-11 10:01:20.920588] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.531 [2024-12-11 10:01:20.920597] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138d100, cid 0, qid 0 00:23:11.531 [2024-12-11 10:01:20.920664] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.531 [2024-12-11 10:01:20.920670] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.531 [2024-12-11 
10:01:20.920673] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.531 [2024-12-11 10:01:20.920676] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138d100) on tqpair=0x132b690 00:23:11.531 [2024-12-11 10:01:20.920680] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:11.531 [2024-12-11 10:01:20.920688] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.531 [2024-12-11 10:01:20.920692] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.531 [2024-12-11 10:01:20.920695] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x132b690) 00:23:11.531 [2024-12-11 10:01:20.920701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.531 [2024-12-11 10:01:20.920712] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138d100, cid 0, qid 0 00:23:11.531 [2024-12-11 10:01:20.920781] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.531 [2024-12-11 10:01:20.920787] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.531 [2024-12-11 10:01:20.920790] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.531 [2024-12-11 10:01:20.920793] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138d100) on tqpair=0x132b690 00:23:11.531 [2024-12-11 10:01:20.920797] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:23:11.531 [2024-12-11 10:01:20.920801] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:23:11.531 [2024-12-11 10:01:20.920808] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:11.531 [2024-12-11 10:01:20.920914] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:23:11.531 [2024-12-11 10:01:20.920919] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:11.531 [2024-12-11 10:01:20.920925] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.531 [2024-12-11 10:01:20.920928] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.531 [2024-12-11 10:01:20.920932] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x132b690) 00:23:11.531 [2024-12-11 10:01:20.920937] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.531 [2024-12-11 10:01:20.920947] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138d100, cid 0, qid 0 00:23:11.531 [2024-12-11 10:01:20.921012] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.531 [2024-12-11 10:01:20.921017] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.531 [2024-12-11 10:01:20.921020] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.531 [2024-12-11 10:01:20.921024] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138d100) on tqpair=0x132b690 00:23:11.531 
[2024-12-11 10:01:20.921028] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:11.531 [2024-12-11 10:01:20.921036] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.531 [2024-12-11 10:01:20.921039] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.531 [2024-12-11 10:01:20.921042] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x132b690) 00:23:11.531 [2024-12-11 10:01:20.921048] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.531 [2024-12-11 10:01:20.921057] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138d100, cid 0, qid 0 00:23:11.531 [2024-12-11 10:01:20.921130] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.531 [2024-12-11 10:01:20.921136] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.531 [2024-12-11 10:01:20.921139] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.531 [2024-12-11 10:01:20.921142] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138d100) on tqpair=0x132b690 00:23:11.531 [2024-12-11 10:01:20.921146] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:11.531 [2024-12-11 10:01:20.921150] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:23:11.531 [2024-12-11 10:01:20.921156] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:23:11.531 [2024-12-11 10:01:20.921164] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:23:11.531 [2024-12-11 10:01:20.921172] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.531 [2024-12-11 10:01:20.921175] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x132b690) 00:23:11.531 [2024-12-11 10:01:20.921181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.531 [2024-12-11 10:01:20.921190] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138d100, cid 0, qid 0 00:23:11.531 [2024-12-11 10:01:20.921293] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:11.531 [2024-12-11 10:01:20.921299] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:11.531 [2024-12-11 10:01:20.921302] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:11.531 [2024-12-11 10:01:20.921306] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x132b690): datao=0, datal=4096, cccid=0 00:23:11.531 [2024-12-11 10:01:20.921310] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x138d100) on tqpair(0x132b690): expected_datao=0, payload_size=4096 00:23:11.531 [2024-12-11 10:01:20.921314] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.531 [2024-12-11 10:01:20.921320] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:11.531 [2024-12-11 10:01:20.921323] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:11.531 [2024-12-11 10:01:20.921336] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.531 [2024-12-11 10:01:20.921342] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.531 [2024-12-11 10:01:20.921345] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.531 [2024-12-11 10:01:20.921348] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138d100) on tqpair=0x132b690 00:23:11.531 [2024-12-11 10:01:20.921354] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:23:11.531 [2024-12-11 10:01:20.921359] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:23:11.531 [2024-12-11 10:01:20.921362] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:23:11.531 [2024-12-11 10:01:20.921366] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:23:11.531 [2024-12-11 10:01:20.921370] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:23:11.531 [2024-12-11 10:01:20.921374] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:23:11.531 [2024-12-11 10:01:20.921384] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:23:11.531 [2024-12-11 10:01:20.921392] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.531 [2024-12-11 10:01:20.921395] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.531 [2024-12-11 10:01:20.921398] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x132b690) 00:23:11.531 [2024-12-11 10:01:20.921404] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:11.531 [2024-12-11 10:01:20.921415] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138d100, cid 0, qid 0 00:23:11.531 [2024-12-11 10:01:20.921478] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.531 [2024-12-11 10:01:20.921484] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.531 [2024-12-11 10:01:20.921487] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.531 [2024-12-11 10:01:20.921490] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138d100) on tqpair=0x132b690 00:23:11.531 [2024-12-11 10:01:20.921495] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.531 [2024-12-11 10:01:20.921501] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.532 [2024-12-11 10:01:20.921505] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x132b690) 00:23:11.532 [2024-12-11 10:01:20.921510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.532 [2024-12-11 10:01:20.921515] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.532 [2024-12-11 10:01:20.921518] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.532 [2024-12-11 
10:01:20.921521] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x132b690) 00:23:11.532 [2024-12-11 10:01:20.921526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.532 [2024-12-11 10:01:20.921531] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.532 [2024-12-11 10:01:20.921534] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.532 [2024-12-11 10:01:20.921537] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x132b690) 00:23:11.532 [2024-12-11 10:01:20.921542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.532 [2024-12-11 10:01:20.921547] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.532 [2024-12-11 10:01:20.921551] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.532 [2024-12-11 10:01:20.921553] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x132b690) 00:23:11.532 [2024-12-11 10:01:20.921558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.532 [2024-12-11 10:01:20.921562] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:11.532 [2024-12-11 10:01:20.921573] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:11.532 [2024-12-11 10:01:20.921579] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.532 [2024-12-11 10:01:20.921582] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x132b690) 00:23:11.532 [2024-12-11 10:01:20.921587] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.532 [2024-12-11 10:01:20.921598] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138d100, cid 0, qid 0 00:23:11.532 [2024-12-11 10:01:20.921603] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138d280, cid 1, qid 0 00:23:11.532 [2024-12-11 10:01:20.921607] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138d400, cid 2, qid 0 00:23:11.532 [2024-12-11 10:01:20.921611] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138d580, cid 3, qid 0 00:23:11.532 [2024-12-11 10:01:20.921615] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138d700, cid 4, qid 0 00:23:11.532 [2024-12-11 10:01:20.921708] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.532 [2024-12-11 10:01:20.921714] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.532 [2024-12-11 10:01:20.921717] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.532 [2024-12-11 10:01:20.921720] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138d700) on tqpair=0x132b690 00:23:11.532 [2024-12-11 10:01:20.921724] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:23:11.532 [2024-12-11 10:01:20.921729] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:11.532 [2024-12-11 10:01:20.921738] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:23:11.532 [2024-12-11 10:01:20.921746] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:11.532 [2024-12-11 10:01:20.921751] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.532 [2024-12-11 10:01:20.921755] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.532 [2024-12-11 10:01:20.921758] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x132b690) 00:23:11.532 [2024-12-11 10:01:20.921763] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:11.532 [2024-12-11 10:01:20.921773] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138d700, cid 4, qid 0 00:23:11.532 [2024-12-11 10:01:20.921835] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.532 [2024-12-11 10:01:20.921840] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.532 [2024-12-11 10:01:20.921843] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.532 [2024-12-11 10:01:20.921847] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138d700) on tqpair=0x132b690 00:23:11.532 [2024-12-11 10:01:20.921894] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:23:11.532 [2024-12-11 10:01:20.921903] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:11.532 [2024-12-11 10:01:20.921910] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.532 [2024-12-11 10:01:20.921913] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x132b690) 00:23:11.532 [2024-12-11 10:01:20.921919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.532 [2024-12-11 10:01:20.921928] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138d700, cid 4, qid 0 00:23:11.532 [2024-12-11 10:01:20.922006] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:11.532 [2024-12-11 10:01:20.922012] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:11.532 [2024-12-11 10:01:20.922015] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:11.532 [2024-12-11 10:01:20.922018] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x132b690): datao=0, datal=4096, cccid=4 00:23:11.532 [2024-12-11 10:01:20.922022] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x138d700) on tqpair(0x132b690): expected_datao=0, payload_size=4096 00:23:11.532 [2024-12-11 10:01:20.922026] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.532 [2024-12-11 10:01:20.922031] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:11.532 [2024-12-11 10:01:20.922035] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:11.532 [2024-12-11 
10:01:20.922043] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.532 [2024-12-11 10:01:20.922048] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.532 [2024-12-11 10:01:20.922051] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.532 [2024-12-11 10:01:20.922054] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138d700) on tqpair=0x132b690 00:23:11.532 [2024-12-11 10:01:20.922064] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:23:11.532 [2024-12-11 10:01:20.922076] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:23:11.532 [2024-12-11 10:01:20.922086] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:23:11.532 [2024-12-11 10:01:20.922091] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.532 [2024-12-11 10:01:20.922095] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x132b690) 00:23:11.532 [2024-12-11 10:01:20.922100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.532 [2024-12-11 10:01:20.922111] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138d700, cid 4, qid 0 00:23:11.532 [2024-12-11 10:01:20.922194] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:11.532 [2024-12-11 10:01:20.922199] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:11.532 [2024-12-11 10:01:20.922203] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:11.532 [2024-12-11 10:01:20.922205] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x132b690): datao=0, datal=4096, cccid=4 00:23:11.532 [2024-12-11 10:01:20.922209] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x138d700) on tqpair(0x132b690): expected_datao=0, payload_size=4096 00:23:11.532 [2024-12-11 10:01:20.922213] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.532 [2024-12-11 10:01:20.922230] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:11.532 [2024-12-11 10:01:20.922234] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:11.532 [2024-12-11 10:01:20.922265] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.532 [2024-12-11 10:01:20.922271] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.532 [2024-12-11 10:01:20.922274] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.532 [2024-12-11 10:01:20.922277] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138d700) on tqpair=0x132b690 00:23:11.532 [2024-12-11 10:01:20.922287] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:11.532 [2024-12-11 10:01:20.922296] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:11.532 [2024-12-11 10:01:20.922302] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.532 [2024-12-11 10:01:20.922306] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x132b690) 00:23:11.532 [2024-12-11 10:01:20.922311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.532 [2024-12-11 10:01:20.922322] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138d700, cid 4, qid 0 00:23:11.532 [2024-12-11 10:01:20.922398] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:11.532 [2024-12-11 10:01:20.922404] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:11.532 [2024-12-11 10:01:20.922407] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:11.532 [2024-12-11 10:01:20.922410] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x132b690): datao=0, datal=4096, cccid=4 00:23:11.532 [2024-12-11 10:01:20.922414] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x138d700) on tqpair(0x132b690): expected_datao=0, payload_size=4096 00:23:11.532 [2024-12-11 10:01:20.922417] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.532 [2024-12-11 10:01:20.922423] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:11.532 [2024-12-11 10:01:20.922426] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:11.532 [2024-12-11 10:01:20.922438] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.532 [2024-12-11 10:01:20.922444] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.532 [2024-12-11 10:01:20.922447] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.532 [2024-12-11 10:01:20.922450] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138d700) on tqpair=0x132b690 00:23:11.532 [2024-12-11 10:01:20.922456] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:11.532 [2024-12-11 10:01:20.922464] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:23:11.532 [2024-12-11 10:01:20.922471] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:23:11.532 [2024-12-11 10:01:20.922477] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:23:11.533 [2024-12-11 10:01:20.922482] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:11.533 [2024-12-11 10:01:20.922487] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:23:11.533 [2024-12-11 10:01:20.922491] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:23:11.533 [2024-12-11 10:01:20.922495] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:23:11.533 [2024-12-11 10:01:20.922500] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:23:11.533 [2024-12-11 10:01:20.922512] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.533 
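The trace above walks the tail of the admin-queue bring-up: Set Features Number of Queues (FID 07h, cdw10:00000007), then Identify with CNS 02h (active namespace list, cdw10:00000002), CNS 00h (identify namespace, cdw10:00000000), and CNS 03h (namespace ID descriptors, cdw10:00000003), with the Set Features Host ID step skipped for NVMe-oF before the controller reaches ready. The same identify reads can be reproduced from any Linux host; a minimal sketch, assuming nvme-cli is installed, the target from this log is reachable, and the new controller enumerates as /dev/nvme0 (device names will differ per machine):

    sudo nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    sudo nvme id-ctrl /dev/nvme0          # Identify, CNS 01h (controller)
    sudo nvme list-ns /dev/nvme0          # Identify, CNS 02h (active namespace list)
    sudo nvme id-ns /dev/nvme0 -n 1       # Identify, CNS 00h (namespace)
    sudo nvme ns-descs /dev/nvme0 -n 1    # Identify, CNS 03h (namespace ID descriptors)
    sudo nvme disconnect -n nqn.2016-06.io.spdk:cnode1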
[2024-12-11 10:01:20.922516] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x132b690) 00:23:11.533 [2024-12-11 10:01:20.922521] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.533 [2024-12-11 10:01:20.922527] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.533 [2024-12-11 10:01:20.922530] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.533 [2024-12-11 10:01:20.922533] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x132b690) 00:23:11.533 [2024-12-11 10:01:20.922538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.533 [2024-12-11 10:01:20.922550] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138d700, cid 4, qid 0 00:23:11.533 [2024-12-11 10:01:20.922554] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138d880, cid 5, qid 0 00:23:11.533 [2024-12-11 10:01:20.922637] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.533 [2024-12-11 10:01:20.922642] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.533 [2024-12-11 10:01:20.922645] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.533 [2024-12-11 10:01:20.922648] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138d700) on tqpair=0x132b690 00:23:11.533 [2024-12-11 10:01:20.922654] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.533 [2024-12-11 10:01:20.922659] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.533 [2024-12-11 10:01:20.922662] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.533 [2024-12-11 10:01:20.922665] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138d880) on tqpair=0x132b690 00:23:11.533 [2024-12-11 10:01:20.922672] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.533 [2024-12-11 10:01:20.922676] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x132b690) 00:23:11.533 [2024-12-11 10:01:20.922681] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.533 [2024-12-11 10:01:20.922690] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138d880, cid 5, qid 0 00:23:11.533 [2024-12-11 10:01:20.922753] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.533 [2024-12-11 10:01:20.922759] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.533 [2024-12-11 10:01:20.922762] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.533 [2024-12-11 10:01:20.922765] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138d880) on tqpair=0x132b690 00:23:11.533 [2024-12-11 10:01:20.922773] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.533 [2024-12-11 10:01:20.922776] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x132b690) 00:23:11.533 [2024-12-11 10:01:20.922783] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.533 [2024-12-11 10:01:20.922792] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138d880, cid 5, qid 0 00:23:11.533 [2024-12-11 10:01:20.922856] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.533 [2024-12-11 10:01:20.922861] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.533 [2024-12-11 10:01:20.922865] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.533 [2024-12-11 10:01:20.922868] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138d880) on tqpair=0x132b690 00:23:11.533 [2024-12-11 10:01:20.922875] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.533 [2024-12-11 10:01:20.922878] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x132b690) 00:23:11.533 [2024-12-11 10:01:20.922884] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.533 [2024-12-11 10:01:20.922893] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138d880, cid 5, qid 0 00:23:11.533 [2024-12-11 10:01:20.922959] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.533 [2024-12-11 10:01:20.922965] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.533 [2024-12-11 10:01:20.922968] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.533 [2024-12-11 10:01:20.922971] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138d880) on tqpair=0x132b690 00:23:11.533 [2024-12-11 10:01:20.922983] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.533 [2024-12-11 10:01:20.922988] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x132b690) 00:23:11.533 [2024-12-11 10:01:20.922994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.533 [2024-12-11 10:01:20.923000] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.533 [2024-12-11 10:01:20.923003] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x132b690) 00:23:11.533 [2024-12-11 10:01:20.923008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.533 [2024-12-11 10:01:20.923014] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.533 [2024-12-11 10:01:20.923017] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x132b690) 00:23:11.533 [2024-12-11 10:01:20.923023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.533 [2024-12-11 10:01:20.923029] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.533 [2024-12-11 10:01:20.923032] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x132b690) 00:23:11.533 [2024-12-11 10:01:20.923037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.533 [2024-12-11 10:01:20.923048] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138d880, cid 5, qid 0 00:23:11.533 
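The four GET LOG PAGE captures above all follow the cdw10 encoding from the NVMe base spec: the log page identifier sits in bits 7:0 and the zero-based dword count (NUMDL) in bits 27:16, i.e. cdw10 = ((bytes/4 - 1) << 16) | LID. A quick shell check against the values printed in the trace:

    # pack Get Log Page cdw10 and compare with the captures above
    cdw10() { printf '%08x\n' $(( (($2 / 4 - 1) << 16) | $1 )); }
    cdw10 0x01 8192    # Error Information, 128 entries x 64 B -> 07ff0001
    cdw10 0x02 512     # SMART / Health Information            -> 007f0002
    cdw10 0x03 512     # Firmware Slot Information             -> 007f0003
    cdw10 0x05 4096    # Commands Supported and Effects        -> 03ff0005

The 8192-byte error log is also why the first c2h_data PDU below carries datal=8192 on cccid=5, while the responses for cccid=4, 6, and 7 carry 512, 512, and 4096 bytes respectively.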
[2024-12-11 10:01:20.923053] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138d700, cid 4, qid 0 00:23:11.533 [2024-12-11 10:01:20.923057] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138da00, cid 6, qid 0 00:23:11.533 [2024-12-11 10:01:20.923061] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138db80, cid 7, qid 0 00:23:11.533 [2024-12-11 10:01:20.923226] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:11.533 [2024-12-11 10:01:20.923233] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:11.533 [2024-12-11 10:01:20.923236] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:11.533 [2024-12-11 10:01:20.923241] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x132b690): datao=0, datal=8192, cccid=5 00:23:11.533 [2024-12-11 10:01:20.923245] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x138d880) on tqpair(0x132b690): expected_datao=0, payload_size=8192 00:23:11.533 [2024-12-11 10:01:20.923249] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.533 [2024-12-11 10:01:20.923263] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:11.533 [2024-12-11 10:01:20.923267] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:11.533 [2024-12-11 10:01:20.923272] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:11.533 [2024-12-11 10:01:20.923277] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:11.533 [2024-12-11 10:01:20.923280] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:11.533 [2024-12-11 10:01:20.923283] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x132b690): datao=0, datal=512, cccid=4 00:23:11.533 [2024-12-11 10:01:20.923287] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x138d700) on tqpair(0x132b690): expected_datao=0, payload_size=512 00:23:11.533 [2024-12-11 10:01:20.923291] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.533 [2024-12-11 10:01:20.923296] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:11.533 [2024-12-11 10:01:20.923299] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:11.533 [2024-12-11 10:01:20.923304] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:11.533 [2024-12-11 10:01:20.923309] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:11.533 [2024-12-11 10:01:20.923312] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:11.533 [2024-12-11 10:01:20.923314] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x132b690): datao=0, datal=512, cccid=6 00:23:11.533 [2024-12-11 10:01:20.923318] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x138da00) on tqpair(0x132b690): expected_datao=0, payload_size=512 00:23:11.533 [2024-12-11 10:01:20.923322] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.533 [2024-12-11 10:01:20.923327] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:11.533 [2024-12-11 10:01:20.923330] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:11.533 [2024-12-11 10:01:20.923335] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:11.533 [2024-12-11 10:01:20.923340] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:11.533 [2024-12-11 10:01:20.923342] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:11.533 [2024-12-11 10:01:20.923346] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x132b690): datao=0, datal=4096, cccid=7 00:23:11.533 [2024-12-11 10:01:20.923349] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x138db80) on tqpair(0x132b690): expected_datao=0, payload_size=4096 00:23:11.533 [2024-12-11 10:01:20.923353] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.533 [2024-12-11 10:01:20.923359] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:11.533 [2024-12-11 10:01:20.923362] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:11.533 [2024-12-11 10:01:20.923369] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.533 [2024-12-11 10:01:20.923374] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.533 [2024-12-11 10:01:20.923377] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.533 [2024-12-11 10:01:20.923381] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138d880) on tqpair=0x132b690 00:23:11.533 [2024-12-11 10:01:20.923392] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.533 [2024-12-11 10:01:20.923397] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.533 [2024-12-11 10:01:20.923400] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.533 [2024-12-11 10:01:20.923403] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138d700) on tqpair=0x132b690 00:23:11.533 [2024-12-11 10:01:20.923412] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.533 [2024-12-11 10:01:20.923418] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.533 [2024-12-11 10:01:20.923421] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.533 [2024-12-11 10:01:20.923425] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138da00) on tqpair=0x132b690 00:23:11.534 [2024-12-11 10:01:20.923431] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.534 [2024-12-11 10:01:20.923436] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.534 [2024-12-11 10:01:20.923439] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.534 [2024-12-11 10:01:20.923442] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138db80) on tqpair=0x132b690 00:23:11.534 ===================================================== 00:23:11.534 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:11.534 ===================================================== 00:23:11.534 Controller Capabilities/Features 00:23:11.534 ================================ 00:23:11.534 Vendor ID: 8086 00:23:11.534 Subsystem Vendor ID: 8086 00:23:11.534 Serial Number: SPDK00000000000001 00:23:11.534 Model Number: SPDK bdev Controller 00:23:11.534 Firmware Version: 25.01 00:23:11.534 Recommended Arb Burst: 6 00:23:11.534 IEEE OUI Identifier: e4 d2 5c 00:23:11.534 Multi-path I/O 00:23:11.534 May have multiple subsystem ports: Yes 00:23:11.534 May have multiple controllers: Yes 00:23:11.534 Associated with SR-IOV VF: No 00:23:11.534 Max Data Transfer Size: 131072 00:23:11.534 Max Number of Namespaces: 32 00:23:11.534 Max Number of I/O Queues: 127 00:23:11.534 NVMe Specification Version (VS): 1.3 00:23:11.534 NVMe Specification Version (Identify): 1.3 
00:23:11.534 Maximum Queue Entries: 128 00:23:11.534 Contiguous Queues Required: Yes 00:23:11.534 Arbitration Mechanisms Supported 00:23:11.534 Weighted Round Robin: Not Supported 00:23:11.534 Vendor Specific: Not Supported 00:23:11.534 Reset Timeout: 15000 ms 00:23:11.534 Doorbell Stride: 4 bytes 00:23:11.534 NVM Subsystem Reset: Not Supported 00:23:11.534 Command Sets Supported 00:23:11.534 NVM Command Set: Supported 00:23:11.534 Boot Partition: Not Supported 00:23:11.534 Memory Page Size Minimum: 4096 bytes 00:23:11.534 Memory Page Size Maximum: 4096 bytes 00:23:11.534 Persistent Memory Region: Not Supported 00:23:11.534 Optional Asynchronous Events Supported 00:23:11.534 Namespace Attribute Notices: Supported 00:23:11.534 Firmware Activation Notices: Not Supported 00:23:11.534 ANA Change Notices: Not Supported 00:23:11.534 PLE Aggregate Log Change Notices: Not Supported 00:23:11.534 LBA Status Info Alert Notices: Not Supported 00:23:11.534 EGE Aggregate Log Change Notices: Not Supported 00:23:11.534 Normal NVM Subsystem Shutdown event: Not Supported 00:23:11.534 Zone Descriptor Change Notices: Not Supported 00:23:11.534 Discovery Log Change Notices: Not Supported 00:23:11.534 Controller Attributes 00:23:11.534 128-bit Host Identifier: Supported 00:23:11.534 Non-Operational Permissive Mode: Not Supported 00:23:11.534 NVM Sets: Not Supported 00:23:11.534 Read Recovery Levels: Not Supported 00:23:11.534 Endurance Groups: Not Supported 00:23:11.534 Predictable Latency Mode: Not Supported 00:23:11.534 Traffic Based Keep Alive: Not Supported 00:23:11.534 Namespace Granularity: Not Supported 00:23:11.534 SQ Associations: Not Supported 00:23:11.534 UUID List: Not Supported 00:23:11.534 Multi-Domain Subsystem: Not Supported 00:23:11.534 Fixed Capacity Management: Not Supported 00:23:11.534 Variable Capacity Management: Not Supported 00:23:11.534 Delete Endurance Group: Not Supported 00:23:11.534 Delete NVM Set: Not Supported 00:23:11.534 Extended LBA Formats Supported: Not Supported 00:23:11.534 Flexible Data Placement Supported: Not Supported 00:23:11.534 00:23:11.534 Controller Memory Buffer Support 00:23:11.534 ================================ 00:23:11.534 Supported: No 00:23:11.534 00:23:11.534 Persistent Memory Region Support 00:23:11.534 ================================ 00:23:11.534 Supported: No 00:23:11.534 00:23:11.534 Admin Command Set Attributes 00:23:11.534 ============================ 00:23:11.534 Security Send/Receive: Not Supported 00:23:11.534 Format NVM: Not Supported 00:23:11.534 Firmware Activate/Download: Not Supported 00:23:11.534 Namespace Management: Not Supported 00:23:11.534 Device Self-Test: Not Supported 00:23:11.534 Directives: Not Supported 00:23:11.534 NVMe-MI: Not Supported 00:23:11.534 Virtualization Management: Not Supported 00:23:11.534 Doorbell Buffer Config: Not Supported 00:23:11.534 Get LBA Status Capability: Not Supported 00:23:11.534 Command & Feature Lockdown Capability: Not Supported 00:23:11.534 Abort Command Limit: 4 00:23:11.534 Async Event Request Limit: 4 00:23:11.534 Number of Firmware Slots: N/A 00:23:11.534 Firmware Slot 1 Read-Only: N/A 00:23:11.534 Firmware Activation Without Reset: N/A 00:23:11.534 Multiple Update Detection Support: N/A 00:23:11.534 Firmware Update Granularity: No Information Provided 00:23:11.534 Per-Namespace SMART Log: No 00:23:11.534 Asymmetric Namespace Access Log Page: Not Supported 00:23:11.534 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:23:11.534 Command Effects Log Page: Supported 00:23:11.534 Get Log Page Extended 
Data: Supported 00:23:11.534 Telemetry Log Pages: Not Supported 00:23:11.534 Persistent Event Log Pages: Not Supported 00:23:11.534 Supported Log Pages Log Page: May Support 00:23:11.534 Commands Supported & Effects Log Page: Not Supported 00:23:11.534 Feature Identifiers & Effects Log Page: May Support 00:23:11.534 NVMe-MI Commands & Effects Log Page: May Support 00:23:11.534 Data Area 4 for Telemetry Log: Not Supported 00:23:11.534 Error Log Page Entries Supported: 128 00:23:11.534 Keep Alive: Supported 00:23:11.534 Keep Alive Granularity: 10000 ms 00:23:11.534 00:23:11.534 NVM Command Set Attributes 00:23:11.534 ========================== 00:23:11.534 Submission Queue Entry Size 00:23:11.534 Max: 64 00:23:11.534 Min: 64 00:23:11.534 Completion Queue Entry Size 00:23:11.534 Max: 16 00:23:11.534 Min: 16 00:23:11.534 Number of Namespaces: 32 00:23:11.534 Compare Command: Supported 00:23:11.534 Write Uncorrectable Command: Not Supported 00:23:11.534 Dataset Management Command: Supported 00:23:11.534 Write Zeroes Command: Supported 00:23:11.534 Set Features Save Field: Not Supported 00:23:11.534 Reservations: Supported 00:23:11.534 Timestamp: Not Supported 00:23:11.534 Copy: Supported 00:23:11.534 Volatile Write Cache: Present 00:23:11.534 Atomic Write Unit (Normal): 1 00:23:11.534 Atomic Write Unit (PFail): 1 00:23:11.534 Atomic Compare & Write Unit: 1 00:23:11.534 Fused Compare & Write: Supported 00:23:11.534 Scatter-Gather List 00:23:11.534 SGL Command Set: Supported 00:23:11.534 SGL Keyed: Supported 00:23:11.534 SGL Bit Bucket Descriptor: Not Supported 00:23:11.534 SGL Metadata Pointer: Not Supported 00:23:11.534 Oversized SGL: Not Supported 00:23:11.534 SGL Metadata Address: Not Supported 00:23:11.534 SGL Offset: Supported 00:23:11.534 Transport SGL Data Block: Not Supported 00:23:11.534 Replay Protected Memory Block: Not Supported 00:23:11.534 00:23:11.534 Firmware Slot Information 00:23:11.534 ========================= 00:23:11.534 Active slot: 1 00:23:11.534 Slot 1 Firmware Revision: 25.01 00:23:11.534 00:23:11.534 00:23:11.534 Commands Supported and Effects 00:23:11.534 ============================== 00:23:11.534 Admin Commands 00:23:11.534 -------------- 00:23:11.534 Get Log Page (02h): Supported 00:23:11.534 Identify (06h): Supported 00:23:11.534 Abort (08h): Supported 00:23:11.534 Set Features (09h): Supported 00:23:11.534 Get Features (0Ah): Supported 00:23:11.534 Asynchronous Event Request (0Ch): Supported 00:23:11.534 Keep Alive (18h): Supported 00:23:11.534 I/O Commands 00:23:11.534 ------------ 00:23:11.534 Flush (00h): Supported LBA-Change 00:23:11.534 Write (01h): Supported LBA-Change 00:23:11.534 Read (02h): Supported 00:23:11.534 Compare (05h): Supported 00:23:11.534 Write Zeroes (08h): Supported LBA-Change 00:23:11.534 Dataset Management (09h): Supported LBA-Change 00:23:11.534 Copy (19h): Supported LBA-Change 00:23:11.534 00:23:11.534 Error Log 00:23:11.534 ========= 00:23:11.534 00:23:11.534 Arbitration 00:23:11.534 =========== 00:23:11.534 Arbitration Burst: 1 00:23:11.534 00:23:11.534 Power Management 00:23:11.534 ================ 00:23:11.534 Number of Power States: 1 00:23:11.534 Current Power State: Power State #0 00:23:11.534 Power State #0: 00:23:11.534 Max Power: 0.00 W 00:23:11.534 Non-Operational State: Operational 00:23:11.534 Entry Latency: Not Reported 00:23:11.534 Exit Latency: Not Reported 00:23:11.534 Relative Read Throughput: 0 00:23:11.534 Relative Read Latency: 0 00:23:11.534 Relative Write Throughput: 0 00:23:11.534 Relative Write Latency: 0 
00:23:11.534 Idle Power: Not Reported 00:23:11.534 Active Power: Not Reported 00:23:11.534 Non-Operational Permissive Mode: Not Supported 00:23:11.534 00:23:11.534 Health Information 00:23:11.534 ================== 00:23:11.534 Critical Warnings: 00:23:11.534 Available Spare Space: OK 00:23:11.534 Temperature: OK 00:23:11.534 Device Reliability: OK 00:23:11.534 Read Only: No 00:23:11.534 Volatile Memory Backup: OK 00:23:11.534 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:11.534 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:23:11.534 Available Spare: 0% 00:23:11.534 Available Spare Threshold: 0% 00:23:11.535 Life Percentage Used:[2024-12-11 10:01:20.923519] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.535 [2024-12-11 10:01:20.923523] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x132b690) 00:23:11.535 [2024-12-11 10:01:20.923529] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.535 [2024-12-11 10:01:20.923540] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138db80, cid 7, qid 0 00:23:11.535 [2024-12-11 10:01:20.923611] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.535 [2024-12-11 10:01:20.923616] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.535 [2024-12-11 10:01:20.923619] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.535 [2024-12-11 10:01:20.923623] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138db80) on tqpair=0x132b690 00:23:11.535 [2024-12-11 10:01:20.923649] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:23:11.535 [2024-12-11 10:01:20.923659] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138d100) on tqpair=0x132b690 00:23:11.535 [2024-12-11 10:01:20.923664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.535 [2024-12-11 10:01:20.923668] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138d280) on tqpair=0x132b690 00:23:11.535 [2024-12-11 10:01:20.923673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.535 [2024-12-11 10:01:20.923677] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138d400) on tqpair=0x132b690 00:23:11.535 [2024-12-11 10:01:20.923681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.535 [2024-12-11 10:01:20.923685] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138d580) on tqpair=0x132b690 00:23:11.535 [2024-12-11 10:01:20.923689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.535 [2024-12-11 10:01:20.923695] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.535 [2024-12-11 10:01:20.923699] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.535 [2024-12-11 10:01:20.923702] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x132b690) 00:23:11.535 [2024-12-11 10:01:20.923707] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:11.535 [2024-12-11 10:01:20.923719] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138d580, cid 3, qid 0 00:23:11.535 [2024-12-11 10:01:20.923809] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.535 [2024-12-11 10:01:20.923815] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.535 [2024-12-11 10:01:20.923818] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.535 [2024-12-11 10:01:20.923821] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138d580) on tqpair=0x132b690 00:23:11.535 [2024-12-11 10:01:20.923827] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.535 [2024-12-11 10:01:20.923830] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.535 [2024-12-11 10:01:20.923835] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x132b690) 00:23:11.535 [2024-12-11 10:01:20.923840] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.535 [2024-12-11 10:01:20.923852] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138d580, cid 3, qid 0 00:23:11.535 [2024-12-11 10:01:20.923958] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.535 [2024-12-11 10:01:20.923964] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.535 [2024-12-11 10:01:20.923967] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.535 [2024-12-11 10:01:20.923970] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138d580) on tqpair=0x132b690 00:23:11.535 [2024-12-11 10:01:20.923974] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:23:11.535 [2024-12-11 10:01:20.923978] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:23:11.535 [2024-12-11 10:01:20.923986] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.535 [2024-12-11 10:01:20.923990] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.535 [2024-12-11 10:01:20.923993] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x132b690) 00:23:11.535 [2024-12-11 10:01:20.923998] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.535 [2024-12-11 10:01:20.924008] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138d580, cid 3, qid 0 00:23:11.535 [2024-12-11 10:01:20.924108] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.535 [2024-12-11 10:01:20.924114] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.535 [2024-12-11 10:01:20.924117] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.535 [2024-12-11 10:01:20.924120] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138d580) on tqpair=0x132b690 00:23:11.535 [2024-12-11 10:01:20.924128] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.535 [2024-12-11 10:01:20.924132] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.535 [2024-12-11 10:01:20.924135] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x132b690) 00:23:11.535 [2024-12-11 10:01:20.924140] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.535 [2024-12-11 10:01:20.924149] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138d580, cid 3, qid 0 00:23:11.535 [2024-12-11 10:01:20.924215] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.535 [2024-12-11 10:01:20.928229] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.535 [2024-12-11 10:01:20.928233] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.535 [2024-12-11 10:01:20.928236] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138d580) on tqpair=0x132b690 00:23:11.535 [2024-12-11 10:01:20.928246] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.535 [2024-12-11 10:01:20.928249] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.535 [2024-12-11 10:01:20.928253] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x132b690) 00:23:11.535 [2024-12-11 10:01:20.928258] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.535 [2024-12-11 10:01:20.928269] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138d580, cid 3, qid 0 00:23:11.535 [2024-12-11 10:01:20.928403] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.535 [2024-12-11 10:01:20.928409] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.535 [2024-12-11 10:01:20.928412] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.535 [2024-12-11 10:01:20.928416] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138d580) on tqpair=0x132b690 00:23:11.535 [2024-12-11 10:01:20.928424] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:23:11.535 0% 00:23:11.535 Data Units Read: 0 00:23:11.536 Data Units Written: 0 00:23:11.536 Host Read Commands: 0 00:23:11.536 Host Write Commands: 0 00:23:11.536 Controller Busy Time: 0 minutes 00:23:11.536 Power Cycles: 0 00:23:11.536 Power On Hours: 0 hours 00:23:11.536 Unsafe Shutdowns: 0 00:23:11.536 Unrecoverable Media Errors: 0 00:23:11.536 Lifetime Error Log Entries: 0 00:23:11.536 Warning Temperature Time: 0 minutes 00:23:11.536 Critical Temperature Time: 0 minutes 00:23:11.536 00:23:11.536 Number of Queues 00:23:11.536 ================ 00:23:11.536 Number of I/O Submission Queues: 127 00:23:11.536 Number of I/O Completion Queues: 127 00:23:11.536 00:23:11.536 Active Namespaces 00:23:11.536 ================= 00:23:11.536 Namespace ID:1 00:23:11.536 Error Recovery Timeout: Unlimited 00:23:11.536 Command Set Identifier: NVM (00h) 00:23:11.536 Deallocate: Supported 00:23:11.536 Deallocated/Unwritten Error: Not Supported 00:23:11.536 Deallocated Read Value: Unknown 00:23:11.536 Deallocate in Write Zeroes: Not Supported 00:23:11.536 Deallocated Guard Field: 0xFFFF 00:23:11.536 Flush: Supported 00:23:11.536 Reservation: Supported 00:23:11.536 Namespace Sharing Capabilities: Multiple Controllers 00:23:11.536 Size (in LBAs): 131072 (0GiB) 00:23:11.536 Capacity (in LBAs): 131072 (0GiB) 00:23:11.536 Utilization (in LBAs): 131072 (0GiB) 00:23:11.536 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:11.536 EUI64: ABCDEF0123456789 00:23:11.536 UUID: efdc8e19-2978-4aea-ba80-20b295bae548 00:23:11.536 Thin Provisioning: Not 
Supported 00:23:11.536 Per-NS Atomic Units: Yes 00:23:11.536 Atomic Boundary Size (Normal): 0 00:23:11.536 Atomic Boundary Size (PFail): 0 00:23:11.536 Atomic Boundary Offset: 0 00:23:11.536 Maximum Single Source Range Length: 65535 00:23:11.536 Maximum Copy Length: 65535 00:23:11.536 Maximum Source Range Count: 1 00:23:11.536 NGUID/EUI64 Never Reused: No 00:23:11.536 Namespace Write Protected: No 00:23:11.536 Number of LBA Formats: 1 00:23:11.536 Current LBA Format: LBA Format #00 00:23:11.536 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:11.536 00:23:11.536 10:01:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:11.536 10:01:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:11.536 10:01:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.536 10:01:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:11.536 10:01:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.536 10:01:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:11.536 10:01:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:11.536 10:01:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:11.536 10:01:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:23:11.536 10:01:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:11.536 10:01:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:23:11.536 10:01:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:11.536 10:01:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:11.536 rmmod nvme_tcp 00:23:11.536 rmmod nvme_fabrics 00:23:11.536 rmmod nvme_keyring 00:23:11.536 10:01:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:11.536 10:01:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:23:11.536 10:01:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:23:11.536 10:01:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 166393 ']' 00:23:11.536 10:01:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 166393 00:23:11.536 10:01:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 166393 ']' 00:23:11.536 10:01:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 166393 00:23:11.536 10:01:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:23:11.536 10:01:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:11.536 10:01:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 166393 00:23:11.536 10:01:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:11.536 10:01:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:11.536 10:01:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 166393' 00:23:11.536 killing process with pid 166393 00:23:11.536 10:01:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 166393 00:23:11.536 
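The controller itself was shut down just above through the fabrics property interface (PROPERTY SET of CC.SHN, then PROPERTY GET polling of CSTS.SHST); with RTD3E = 0 the driver fell back to its 10000 ms default and the shutdown completed in 4 ms. The script then tears the target down: nvmf_delete_subsystem over the RPC socket, followed by unwinding the kernel initiator modules (the rmmod lines above come from modprobe -r removing nvme_tcp together with its nvme_fabrics and nvme_keyring dependencies). A hedged replay of the same teardown, assuming an SPDK checkout in ./spdk and a target started with the default RPC socket:

    sudo ./spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    sudo modprobe -v -r nvme-tcp       # prints the rmmod lines seen above
    sudo modprobe -v -r nvme-fabrics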
10:01:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 166393 00:23:11.795 10:01:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:11.795 10:01:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:11.795 10:01:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:11.795 10:01:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:23:11.795 10:01:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:23:11.795 10:01:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:11.795 10:01:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:23:11.795 10:01:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:11.795 10:01:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:11.795 10:01:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:11.795 10:01:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:11.795 10:01:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:14.331 00:23:14.331 real 0m10.701s 00:23:14.331 user 0m8.064s 00:23:14.331 sys 0m5.439s 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:14.331 ************************************ 00:23:14.331 END TEST nvmf_identify 00:23:14.331 ************************************ 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.331 ************************************ 00:23:14.331 START TEST nvmf_perf 00:23:14.331 ************************************ 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:14.331 * Looking for test storage... 
00:23:14.331 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:14.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.331 --rc genhtml_branch_coverage=1 00:23:14.331 --rc genhtml_function_coverage=1 00:23:14.331 --rc genhtml_legend=1 00:23:14.331 --rc geninfo_all_blocks=1 00:23:14.331 --rc geninfo_unexecuted_blocks=1 00:23:14.331 00:23:14.331 ' 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:14.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.331 --rc genhtml_branch_coverage=1 00:23:14.331 --rc genhtml_function_coverage=1 00:23:14.331 --rc genhtml_legend=1 00:23:14.331 --rc geninfo_all_blocks=1 00:23:14.331 --rc geninfo_unexecuted_blocks=1 00:23:14.331 00:23:14.331 ' 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:14.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.331 --rc genhtml_branch_coverage=1 00:23:14.331 --rc genhtml_function_coverage=1 00:23:14.331 --rc genhtml_legend=1 00:23:14.331 --rc geninfo_all_blocks=1 00:23:14.331 --rc geninfo_unexecuted_blocks=1 00:23:14.331 00:23:14.331 ' 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:14.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.331 --rc genhtml_branch_coverage=1 00:23:14.331 --rc genhtml_function_coverage=1 00:23:14.331 --rc genhtml_legend=1 00:23:14.331 --rc geninfo_all_blocks=1 00:23:14.331 --rc geninfo_unexecuted_blocks=1 00:23:14.331 00:23:14.331 ' 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:14.331 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:14.332 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:14.332 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:14.332 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:14.332 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:14.332 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:14.332 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:14.332 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:14.332 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:14.332 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:14.332 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:14.332 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:14.332 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:14.332 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:14.332 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:14.332 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:14.332 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:14.332 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:14.332 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.332 10:01:23 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:14.332 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.332 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:14.332 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:14.332 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:23:14.332 10:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:20.913 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:20.913 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:23:20.913 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:20.913 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:20.913 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:20.913 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:20.913 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:20.913 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:23:20.913 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:20.913 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:23:20.913 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:23:20.913 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:23:20.913 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:23:20.913 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:23:20.913 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:23:20.913 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:20.913 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:20.913 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:20.913 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:20.913 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:20.913 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:20.913 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:20.913 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:20.913 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:20.913 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:20.913 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:20.913 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:20.914 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:20.914 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:20.914 Found net devices under 0000:af:00.0: cvl_0_0 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:20.914 10:01:30 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:20.914 Found net devices under 0000:af:00.1: cvl_0_1 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:20.914 10:01:30 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:20.914 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:20.914 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:23:20.914 00:23:20.914 --- 10.0.0.2 ping statistics --- 00:23:20.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.914 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:20.914 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:20.914 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:23:20.914 00:23:20.914 --- 10.0.0.1 ping statistics --- 00:23:20.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.914 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=170538 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 170538 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 170538 ']' 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:23:20.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:20.914 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:20.914 [2024-12-11 10:01:30.380976] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:23:20.914 [2024-12-11 10:01:30.381023] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:20.914 [2024-12-11 10:01:30.463198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:21.173 [2024-12-11 10:01:30.504744] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:21.173 [2024-12-11 10:01:30.504780] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:21.173 [2024-12-11 10:01:30.504787] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:21.173 [2024-12-11 10:01:30.504793] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:21.173 [2024-12-11 10:01:30.504798] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:21.173 [2024-12-11 10:01:30.506330] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:21.173 [2024-12-11 10:01:30.506440] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:21.173 [2024-12-11 10:01:30.506546] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:21.173 [2024-12-11 10:01:30.506547] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:23:21.740 10:01:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:21.740 10:01:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:23:21.740 10:01:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:21.740 10:01:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:21.740 10:01:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:21.740 10:01:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:21.740 10:01:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:23:21.740 10:01:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:23:25.123 10:01:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:23:25.123 10:01:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:25.123 10:01:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:23:25.123 10:01:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:25.398 10:01:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
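The trace above is the stock nvmf_tcp_init bring-up from nvmf/common.sh: one port of the E810 pair (cvl_0_0) is moved into a private network namespace so that the SPDK target and the initiator can exercise real hardware on a single machine, an iptables rule opens the NVMe/TCP port, and one ping in each direction proves the link. A condensed sketch of the effective command sequence, taken from the commands visible in the trace; the interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addresses, and port 4420 are the values this particular run used, not fixed constants:

  ip -4 addr flush cvl_0_0                         # start from clean addresses on both ports
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port enters the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # tagged with an SPDK_NVMF comment so teardown can strip it
  ping -c 1 10.0.0.2                               # reachability check, both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

nvmf_tgt is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF, hence the four reactors on cores 0-3), which is why every later RPC and perf run reaches the target at 10.0.0.2.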
00:23:25.398 10:01:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:23:25.399 10:01:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:25.399 10:01:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:25.399 10:01:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:25.399 [2024-12-11 10:01:34.884187] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:25.399 10:01:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:25.663 10:01:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:25.663 10:01:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:25.924 10:01:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:25.924 10:01:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:26.183 10:01:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:26.183 [2024-12-11 10:01:35.703264] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:26.183 10:01:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:26.442 10:01:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:23:26.442 10:01:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:23:26.442 10:01:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:26.442 10:01:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:23:27.820 Initializing NVMe Controllers 00:23:27.820 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:23:27.820 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:23:27.820 Initialization complete. Launching workers. 
00:23:27.820 ========================================================
00:23:27.820 Latency(us)
00:23:27.820 Device Information : IOPS MiB/s Average min max
00:23:27.820 PCIE (0000:5e:00.0) NSID 1 from core 0: 98577.27 385.07 323.96 29.04 4556.32
00:23:27.820 ========================================================
00:23:27.820 Total : 98577.27 385.07 323.96 29.04 4556.32
00:23:27.820
00:23:27.820 10:01:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:23:29.197 Initializing NVMe Controllers
00:23:29.197 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:29.197 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:29.197 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:23:29.197 Initialization complete. Launching workers.
00:23:29.197 ========================================================
00:23:29.197 Latency(us)
00:23:29.197 Device Information : IOPS MiB/s Average min max
00:23:29.197 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 122.00 0.48 8431.94 101.59 44945.22
00:23:29.197 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 46.00 0.18 21842.59 7961.22 47886.04
00:23:29.197 ========================================================
00:23:29.197 Total : 168.00 0.66 12103.90 101.59 47886.04
00:23:29.197
00:23:29.197 10:01:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:23:30.572 Initializing NVMe Controllers
00:23:30.572 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:30.572 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:30.572 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:23:30.572 Initialization complete. Launching workers.
00:23:30.572 ========================================================
00:23:30.572 Latency(us)
00:23:30.572 Device Information : IOPS MiB/s Average min max
00:23:30.572 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11290.99 44.11 2833.12 384.43 6952.96
00:23:30.572 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3843.00 15.01 8360.84 5930.82 15773.33
00:23:30.572 ========================================================
00:23:30.572 Total : 15133.98 59.12 4236.79 384.43 15773.33
00:23:30.572
00:23:30.831 10:01:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:23:30.831 10:01:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:23:30.831 10:01:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:23:33.367 Initializing NVMe Controllers
00:23:33.367 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:33.367 Controller IO queue size 128, less than required.
00:23:33.367 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:33.367 Controller IO queue size 128, less than required. 00:23:33.367 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:33.367 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:33.367 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:33.367 Initialization complete. Launching workers. 00:23:33.367 ======================================================== 00:23:33.367 Latency(us) 00:23:33.367 Device Information : IOPS MiB/s Average min max 00:23:33.367 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1858.88 464.72 70747.59 47746.05 125900.01 00:23:33.367 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 570.85 142.71 231203.31 87748.73 354862.65 00:23:33.367 ======================================================== 00:23:33.367 Total : 2429.72 607.43 108445.62 47746.05 354862.65 00:23:33.367 00:23:33.367 10:01:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:23:33.367 No valid NVMe controllers or AIO or URING devices found 00:23:33.367 Initializing NVMe Controllers 00:23:33.367 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:33.367 Controller IO queue size 128, less than required. 00:23:33.367 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:33.367 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:23:33.367 Controller IO queue size 128, less than required. 00:23:33.367 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:33.367 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:23:33.367 WARNING: Some requested NVMe devices were skipped 00:23:33.367 10:01:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:23:35.902 Initializing NVMe Controllers 00:23:35.902 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:35.902 Controller IO queue size 128, less than required. 00:23:35.902 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:35.902 Controller IO queue size 128, less than required. 00:23:35.902 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:35.902 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:35.902 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:35.902 Initialization complete. Launching workers. 
00:23:35.902 00:23:35.902 ==================== 00:23:35.902 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:35.902 TCP transport: 00:23:35.902 polls: 11955 00:23:35.902 idle_polls: 8234 00:23:35.902 sock_completions: 3721 00:23:35.902 nvme_completions: 6245 00:23:35.902 submitted_requests: 9362 00:23:35.902 queued_requests: 1 00:23:35.902 00:23:35.902 ==================== 00:23:35.902 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:35.902 TCP transport: 00:23:35.902 polls: 11406 00:23:35.902 idle_polls: 7533 00:23:35.902 sock_completions: 3873 00:23:35.902 nvme_completions: 6733 00:23:35.902 submitted_requests: 10058 00:23:35.902 queued_requests: 1 00:23:35.902 ======================================================== 00:23:35.902 Latency(us) 00:23:35.902 Device Information : IOPS MiB/s Average min max 00:23:35.902 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1560.79 390.20 85318.14 41258.66 150393.26 00:23:35.902 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1682.77 420.69 76576.01 46990.58 107259.55 00:23:35.902 ======================================================== 00:23:35.902 Total : 3243.56 810.89 80782.69 41258.66 150393.26 00:23:35.902 00:23:35.902 10:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:23:35.902 10:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:35.902 10:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:23:35.902 10:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:35.902 10:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:23:35.902 10:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:35.902 10:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:23:36.161 10:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:36.161 10:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:23:36.161 10:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:36.161 10:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:36.161 rmmod nvme_tcp 00:23:36.161 rmmod nvme_fabrics 00:23:36.161 rmmod nvme_keyring 00:23:36.161 10:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:36.161 10:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:23:36.161 10:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:23:36.161 10:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 170538 ']' 00:23:36.161 10:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 170538 00:23:36.161 10:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 170538 ']' 00:23:36.161 10:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 170538 00:23:36.161 10:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:23:36.161 10:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:36.161 10:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 170538 00:23:36.161 10:01:45 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:36.161 10:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:36.161 10:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 170538' 00:23:36.161 killing process with pid 170538 00:23:36.161 10:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 170538 00:23:36.161 10:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 170538 00:23:37.537 10:01:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:37.537 10:01:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:37.537 10:01:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:37.537 10:01:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:23:37.537 10:01:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:23:37.537 10:01:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:37.538 10:01:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:23:37.538 10:01:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:37.538 10:01:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:37.538 10:01:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:37.538 10:01:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:37.538 10:01:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:40.074 10:01:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:40.074 00:23:40.074 real 0m25.714s 00:23:40.074 user 1m6.165s 00:23:40.074 sys 0m8.914s 00:23:40.074 10:01:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:40.074 10:01:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:40.074 ************************************ 00:23:40.074 END TEST nvmf_perf 00:23:40.074 ************************************ 00:23:40.074 10:01:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:40.074 10:01:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:40.074 10:01:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:40.074 10:01:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.074 ************************************ 00:23:40.074 START TEST nvmf_fio_host 00:23:40.074 ************************************ 00:23:40.074 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:40.074 * Looking for test storage... 
00:23:40.074 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:40.074 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:40.074 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:23:40.074 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:40.074 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:40.074 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:40.074 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:40.074 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:40.074 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:40.074 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:40.074 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:40.074 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:40.074 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:40.074 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:40.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:40.075 --rc genhtml_branch_coverage=1 00:23:40.075 --rc genhtml_function_coverage=1 00:23:40.075 --rc genhtml_legend=1 00:23:40.075 --rc geninfo_all_blocks=1 00:23:40.075 --rc geninfo_unexecuted_blocks=1 00:23:40.075 00:23:40.075 ' 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:40.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:40.075 --rc genhtml_branch_coverage=1 00:23:40.075 --rc genhtml_function_coverage=1 00:23:40.075 --rc genhtml_legend=1 00:23:40.075 --rc geninfo_all_blocks=1 00:23:40.075 --rc geninfo_unexecuted_blocks=1 00:23:40.075 00:23:40.075 ' 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:40.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:40.075 --rc genhtml_branch_coverage=1 00:23:40.075 --rc genhtml_function_coverage=1 00:23:40.075 --rc genhtml_legend=1 00:23:40.075 --rc geninfo_all_blocks=1 00:23:40.075 --rc geninfo_unexecuted_blocks=1 00:23:40.075 00:23:40.075 ' 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:40.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:40.075 --rc genhtml_branch_coverage=1 00:23:40.075 --rc genhtml_function_coverage=1 00:23:40.075 --rc genhtml_legend=1 00:23:40.075 --rc geninfo_all_blocks=1 00:23:40.075 --rc geninfo_unexecuted_blocks=1 00:23:40.075 00:23:40.075 ' 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:40.075 10:01:49 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:40.075 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:40.076 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:40.076 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:40.076 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:40.076 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:40.076 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:40.076 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:40.076 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:40.076 
10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:23:40.076 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:40.076 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:40.076 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:40.076 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:40.076 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:40.076 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:40.076 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:40.076 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:40.076 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:40.076 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:40.076 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:23:40.076 10:01:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.645 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:46.645 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:23:46.645 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:46.645 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:46.645 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:46.645 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:46.645 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:46.645 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:23:46.645 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:46.645 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:46.646 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:46.646 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:46.646 Found net devices under 0000:af:00.0: cvl_0_0 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:46.646 Found net devices under 0000:af:00.1: cvl_0_1 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:46.646 10:01:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:46.646 10:01:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:46.646 10:01:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:46.646 10:01:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:46.646 10:01:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:46.646 10:01:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:46.646 10:01:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:46.646 10:01:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:46.646 10:01:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:46.646 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:46.646 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:23:46.646 00:23:46.646 --- 10.0.0.2 ping statistics --- 00:23:46.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.646 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:23:46.646 10:01:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:46.646 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:46.646 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:23:46.646 00:23:46.646 --- 10.0.0.1 ping statistics --- 00:23:46.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.646 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:23:46.646 10:01:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:46.646 10:01:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:23:46.646 10:01:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:46.646 10:01:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:46.646 10:01:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:46.646 10:01:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:46.646 10:01:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:46.646 10:01:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:46.646 10:01:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:46.646 10:01:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:23:46.646 10:01:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:23:46.646 10:01:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:46.646 10:01:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.646 10:01:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=177092 00:23:46.646 10:01:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:46.646 10:01:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:46.646 10:01:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 177092 00:23:46.646 10:01:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 177092 ']' 00:23:46.646 10:01:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:46.647 10:01:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:46.647 10:01:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:46.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:46.647 10:01:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:46.647 10:01:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.906 [2024-12-11 10:01:56.238725] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
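The nvmf_tcp_init sequence traced above is what isolates the two E810 ports from each other: one port (cvl_0_0) is moved into a private network namespace so that target and initiator live on the same host yet exchange real NVMe/TCP traffic over the physical link. A minimal sketch of that sequence, using the interface names, addresses, and namespace taken from this trace (the harness runs as root and tags its iptables rule with an SPDK_NVMF comment for later cleanup; both details are simplified here):

    # Target port gets its own namespace; the initiator port stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # Address both ends: 10.0.0.1 = initiator (root ns), 10.0.0.2 = target (inside the ns).
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    # Bring the links up, including loopback inside the namespace.
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Open the NVMe/TCP port on the initiator side, then verify reachability both ways.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The sub-millisecond round-trip times in the ping output above are consistent with two NIC ports talking directly over the wire rather than through a software loopback.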
00:23:46.906 [2024-12-11 10:01:56.238767] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:46.906 [2024-12-11 10:01:56.321656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:46.906 [2024-12-11 10:01:56.362760] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:46.906 [2024-12-11 10:01:56.362796] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:46.906 [2024-12-11 10:01:56.362803] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:46.906 [2024-12-11 10:01:56.362809] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:46.906 [2024-12-11 10:01:56.362814] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:46.906 [2024-12-11 10:01:56.364362] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:46.906 [2024-12-11 10:01:56.364393] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:46.906 [2024-12-11 10:01:56.364499] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:46.906 [2024-12-11 10:01:56.364500] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:23:46.906 10:01:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:46.906 10:01:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:23:46.906 10:01:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:47.165 [2024-12-11 10:01:56.637541] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:47.165 10:01:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:23:47.165 10:01:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:47.165 10:01:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.165 10:01:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:23:47.424 Malloc1 00:23:47.424 10:01:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:47.682 10:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:47.941 10:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:47.941 [2024-12-11 10:01:57.484423] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:47.941 10:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:48.200 10:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:23:48.200 10:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:48.200 10:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:48.200 10:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:48.200 10:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:48.200 10:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:48.200 10:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:48.200 10:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:23:48.200 10:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:48.200 10:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:48.200 10:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:48.200 10:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:23:48.200 10:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:48.200 10:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:48.200 10:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:48.200 10:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:48.200 10:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:48.200 10:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:48.200 10:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:48.200 10:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:48.200 10:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:48.200 10:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:48.200 10:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:48.459 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:48.459 fio-3.35 00:23:48.459 Starting 1 thread 00:23:50.992 00:23:50.992 test: (groupid=0, jobs=1): 
err= 0: pid=177665: Wed Dec 11 10:02:00 2024 00:23:50.992 read: IOPS=11.8k, BW=46.2MiB/s (48.4MB/s)(92.6MiB/2005msec) 00:23:50.992 slat (nsec): min=1527, max=246443, avg=1697.03, stdev=2205.60 00:23:50.992 clat (usec): min=3050, max=10932, avg=5965.70, stdev=488.09 00:23:50.992 lat (usec): min=3087, max=10934, avg=5967.40, stdev=488.03 00:23:50.992 clat percentiles (usec): 00:23:50.992 | 1.00th=[ 4883], 5.00th=[ 5211], 10.00th=[ 5407], 20.00th=[ 5604], 00:23:50.992 | 30.00th=[ 5735], 40.00th=[ 5866], 50.00th=[ 5932], 60.00th=[ 6063], 00:23:50.992 | 70.00th=[ 6194], 80.00th=[ 6325], 90.00th=[ 6521], 95.00th=[ 6718], 00:23:50.992 | 99.00th=[ 7111], 99.50th=[ 7832], 99.90th=[ 8848], 99.95th=[ 9241], 00:23:50.992 | 99.99th=[10421] 00:23:50.992 bw ( KiB/s): min=46512, max=47784, per=99.98%, avg=47298.00, stdev=571.25, samples=4 00:23:50.992 iops : min=11628, max=11946, avg=11824.50, stdev=142.81, samples=4 00:23:50.992 write: IOPS=11.8k, BW=46.0MiB/s (48.2MB/s)(92.2MiB/2005msec); 0 zone resets 00:23:50.992 slat (nsec): min=1559, max=224168, avg=1764.48, stdev=1737.85 00:23:50.992 clat (usec): min=2404, max=9073, avg=4848.79, stdev=417.55 00:23:50.992 lat (usec): min=2419, max=9075, avg=4850.55, stdev=417.57 00:23:50.992 clat percentiles (usec): 00:23:50.992 | 1.00th=[ 3982], 5.00th=[ 4228], 10.00th=[ 4424], 20.00th=[ 4555], 00:23:50.992 | 30.00th=[ 4621], 40.00th=[ 4752], 50.00th=[ 4817], 60.00th=[ 4948], 00:23:50.992 | 70.00th=[ 5014], 80.00th=[ 5145], 90.00th=[ 5276], 95.00th=[ 5473], 00:23:50.992 | 99.00th=[ 5932], 99.50th=[ 6849], 99.90th=[ 7832], 99.95th=[ 8225], 00:23:50.992 | 99.99th=[ 8979] 00:23:50.992 bw ( KiB/s): min=46400, max=47616, per=99.98%, avg=47072.00, stdev=502.96, samples=4 00:23:50.992 iops : min=11600, max=11904, avg=11768.00, stdev=125.74, samples=4 00:23:50.992 lat (msec) : 4=0.61%, 10=99.38%, 20=0.01% 00:23:50.992 cpu : usr=71.96%, sys=27.05%, ctx=84, majf=0, minf=3 00:23:50.992 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:23:50.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:50.992 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:50.992 issued rwts: total=23712,23600,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:50.992 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:50.992 00:23:50.992 Run status group 0 (all jobs): 00:23:50.992 READ: bw=46.2MiB/s (48.4MB/s), 46.2MiB/s-46.2MiB/s (48.4MB/s-48.4MB/s), io=92.6MiB (97.1MB), run=2005-2005msec 00:23:50.992 WRITE: bw=46.0MiB/s (48.2MB/s), 46.0MiB/s-46.0MiB/s (48.2MB/s-48.2MB/s), io=92.2MiB (96.7MB), run=2005-2005msec 00:23:50.992 10:02:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:50.992 10:02:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:50.992 10:02:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:50.992 10:02:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:50.992 10:02:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # 
local sanitizers 00:23:50.992 10:02:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:50.992 10:02:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:23:50.992 10:02:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:50.992 10:02:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:50.992 10:02:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:50.992 10:02:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:23:50.992 10:02:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:50.992 10:02:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:50.992 10:02:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:50.992 10:02:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:50.992 10:02:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:50.992 10:02:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:50.992 10:02:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:50.992 10:02:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:50.992 10:02:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:50.992 10:02:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:50.992 10:02:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:51.251 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:23:51.251 fio-3.35 00:23:51.251 Starting 1 thread 00:23:53.784 00:23:53.784 test: (groupid=0, jobs=1): err= 0: pid=178183: Wed Dec 11 10:02:03 2024 00:23:53.784 read: IOPS=11.1k, BW=174MiB/s (182MB/s)(349MiB/2005msec) 00:23:53.784 slat (nsec): min=2526, max=90425, avg=2974.06, stdev=1305.84 00:23:53.784 clat (usec): min=2004, max=13858, avg=6638.73, stdev=1542.35 00:23:53.784 lat (usec): min=2007, max=13873, avg=6641.71, stdev=1542.50 00:23:53.784 clat percentiles (usec): 00:23:53.784 | 1.00th=[ 3523], 5.00th=[ 4228], 10.00th=[ 4621], 20.00th=[ 5276], 00:23:53.784 | 30.00th=[ 5669], 40.00th=[ 6128], 50.00th=[ 6587], 60.00th=[ 7177], 00:23:53.784 | 70.00th=[ 7504], 80.00th=[ 7898], 90.00th=[ 8586], 95.00th=[ 9241], 00:23:53.784 | 99.00th=[10421], 99.50th=[10945], 99.90th=[12256], 99.95th=[13304], 00:23:53.784 | 99.99th=[13829] 00:23:53.784 bw ( KiB/s): min=75872, max=99840, per=49.98%, avg=89040.00, stdev=10171.25, samples=4 00:23:53.784 iops : min= 4742, max= 6240, avg=5565.00, stdev=635.70, samples=4 00:23:53.784 write: IOPS=6543, BW=102MiB/s (107MB/s)(183MiB/1785msec); 0 zone resets 00:23:53.784 
slat (usec): min=28, max=381, avg=31.81, stdev= 7.69 00:23:53.784 clat (usec): min=3777, max=15223, avg=8542.45, stdev=1499.11 00:23:53.784 lat (usec): min=3808, max=15335, avg=8574.26, stdev=1500.86 00:23:53.784 clat percentiles (usec): 00:23:53.784 | 1.00th=[ 5800], 5.00th=[ 6390], 10.00th=[ 6783], 20.00th=[ 7242], 00:23:53.784 | 30.00th=[ 7635], 40.00th=[ 8029], 50.00th=[ 8356], 60.00th=[ 8717], 00:23:53.784 | 70.00th=[ 9241], 80.00th=[ 9765], 90.00th=[10683], 95.00th=[11207], 00:23:53.784 | 99.00th=[12518], 99.50th=[12911], 99.90th=[14746], 99.95th=[14877], 00:23:53.784 | 99.99th=[15139] 00:23:53.784 bw ( KiB/s): min=79040, max=104128, per=88.81%, avg=92984.00, stdev=10635.12, samples=4 00:23:53.784 iops : min= 4940, max= 6508, avg=5811.50, stdev=664.70, samples=4 00:23:53.784 lat (msec) : 4=2.00%, 10=91.08%, 20=6.93% 00:23:53.784 cpu : usr=85.43%, sys=13.82%, ctx=48, majf=0, minf=3 00:23:53.784 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:23:53.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:53.784 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:53.784 issued rwts: total=22325,11681,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:53.784 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:53.784 00:23:53.784 Run status group 0 (all jobs): 00:23:53.784 READ: bw=174MiB/s (182MB/s), 174MiB/s-174MiB/s (182MB/s-182MB/s), io=349MiB (366MB), run=2005-2005msec 00:23:53.784 WRITE: bw=102MiB/s (107MB/s), 102MiB/s-102MiB/s (107MB/s-107MB/s), io=183MiB (191MB), run=1785-1785msec 00:23:53.784 10:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:53.784 10:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:23:53.784 10:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:53.784 10:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:23:53.784 10:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:23:53.784 10:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:53.784 10:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:23:53.784 10:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:53.784 10:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:23:53.784 10:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:53.784 10:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:53.784 rmmod nvme_tcp 00:23:53.784 rmmod nvme_fabrics 00:23:53.784 rmmod nvme_keyring 00:23:54.043 10:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:54.043 10:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:23:54.043 10:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:23:54.043 10:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 177092 ']' 00:23:54.043 10:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 177092 00:23:54.043 10:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 177092 ']' 00:23:54.043 10:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- 
# kill -0 177092 00:23:54.043 10:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:23:54.043 10:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:54.043 10:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 177092 00:23:54.043 10:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:54.043 10:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:54.043 10:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 177092' 00:23:54.043 killing process with pid 177092 00:23:54.043 10:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 177092 00:23:54.043 10:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 177092 00:23:54.043 10:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:54.043 10:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:54.043 10:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:54.043 10:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:23:54.043 10:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:23:54.043 10:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:54.043 10:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:23:54.302 10:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:54.302 10:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:54.302 10:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:54.302 10:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:54.302 10:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:56.206 10:02:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:56.206 00:23:56.206 real 0m16.485s 00:23:56.206 user 0m45.785s 00:23:56.206 sys 0m7.135s 00:23:56.206 10:02:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:56.206 10:02:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.206 ************************************ 00:23:56.206 END TEST nvmf_fio_host 00:23:56.206 ************************************ 00:23:56.206 10:02:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:56.206 10:02:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:56.206 10:02:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:56.206 10:02:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.206 ************************************ 00:23:56.206 START TEST nvmf_failover 00:23:56.206 ************************************ 00:23:56.206 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:56.465 * Looking for test storage... 00:23:56.465 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:56.465 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:56.465 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:23:56.465 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:56.465 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:56.465 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:56.465 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:56.465 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:56.465 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:23:56.465 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:23:56.465 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:23:56.465 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:23:56.465 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:23:56.465 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:23:56.465 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:23:56.465 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:56.465 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:23:56.465 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:23:56.465 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:56.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.466 --rc genhtml_branch_coverage=1 00:23:56.466 --rc genhtml_function_coverage=1 00:23:56.466 --rc genhtml_legend=1 00:23:56.466 --rc geninfo_all_blocks=1 00:23:56.466 --rc geninfo_unexecuted_blocks=1 00:23:56.466 00:23:56.466 ' 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:56.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.466 --rc genhtml_branch_coverage=1 00:23:56.466 --rc genhtml_function_coverage=1 00:23:56.466 --rc genhtml_legend=1 00:23:56.466 --rc geninfo_all_blocks=1 00:23:56.466 --rc geninfo_unexecuted_blocks=1 00:23:56.466 00:23:56.466 ' 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:56.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.466 --rc genhtml_branch_coverage=1 00:23:56.466 --rc genhtml_function_coverage=1 00:23:56.466 --rc genhtml_legend=1 00:23:56.466 --rc geninfo_all_blocks=1 00:23:56.466 --rc geninfo_unexecuted_blocks=1 00:23:56.466 00:23:56.466 ' 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:56.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.466 --rc genhtml_branch_coverage=1 00:23:56.466 --rc genhtml_function_coverage=1 00:23:56.466 --rc genhtml_legend=1 00:23:56.466 --rc geninfo_all_blocks=1 00:23:56.466 --rc geninfo_unexecuted_blocks=1 00:23:56.466 00:23:56.466 ' 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:56.466 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
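host/failover.sh has just pointed rpc_py at the target's default JSON-RPC socket (/var/tmp/spdk.sock); the next trace line gives bdevperf its own socket, /var/tmp/bdevperf.sock, so the two applications can be driven independently. Condensed from the commands that appear later in this trace, the control flow of the failover test is roughly the following sketch (backgrounding with & stands in for the harness's own process management, and the waitforlisten/sleep plumbing is omitted):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Target side, provisioned over /var/tmp/spdk.sock (the rpc.py default).
    $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do   # one active path plus spare listeners for failover
        $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s "$port"
    done

    # Initiator side: bdevperf starts paused (-z) on its own RPC socket, then the same
    # subsystem is attached through two listeners; -x failover arms the alternate path.
    $SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover

    # Start I/O, then remove the active listener to force a path failover under load.
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    $SPDK/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420

Keeping the target and bdevperf on separate UNIX-domain sockets is what lets the test reconfigure the target (remove a listener) while I/O is in flight on the initiator, which is exactly the event the trace below exercises.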
00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:23:56.466 10:02:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:03.044 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:03.044 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:03.044 Found net devices under 0000:af:00.0: cvl_0_0 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:03.044 Found net devices under 0000:af:00.1: cvl_0_1 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:03.044 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:03.045 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:03.045 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:03.045 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:03.045 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:03.045 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:03.045 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:03.045 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:24:03.045 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:03.045 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:03.045 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:03.045 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:03.045 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:03.045 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:03.045 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:03.045 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:03.045 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:03.045 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:03.303 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:03.303 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:03.303 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:03.303 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:03.303 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:03.303 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.292 ms 00:24:03.303 00:24:03.303 --- 10.0.0.2 ping statistics --- 00:24:03.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.303 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:24:03.303 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:03.303 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:03.303 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:24:03.303 00:24:03.303 --- 10.0.0.1 ping statistics --- 00:24:03.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.303 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:24:03.303 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:03.303 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:24:03.303 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:03.303 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:03.303 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:03.303 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:03.303 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:03.303 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:03.303 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:03.303 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:03.303 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:03.303 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:03.303 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:03.303 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=182477 00:24:03.303 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 182477 00:24:03.303 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:03.303 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 182477 ']' 00:24:03.303 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:03.303 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:03.303 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:03.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:03.303 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:03.303 10:02:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:03.303 [2024-12-11 10:02:12.761789] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:24:03.303 [2024-12-11 10:02:12.761832] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:03.303 [2024-12-11 10:02:12.845873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:03.562 [2024-12-11 10:02:12.886333] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
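This second target is launched with -m 0xE, and the EAL notice above accordingly reports three available cores; the reactor messages on the next line land on cores 1, 2, and 3, whereas the earlier fio-host target used -m 0xF and got four reactors on cores 0-3. The core mask is simply a bitmap of CPU ids, as this small bash check illustrates:

    # 0xE = 0b1110: bits 1-3 set, bit 0 clear -> reactors on cores 1-3, core 0 left free.
    mask=0xE
    for core in 0 1 2 3; do
        (( mask >> core & 1 )) && echo "core $core: reactor"
    done

Leaving core 0 out of the mask keeps one CPU free for the kernel networking stack and the test scripts themselves.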
00:24:03.562 [2024-12-11 10:02:12.886366] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:03.562 [2024-12-11 10:02:12.886373] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:03.562 [2024-12-11 10:02:12.886379] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:03.562 [2024-12-11 10:02:12.886384] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:03.562 [2024-12-11 10:02:12.887808] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:24:03.562 [2024-12-11 10:02:12.887913] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:24:03.562 [2024-12-11 10:02:12.887915] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:24:04.129 10:02:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:04.129 10:02:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:24:04.129 10:02:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:04.129 10:02:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:04.129 10:02:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:04.129 10:02:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:04.129 10:02:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:04.388 [2024-12-11 10:02:13.818062] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:04.388 10:02:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:04.646 Malloc0 00:24:04.646 10:02:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:04.905 10:02:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:04.905 10:02:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:05.163 [2024-12-11 10:02:14.614081] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:05.163 10:02:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:05.422 [2024-12-11 10:02:14.798633] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:05.422 10:02:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:05.422 [2024-12-11 10:02:14.991294] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:24:05.686 10:02:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=182779 00:24:05.686 10:02:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:05.686 10:02:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:05.686 10:02:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 182779 /var/tmp/bdevperf.sock 00:24:05.686 10:02:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 182779 ']' 00:24:05.686 10:02:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:05.686 10:02:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:05.686 10:02:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:05.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:05.686 10:02:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:05.686 10:02:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:05.945 10:02:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:05.945 10:02:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:24:05.945 10:02:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:06.204 NVMe0n1 00:24:06.204 10:02:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:06.462 00:24:06.723 10:02:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:06.723 10:02:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=182964 00:24:06.723 10:02:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:24:07.773 10:02:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:07.773 [2024-12-11 10:02:17.238627] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb88b0 is same with the state(6) to be set 00:24:07.773 [2024-12-11 10:02:17.238671] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb88b0 is same with the state(6) to be set 00:24:07.773 [2024-12-11 10:02:17.238679] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb88b0 is same with the state(6) to be set 00:24:07.773 [2024-12-11 
00:24:07.774 10:02:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:24:11.061 10:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:24:11.320 00
00:24:11.320 10:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:24:11.578 10:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:24:14.867 10:02:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:14.867 [2024-12-11 10:02:24.085702] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:14.867 10:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:24:15.804 10:02:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:24:15.804 [2024-12-11 10:02:25.301597] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e05a30 is same with the state(6) to be set
[... last message repeated 25 more times, through 10:02:25.301788 ...]
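The failovers in this test are driven entirely from the target side by moving the subsystem's listeners. Condensed from failover.sh@43 through @57 above, the sequence is the following sketch (same workspace paths; the target's default RPC socket is assumed):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    NQN=nqn.2016-06.io.spdk:cnode1

    # 1st failover: drop 4420; in-flight I/O is aborted and retried via 4421.
    "$SPDK/scripts/rpc.py" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    sleep 3

    # Add a third path on the initiator, then force a 2nd failover off 4421.
    "$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n "$NQN" -x failover
    "$SPDK/scripts/rpc.py" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421
    sleep 3

    # Fail back: bring 4420 up again, then retire 4422.
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    sleep 1
    "$SPDK/scripts/rpc.py" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4422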
00:24:15.805 10:02:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 182964
00:24:22.380 {
00:24:22.380   "results": [
00:24:22.380     {
00:24:22.380       "job": "NVMe0n1",
00:24:22.380       "core_mask": "0x1",
00:24:22.380       "workload": "verify",
00:24:22.380       "status": "finished",
00:24:22.380       "verify_range": {
00:24:22.380         "start": 0,
00:24:22.380         "length": 16384
00:24:22.380       },
00:24:22.380       "queue_depth": 128,
00:24:22.380       "io_size": 4096,
00:24:22.380       "runtime": 15.00343,
00:24:22.380       "iops": 11324.01057624823,
00:24:22.380       "mibps": 44.23441631346965,
00:24:22.380       "io_failed": 6149,
00:24:22.380       "io_timeout": 0,
00:24:22.380       "avg_latency_us": 10886.836571497815,
00:24:22.380       "min_latency_us": 407.6495238095238,
00:24:22.380       "max_latency_us": 23592.96
00:24:22.380     }
00:24:22.380   ],
00:24:22.380   "core_count": 1
00:24:22.380 }
00:24:22.380 10:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 182779
00:24:22.380 10:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 182779 ']'
00:24:22.380 10:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 182779
00:24:22.380 10:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:24:22.380 10:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:22.380 10:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 182779
00:24:22.380 10:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:24:22.380 10:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:24:22.380 10:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 182779'
00:24:22.380 killing process with pid 182779
00:24:22.380 10:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 182779
00:24:22.380 10:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 182779
00:24:22.380 10:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:24:22.380 [2024-12-11 10:02:15.067580] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization...
00:24:22.380 [2024-12-11 10:02:15.067638] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid182779 ]
00:24:22.380 [2024-12-11 10:02:15.152607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:22.380 [2024-12-11 10:02:15.192412] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:24:22.380 Running I/O for 15 seconds...
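The JSON block above, emitted when the harness waits on run_test_pid, is bdevperf's result record; the nonzero io_failed corresponds to I/O caught in flight while listeners were being torn down (see the SQ DELETION completions in the try.txt dump below), while the run itself still finishes. A quick way to pull the headline numbers out of a saved copy; results.json and an installed jq are conveniences assumed here, not part of the harness:

    # Job name, throughput and failure count from the bdevperf result.
    jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, io_failed=\(.io_failed)"' results.json

    # Consistency check: mibps = iops * io_size / 2^20.
    # Here: 11324.01057624823 * 4096 / 1048576 = 44.2344..., matching "mibps".
    jq -r '.results[] | .iops * 4096 / 1048576' results.json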
00:24:22.380 11275.00 IOPS, 44.04 MiB/s [2024-12-11T09:02:31.955Z] [2024-12-11 10:02:17.239479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:100096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:22.380 [2024-12-11 10:02:17.239512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same READ print_command / ABORTED - SQ DELETION completion pair repeated 19 more times for lba:100104 through lba:100248 (cids vary) ...]
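Abort dumps like the one above are easier to correlate with specific failovers if the initiator's path state is sampled while the run is going. A hedged sketch using the stock bdev_nvme_get_controllers RPC; the exact output fields vary between SPDK versions, so treat them as illustrative:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Sample which path controller NVMe0 is using, once per second for the 15 s run.
    for _ in $(seq 1 15); do
        "$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers -n NVMe0
        sleep 1
    done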
[... same READ pair repeated for lba:100256 and lba:100264 ...]
00:24:22.380 [2024-12-11 10:02:17.239842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:100336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:22.380 [2024-12-11 10:02:17.239849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same WRITE print_command / ABORTED - SQ DELETION completion pair repeated for lba:100344 through lba:101000, with one remaining READ pair at lba:100272 (cids vary) ...]
00:24:22.383 [2024-12-11 10:02:17.241092] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:22.383 [2024-12-11 10:02:17.241099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101008 len:8 PRP1 0x0 PRP2 0x0
00:24:22.383 [2024-12-11 10:02:17.241106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same aborting queued i/o / Command completed manually sequence repeated 13 more times for lba:101016 through lba:101112 ...]
00:24:22.383 [2024-12-11 10:02:17.241421] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:22.383 [2024-12-11 10:02:17.241426] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:22.383 [2024-12-11 10:02:17.241431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100280 len:8 PRP1 0x0 PRP2
0x0 00:24:22.383 [2024-12-11 10:02:17.241437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.383 [2024-12-11 10:02:17.241444] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:22.383 [2024-12-11 10:02:17.241449] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:22.383 [2024-12-11 10:02:17.241454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100288 len:8 PRP1 0x0 PRP2 0x0 00:24:22.383 [2024-12-11 10:02:17.241460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.383 [2024-12-11 10:02:17.241466] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:22.383 [2024-12-11 10:02:17.241471] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:22.383 [2024-12-11 10:02:17.254841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100296 len:8 PRP1 0x0 PRP2 0x0 00:24:22.383 [2024-12-11 10:02:17.254852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.384 [2024-12-11 10:02:17.254860] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:22.384 [2024-12-11 10:02:17.254866] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:22.384 [2024-12-11 10:02:17.254871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100304 len:8 PRP1 0x0 PRP2 0x0 00:24:22.384 [2024-12-11 10:02:17.254878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.384 [2024-12-11 10:02:17.254885] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:22.384 [2024-12-11 10:02:17.254890] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:22.384 [2024-12-11 10:02:17.254898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100312 len:8 PRP1 0x0 PRP2 0x0 00:24:22.384 [2024-12-11 10:02:17.254904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.384 [2024-12-11 10:02:17.254911] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:22.384 [2024-12-11 10:02:17.254916] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:22.384 [2024-12-11 10:02:17.254921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100320 len:8 PRP1 0x0 PRP2 0x0 00:24:22.384 [2024-12-11 10:02:17.254927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.384 [2024-12-11 10:02:17.254934] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:22.384 [2024-12-11 10:02:17.254940] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:22.384 [2024-12-11 10:02:17.254945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100328 len:8 PRP1 0x0 PRP2 0x0 00:24:22.384 [2024-12-11 10:02:17.254951] 
00:24:22.384 [2024-12-11 10:02:17.254997] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:24:22.384 [2024-12-11 10:02:17.255019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:22.384 [2024-12-11 10:02:17.255026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 3 similar *NOTICE* pairs elided: ASYNC EVENT REQUEST (0c) qid:0 cid:1-3 aborted with SQ DELETION (00/08) ...]
00:24:22.384 [2024-12-11 10:02:17.255075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:24:22.384 [2024-12-11 10:02:17.255103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ad1930 (9): Bad file descriptor
00:24:22.384 [2024-12-11 10:02:17.258633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:24:22.384 [2024-12-11 10:02:17.323935] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:24:22.384 10965.50 IOPS, 42.83 MiB/s [2024-12-11T09:02:31.959Z] 11154.00 IOPS, 43.57 MiB/s [2024-12-11T09:02:31.959Z] 11171.00 IOPS, 43.64 MiB/s [2024-12-11T09:02:31.959Z]
00:24:22.384 [2024-12-11 10:02:20.877115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:56384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:22.384 [2024-12-11 10:02:20.877156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated *NOTICE*/*ERROR* records elided: in-flight WRITE commands lba:56392-57168 and READ commands lba:56200-56248 aborted with SQ DELETION (00/08); queued WRITE requests lba:57176-57216 and queued READ requests lba:56256-56368 completed manually ("aborting queued i/o") with the same status ...]
00:24:22.388 [2024-12-11 10:02:20.879228]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:56376 len:8 PRP1 0x0 PRP2 0x0 00:24:22.388 [2024-12-11 10:02:20.879234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.388 [2024-12-11 10:02:20.879275] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:24:22.388 [2024-12-11 10:02:20.879296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:22.388 [2024-12-11 10:02:20.879304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.388 [2024-12-11 10:02:20.879311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:22.388 [2024-12-11 10:02:20.879317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.388 [2024-12-11 10:02:20.879324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:22.388 [2024-12-11 10:02:20.879330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.388 [2024-12-11 10:02:20.879337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:22.388 [2024-12-11 10:02:20.879345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.388 [2024-12-11 10:02:20.879352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:24:22.388 [2024-12-11 10:02:20.879374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ad1930 (9): Bad file descriptor 00:24:22.388 [2024-12-11 10:02:20.893774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:24:22.388 [2024-12-11 10:02:20.920592] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:24:22.388 11095.00 IOPS, 43.34 MiB/s [2024-12-11T09:02:31.963Z] 11161.67 IOPS, 43.60 MiB/s [2024-12-11T09:02:31.963Z] 11226.86 IOPS, 43.85 MiB/s [2024-12-11T09:02:31.963Z] 11242.12 IOPS, 43.91 MiB/s [2024-12-11T09:02:31.963Z] 11289.11 IOPS, 44.10 MiB/s [2024-12-11T09:02:31.963Z] [2024-12-11 10:02:25.302281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:76256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.388 [2024-12-11 10:02:25.302312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.388 [2024-12-11 10:02:25.302328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:76264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.388 [2024-12-11 10:02:25.302335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.388 [2024-12-11 10:02:25.302344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:76272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.388 [2024-12-11 10:02:25.302351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.388 [2024-12-11 10:02:25.302359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:76280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.388 [2024-12-11 10:02:25.302366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.388 [2024-12-11 10:02:25.302374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:76288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.388 [2024-12-11 10:02:25.302380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.388 [2024-12-11 10:02:25.302392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:76296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.388 [2024-12-11 10:02:25.302399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.388 [2024-12-11 10:02:25.302407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:76304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.388 [2024-12-11 10:02:25.302413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.388 [2024-12-11 10:02:25.302421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:76312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.388 [2024-12-11 10:02:25.302428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.388 [2024-12-11 10:02:25.302436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:76320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.388 [2024-12-11 10:02:25.302442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.388 [2024-12-11 10:02:25.302450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 
lba:76328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.388 [2024-12-11 10:02:25.302457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.388 [2024-12-11 10:02:25.302465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:76336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.388 [2024-12-11 10:02:25.302472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.388 [2024-12-11 10:02:25.302479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:76344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.388 [2024-12-11 10:02:25.302486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.388 [2024-12-11 10:02:25.302494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:76352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.388 [2024-12-11 10:02:25.302500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.388 [2024-12-11 10:02:25.302508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:76360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.388 [2024-12-11 10:02:25.302514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.389 [2024-12-11 10:02:25.302523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:76424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.389 [2024-12-11 10:02:25.302530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.389 [2024-12-11 10:02:25.302538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:76432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.389 [2024-12-11 10:02:25.302544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.389 [2024-12-11 10:02:25.302552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:76440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.389 [2024-12-11 10:02:25.302560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.389 [2024-12-11 10:02:25.302567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:76448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.389 [2024-12-11 10:02:25.302575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.389 [2024-12-11 10:02:25.302583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:76456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.389 [2024-12-11 10:02:25.302590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.389 [2024-12-11 10:02:25.302597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:76464 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:22.389 [2024-12-11 10:02:25.302604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.389 [2024-12-11 10:02:25.302611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:76472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.389 [2024-12-11 10:02:25.302618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.389 [2024-12-11 10:02:25.302626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:76480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.389 [2024-12-11 10:02:25.302632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.389 [2024-12-11 10:02:25.302640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:76488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.389 [2024-12-11 10:02:25.302646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.389 [2024-12-11 10:02:25.302654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:76496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.389 [2024-12-11 10:02:25.302660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.389 [2024-12-11 10:02:25.302668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:76504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.389 [2024-12-11 10:02:25.302675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.389 [2024-12-11 10:02:25.302683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:76512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.389 [2024-12-11 10:02:25.302689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.389 [2024-12-11 10:02:25.302697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:76520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.389 [2024-12-11 10:02:25.302703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.389 [2024-12-11 10:02:25.302711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:76528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.389 [2024-12-11 10:02:25.302718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.389 [2024-12-11 10:02:25.302726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:76536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.389 [2024-12-11 10:02:25.302732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.389 [2024-12-11 10:02:25.302740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:76544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.389 [2024-12-11 10:02:25.302747] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.389 [2024-12-11 10:02:25.302755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:76552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.389 [2024-12-11 10:02:25.302763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.389 [2024-12-11 10:02:25.302771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:76560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.389 [2024-12-11 10:02:25.302777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.389 [2024-12-11 10:02:25.302785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:76568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.389 [2024-12-11 10:02:25.302791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.389 [2024-12-11 10:02:25.302799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:76576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.389 [2024-12-11 10:02:25.302806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.389 [2024-12-11 10:02:25.302814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:76584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.389 [2024-12-11 10:02:25.302820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.389 [2024-12-11 10:02:25.302828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:76592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.389 [2024-12-11 10:02:25.302835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.389 [2024-12-11 10:02:25.302842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:76600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.389 [2024-12-11 10:02:25.302849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.389 [2024-12-11 10:02:25.302857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:76608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.389 [2024-12-11 10:02:25.302863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.389 [2024-12-11 10:02:25.302871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:76616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.389 [2024-12-11 10:02:25.302877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.389 [2024-12-11 10:02:25.302884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:76624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.389 [2024-12-11 10:02:25.302891] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.389 [2024-12-11 10:02:25.302898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:76632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.389 [2024-12-11 10:02:25.302905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.389 [2024-12-11 10:02:25.302912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.389 [2024-12-11 10:02:25.302919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.389 [2024-12-11 10:02:25.302926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:76648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.389 [2024-12-11 10:02:25.302932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.389 [2024-12-11 10:02:25.302942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:76656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.389 [2024-12-11 10:02:25.302948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.389 [2024-12-11 10:02:25.302956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:76664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.389 [2024-12-11 10:02:25.302962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.389 [2024-12-11 10:02:25.302970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:76672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.389 [2024-12-11 10:02:25.302976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.389 [2024-12-11 10:02:25.302984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:76680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.390 [2024-12-11 10:02:25.302990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.390 [2024-12-11 10:02:25.302998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:76688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.390 [2024-12-11 10:02:25.303004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.390 [2024-12-11 10:02:25.303012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:76696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.390 [2024-12-11 10:02:25.303019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.390 [2024-12-11 10:02:25.303026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:76704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.390 [2024-12-11 10:02:25.303032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.390 [2024-12-11 10:02:25.303040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:76712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.390 [2024-12-11 10:02:25.303047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.390 [2024-12-11 10:02:25.303054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:76720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.390 [2024-12-11 10:02:25.303061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.390 [2024-12-11 10:02:25.303068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:76728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.390 [2024-12-11 10:02:25.303075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.390 [2024-12-11 10:02:25.303082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:76736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.390 [2024-12-11 10:02:25.303089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.390 [2024-12-11 10:02:25.303096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:76744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.390 [2024-12-11 10:02:25.303103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.390 [2024-12-11 10:02:25.303110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:76752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.390 [2024-12-11 10:02:25.303118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.390 [2024-12-11 10:02:25.303125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:76760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.390 [2024-12-11 10:02:25.303132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.390 [2024-12-11 10:02:25.303139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:76768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.390 [2024-12-11 10:02:25.303146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.390 [2024-12-11 10:02:25.303153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:76776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.390 [2024-12-11 10:02:25.303159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.390 [2024-12-11 10:02:25.303167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:76784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.390 [2024-12-11 10:02:25.303174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.390 
[2024-12-11 10:02:25.303181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:76792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.390 [2024-12-11 10:02:25.303187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.390 [2024-12-11 10:02:25.303195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:76800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.390 [2024-12-11 10:02:25.303202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.390 [2024-12-11 10:02:25.303209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:76808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.390 [2024-12-11 10:02:25.303216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.390 [2024-12-11 10:02:25.303228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:76816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.390 [2024-12-11 10:02:25.303235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.390 [2024-12-11 10:02:25.303242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:76824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.390 [2024-12-11 10:02:25.303249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.390 [2024-12-11 10:02:25.303257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:76832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.390 [2024-12-11 10:02:25.303263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.390 [2024-12-11 10:02:25.303270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:76840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.390 [2024-12-11 10:02:25.303277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.390 [2024-12-11 10:02:25.303285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:76848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.390 [2024-12-11 10:02:25.303291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.390 [2024-12-11 10:02:25.303299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:76856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.390 [2024-12-11 10:02:25.303310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.390 [2024-12-11 10:02:25.303317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:76864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.390 [2024-12-11 10:02:25.303324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.390 [2024-12-11 10:02:25.303332] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.390 [2024-12-11 10:02:25.303338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.390 [2024-12-11 10:02:25.303345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:76880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.390 [2024-12-11 10:02:25.303352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.390 [2024-12-11 10:02:25.303360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.390 [2024-12-11 10:02:25.303366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.390 [2024-12-11 10:02:25.303374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:76896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.390 [2024-12-11 10:02:25.303380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.390 [2024-12-11 10:02:25.303387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:76904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.390 [2024-12-11 10:02:25.303394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.390 [2024-12-11 10:02:25.303401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:76912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.390 [2024-12-11 10:02:25.303408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.390 [2024-12-11 10:02:25.303416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:76920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.390 [2024-12-11 10:02:25.303422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.390 [2024-12-11 10:02:25.303430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.390 [2024-12-11 10:02:25.303436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.390 [2024-12-11 10:02:25.303444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:76936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.390 [2024-12-11 10:02:25.303450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.390 [2024-12-11 10:02:25.303458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:76944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.390 [2024-12-11 10:02:25.303464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.390 [2024-12-11 10:02:25.303472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:30 nsid:1 lba:76952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.390 [2024-12-11 10:02:25.303479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.390 [2024-12-11 10:02:25.303488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:76960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.390 [2024-12-11 10:02:25.303494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.390 [2024-12-11 10:02:25.303502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.390 [2024-12-11 10:02:25.303508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.390 [2024-12-11 10:02:25.303516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:76976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.390 [2024-12-11 10:02:25.303522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.391 [2024-12-11 10:02:25.303529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:76984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.391 [2024-12-11 10:02:25.303535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.391 [2024-12-11 10:02:25.303544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:76992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.391 [2024-12-11 10:02:25.303550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.391 [2024-12-11 10:02:25.303557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.391 [2024-12-11 10:02:25.303564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.391 [2024-12-11 10:02:25.303571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:77008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.391 [2024-12-11 10:02:25.303577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.391 [2024-12-11 10:02:25.303585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:77016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.391 [2024-12-11 10:02:25.303591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.391 [2024-12-11 10:02:25.303599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:77024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.391 [2024-12-11 10:02:25.303605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.391 [2024-12-11 10:02:25.303613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:77032 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:22.391 [2024-12-11 10:02:25.303619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.391 [2024-12-11 10:02:25.303627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:77040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.391 [2024-12-11 10:02:25.303634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.391 [2024-12-11 10:02:25.303642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:77048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.391 [2024-12-11 10:02:25.303649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.391 [2024-12-11 10:02:25.303657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:77056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.391 [2024-12-11 10:02:25.303664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.391 [2024-12-11 10:02:25.303672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:77064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.391 [2024-12-11 10:02:25.303678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.391 [2024-12-11 10:02:25.303688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:77072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.391 [2024-12-11 10:02:25.303696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.391 [2024-12-11 10:02:25.303703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:77080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.391 [2024-12-11 10:02:25.303710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.391 [2024-12-11 10:02:25.303717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:77088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.391 [2024-12-11 10:02:25.303724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.391 [2024-12-11 10:02:25.303732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:77096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.391 [2024-12-11 10:02:25.303738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.391 [2024-12-11 10:02:25.303747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:77104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.391 [2024-12-11 10:02:25.303753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.391 [2024-12-11 10:02:25.303761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:77112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.391 [2024-12-11 
10:02:25.303767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.391 [2024-12-11 10:02:25.303777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:77120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.391 [2024-12-11 10:02:25.303783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.391 [2024-12-11 10:02:25.303791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:77128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.391 [2024-12-11 10:02:25.303798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.391 [2024-12-11 10:02:25.303806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.391 [2024-12-11 10:02:25.303813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.391 [2024-12-11 10:02:25.303820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:77144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.391 [2024-12-11 10:02:25.303827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.391 [2024-12-11 10:02:25.303835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:77152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.391 [2024-12-11 10:02:25.303841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.391 [2024-12-11 10:02:25.303851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:77160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.391 [2024-12-11 10:02:25.303857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.391 [2024-12-11 10:02:25.303866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:77168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.391 [2024-12-11 10:02:25.303872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.391 [2024-12-11 10:02:25.303880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:77176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.391 [2024-12-11 10:02:25.303886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.391 [2024-12-11 10:02:25.303894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:77184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.391 [2024-12-11 10:02:25.303900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.391 [2024-12-11 10:02:25.303908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:77192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.391 [2024-12-11 10:02:25.303915] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.391 [2024-12-11 10:02:25.303939] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:22.391 [2024-12-11 10:02:25.303946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77200 len:8 PRP1 0x0 PRP2 0x0 00:24:22.391 [2024-12-11 10:02:25.303953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.391 [2024-12-11 10:02:25.303963] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:22.391 [2024-12-11 10:02:25.303968] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:22.391 [2024-12-11 10:02:25.303973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77208 len:8 PRP1 0x0 PRP2 0x0 00:24:22.391 [2024-12-11 10:02:25.303979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.391 [2024-12-11 10:02:25.303986] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:22.391 [2024-12-11 10:02:25.303991] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:22.391 [2024-12-11 10:02:25.303996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77216 len:8 PRP1 0x0 PRP2 0x0 00:24:22.391 [2024-12-11 10:02:25.304002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.391 [2024-12-11 10:02:25.304009] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:22.391 [2024-12-11 10:02:25.304014] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:22.391 [2024-12-11 10:02:25.304020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77224 len:8 PRP1 0x0 PRP2 0x0 00:24:22.391 [2024-12-11 10:02:25.304026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.391 [2024-12-11 10:02:25.304033] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:22.391 [2024-12-11 10:02:25.304038] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:22.391 [2024-12-11 10:02:25.304043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77232 len:8 PRP1 0x0 PRP2 0x0 00:24:22.391 [2024-12-11 10:02:25.304049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.391 [2024-12-11 10:02:25.304057] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:22.391 [2024-12-11 10:02:25.304062] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:22.391 [2024-12-11 10:02:25.304068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77240 len:8 PRP1 0x0 PRP2 0x0 00:24:22.391 [2024-12-11 10:02:25.304074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.391 [2024-12-11 10:02:25.304080] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:22.392 [2024-12-11 10:02:25.304085] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:22.392 [2024-12-11 10:02:25.304090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77248 len:8 PRP1 0x0 PRP2 0x0 00:24:22.392 [2024-12-11 10:02:25.304096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.392 [2024-12-11 10:02:25.304103] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:22.392 [2024-12-11 10:02:25.304108] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:22.392 [2024-12-11 10:02:25.304113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77256 len:8 PRP1 0x0 PRP2 0x0 00:24:22.392 [2024-12-11 10:02:25.304119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.392 [2024-12-11 10:02:25.304126] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:22.392 [2024-12-11 10:02:25.304132] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:22.392 [2024-12-11 10:02:25.304137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77264 len:8 PRP1 0x0 PRP2 0x0 00:24:22.392 [2024-12-11 10:02:25.304143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.392 [2024-12-11 10:02:25.304150] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:22.392 [2024-12-11 10:02:25.304156] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:22.392 [2024-12-11 10:02:25.304162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77272 len:8 PRP1 0x0 PRP2 0x0 00:24:22.392 [2024-12-11 10:02:25.304168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.392 [2024-12-11 10:02:25.304174] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:22.392 [2024-12-11 10:02:25.304180] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:22.392 [2024-12-11 10:02:25.304185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76368 len:8 PRP1 0x0 PRP2 0x0 00:24:22.392 [2024-12-11 10:02:25.304191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.392 [2024-12-11 10:02:25.304197] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:22.392 [2024-12-11 10:02:25.304202] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:22.392 [2024-12-11 10:02:25.304207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76376 len:8 PRP1 0x0 PRP2 0x0 00:24:22.392 [2024-12-11 10:02:25.304213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.392 [2024-12-11 10:02:25.304226] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued 
i/o 00:24:22.392 [2024-12-11 10:02:25.304231] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:22.392 [2024-12-11 10:02:25.304236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76384 len:8 PRP1 0x0 PRP2 0x0 00:24:22.392 [2024-12-11 10:02:25.304244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.392 [2024-12-11 10:02:25.304250] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:22.392 [2024-12-11 10:02:25.304255] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:22.392 [2024-12-11 10:02:25.304260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76392 len:8 PRP1 0x0 PRP2 0x0 00:24:22.392 [2024-12-11 10:02:25.304266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.392 [2024-12-11 10:02:25.304273] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:22.392 [2024-12-11 10:02:25.304277] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:22.392 [2024-12-11 10:02:25.304283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76400 len:8 PRP1 0x0 PRP2 0x0 00:24:22.392 [2024-12-11 10:02:25.304289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.392 [2024-12-11 10:02:25.304295] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:22.392 [2024-12-11 10:02:25.304300] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:22.392 [2024-12-11 10:02:25.304305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76408 len:8 PRP1 0x0 PRP2 0x0 00:24:22.392 [2024-12-11 10:02:25.304312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.392 [2024-12-11 10:02:25.304318] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:22.392 [2024-12-11 10:02:25.304324] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:22.392 [2024-12-11 10:02:25.304329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76416 len:8 PRP1 0x0 PRP2 0x0 00:24:22.392 [2024-12-11 10:02:25.315485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.392 [2024-12-11 10:02:25.315533] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:24:22.392 [2024-12-11 10:02:25.315556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:22.392 [2024-12-11 10:02:25.315563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.392 [2024-12-11 10:02:25.315571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:22.392 [2024-12-11 10:02:25.315577] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.392 [2024-12-11 10:02:25.315584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:22.392 [2024-12-11 10:02:25.315591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.392 [2024-12-11 10:02:25.315598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:22.392 [2024-12-11 10:02:25.315604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.392 [2024-12-11 10:02:25.315611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:24:22.392 [2024-12-11 10:02:25.315641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ad1930 (9): Bad file descriptor 00:24:22.392 [2024-12-11 10:02:25.319329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:24:22.392 [2024-12-11 10:02:25.349813] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:24:22.392 11257.30 IOPS, 43.97 MiB/s [2024-12-11T09:02:31.967Z] 11287.91 IOPS, 44.09 MiB/s [2024-12-11T09:02:31.967Z] 11288.42 IOPS, 44.10 MiB/s [2024-12-11T09:02:31.967Z] 11294.31 IOPS, 44.12 MiB/s [2024-12-11T09:02:31.967Z] 11315.07 IOPS, 44.20 MiB/s 00:24:22.392 Latency(us) 00:24:22.392 [2024-12-11T09:02:31.967Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:22.392 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:22.392 Verification LBA range: start 0x0 length 0x4000 00:24:22.392 NVMe0n1 : 15.00 11324.01 44.23 409.84 0.00 10886.84 407.65 23592.96 00:24:22.392 [2024-12-11T09:02:31.967Z] =================================================================================================================== 00:24:22.392 [2024-12-11T09:02:31.967Z] Total : 11324.01 44.23 409.84 0.00 10886.84 407.65 23592.96 00:24:22.392 Received shutdown signal, test time was about 15.000000 seconds 00:24:22.392 00:24:22.392 Latency(us) 00:24:22.392 [2024-12-11T09:02:31.967Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:22.392 [2024-12-11T09:02:31.967Z] =================================================================================================================== 00:24:22.392 [2024-12-11T09:02:31.967Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:22.392 10:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:24:22.392 10:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:24:22.392 10:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:24:22.392 10:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=185455 00:24:22.392 10:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:24:22.392 10:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 185455 /var/tmp/bdevperf.sock 00:24:22.392 10:02:31 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 185455 ']' 00:24:22.392 10:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:22.392 10:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:22.392 10:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:22.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:22.392 10:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:22.392 10:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:22.392 10:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:22.392 10:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:24:22.392 10:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:22.393 [2024-12-11 10:02:31.846117] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:22.393 10:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:22.651 [2024-12-11 10:02:32.066748] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:22.651 10:02:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:22.910 NVMe0n1 00:24:23.169 10:02:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:23.169 00:24:23.428 10:02:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:23.686 00:24:23.686 10:02:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:23.686 10:02:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:24:23.945 10:02:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:24.204 10:02:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:24:27.490 10:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 
00:24:27.490 10:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:24:27.490 10:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:27.490 10:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=186371 00:24:27.490 10:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 186371 00:24:28.427 { 00:24:28.427 "results": [ 00:24:28.427 { 00:24:28.427 "job": "NVMe0n1", 00:24:28.427 "core_mask": "0x1", 00:24:28.427 "workload": "verify", 00:24:28.427 "status": "finished", 00:24:28.427 "verify_range": { 00:24:28.427 "start": 0, 00:24:28.427 "length": 16384 00:24:28.427 }, 00:24:28.427 "queue_depth": 128, 00:24:28.427 "io_size": 4096, 00:24:28.427 "runtime": 1.007463, 00:24:28.427 "iops": 11281.803897512862, 00:24:28.427 "mibps": 44.06954647465962, 00:24:28.427 "io_failed": 0, 00:24:28.427 "io_timeout": 0, 00:24:28.427 "avg_latency_us": 11303.400442757431, 00:24:28.427 "min_latency_us": 2402.9866666666667, 00:24:28.427 "max_latency_us": 9299.870476190476 00:24:28.427 } 00:24:28.427 ], 00:24:28.427 "core_count": 1 00:24:28.427 } 00:24:28.427 10:02:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:28.427 [2024-12-11 10:02:31.465054] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:24:28.427 [2024-12-11 10:02:31.465104] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid185455 ] 00:24:28.427 [2024-12-11 10:02:31.545920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:28.427 [2024-12-11 10:02:31.582419] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:28.427 [2024-12-11 10:02:33.564580] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:28.427 [2024-12-11 10:02:33.564625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:28.427 [2024-12-11 10:02:33.564637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.427 [2024-12-11 10:02:33.564645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:28.427 [2024-12-11 10:02:33.564652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.427 [2024-12-11 10:02:33.564660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:28.427 [2024-12-11 10:02:33.564667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.427 [2024-12-11 10:02:33.564674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:28.427 [2024-12-11 10:02:33.564680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.427 [2024-12-11 10:02:33.564687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:24:28.427 [2024-12-11 10:02:33.564713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:24:28.427 [2024-12-11 10:02:33.564727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1441930 (9): Bad file descriptor 00:24:28.427 [2024-12-11 10:02:33.614282] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:24:28.427 Running I/O for 1 seconds... 00:24:28.427 11238.00 IOPS, 43.90 MiB/s 00:24:28.427 Latency(us) 00:24:28.427 [2024-12-11T09:02:38.002Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:28.427 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:28.427 Verification LBA range: start 0x0 length 0x4000 00:24:28.427 NVMe0n1 : 1.01 11281.80 44.07 0.00 0.00 11303.40 2402.99 9299.87 00:24:28.427 [2024-12-11T09:02:38.002Z] =================================================================================================================== 00:24:28.427 [2024-12-11T09:02:38.002Z] Total : 11281.80 44.07 0.00 0.00 11303.40 2402.99 9299.87 00:24:28.427 10:02:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:28.427 10:02:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:24:28.686 10:02:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:28.945 10:02:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:24:28.945 10:02:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:28.945 10:02:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:29.204 10:02:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:24:32.491 10:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:32.491 10:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:24:32.491 10:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 185455 00:24:32.491 10:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 185455 ']' 00:24:32.491 10:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 185455 00:24:32.491 10:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:24:32.491 10:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:32.491 10:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
185455 00:24:32.491 10:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:32.491 10:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:32.491 10:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 185455' 00:24:32.491 killing process with pid 185455 00:24:32.491 10:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 185455 00:24:32.491 10:02:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 185455 00:24:32.751 10:02:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:24:32.751 10:02:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:32.751 10:02:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:32.751 10:02:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:32.751 10:02:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:24:32.751 10:02:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:32.751 10:02:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:24:32.751 10:02:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:32.751 10:02:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:24:32.751 10:02:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:32.751 10:02:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:33.010 rmmod nvme_tcp 00:24:33.010 rmmod nvme_fabrics 00:24:33.010 rmmod nvme_keyring 00:24:33.010 10:02:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:33.010 10:02:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:24:33.010 10:02:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:24:33.010 10:02:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 182477 ']' 00:24:33.010 10:02:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 182477 00:24:33.010 10:02:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 182477 ']' 00:24:33.010 10:02:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 182477 00:24:33.010 10:02:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:24:33.010 10:02:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:33.010 10:02:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 182477 00:24:33.010 10:02:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:33.010 10:02:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:33.010 10:02:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 182477' 00:24:33.010 killing process with pid 182477 00:24:33.010 10:02:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 182477 00:24:33.010 10:02:42 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 182477 00:24:33.269 10:02:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:33.269 10:02:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:33.269 10:02:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:33.269 10:02:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:24:33.269 10:02:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:24:33.269 10:02:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:33.269 10:02:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:24:33.269 10:02:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:33.269 10:02:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:33.269 10:02:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:33.269 10:02:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:33.269 10:02:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:35.175 10:02:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:35.175 00:24:35.175 real 0m38.931s 00:24:35.175 user 2m0.967s 00:24:35.175 sys 0m8.692s 00:24:35.175 10:02:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:35.175 10:02:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:35.175 ************************************ 00:24:35.175 END TEST nvmf_failover 00:24:35.175 ************************************ 00:24:35.175 10:02:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:35.175 10:02:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:35.175 10:02:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:35.175 10:02:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.435 ************************************ 00:24:35.435 START TEST nvmf_host_discovery 00:24:35.435 ************************************ 00:24:35.435 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:35.435 * Looking for test storage... 
00:24:35.435 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:35.435 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:35.435 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:24:35.435 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:35.435 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:35.435 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:35.435 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:35.435 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:35.435 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:24:35.435 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:24:35.435 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:24:35.435 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:24:35.435 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:24:35.435 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:24:35.435 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:24:35.435 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:35.435 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:24:35.435 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:24:35.435 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:35.435 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:35.435 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:24:35.435 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:24:35.435 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:35.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.436 --rc genhtml_branch_coverage=1 00:24:35.436 --rc genhtml_function_coverage=1 00:24:35.436 --rc genhtml_legend=1 00:24:35.436 --rc geninfo_all_blocks=1 00:24:35.436 --rc geninfo_unexecuted_blocks=1 00:24:35.436 00:24:35.436 ' 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:35.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.436 --rc genhtml_branch_coverage=1 00:24:35.436 --rc genhtml_function_coverage=1 00:24:35.436 --rc genhtml_legend=1 00:24:35.436 --rc geninfo_all_blocks=1 00:24:35.436 --rc geninfo_unexecuted_blocks=1 00:24:35.436 00:24:35.436 ' 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:35.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.436 --rc genhtml_branch_coverage=1 00:24:35.436 --rc genhtml_function_coverage=1 00:24:35.436 --rc genhtml_legend=1 00:24:35.436 --rc geninfo_all_blocks=1 00:24:35.436 --rc geninfo_unexecuted_blocks=1 00:24:35.436 00:24:35.436 ' 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:35.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.436 --rc genhtml_branch_coverage=1 00:24:35.436 --rc genhtml_function_coverage=1 00:24:35.436 --rc genhtml_legend=1 00:24:35.436 --rc geninfo_all_blocks=1 00:24:35.436 --rc geninfo_unexecuted_blocks=1 00:24:35.436 00:24:35.436 ' 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:24:35.436 10:02:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:35.436 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:35.436 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:35.437 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:35.437 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:24:35.437 10:02:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:42.008 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:42.008 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:24:42.008 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:42.008 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:42.008 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:42.008 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:42.008 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:42.008 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:24:42.008 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:42.008 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:42.009 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:42.009 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:42.009 10:02:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:42.009 Found net devices under 0000:af:00.0: cvl_0_0 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:42.009 Found net devices under 0000:af:00.1: cvl_0_1 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:42.009 
10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:42.009 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:42.269 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:42.269 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:42.269 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:42.269 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:42.269 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:42.269 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:24:42.269 00:24:42.269 --- 10.0.0.2 ping statistics --- 00:24:42.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:42.269 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:24:42.269 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:42.269 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:42.269 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:24:42.269 00:24:42.269 --- 10.0.0.1 ping statistics --- 00:24:42.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:42.269 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:24:42.269 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:42.269 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:24:42.269 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:42.269 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:42.269 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:42.269 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:42.269 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:42.269 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:42.269 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:42.269 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:24:42.269 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:42.269 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:42.269 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:42.269 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=191077 00:24:42.269 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 191077 00:24:42.269 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:42.269 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 191077 ']' 00:24:42.269 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:42.269 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:42.269 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:42.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:42.269 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:42.269 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:42.269 [2024-12-11 10:02:51.748758] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
00:24:42.269 [2024-12-11 10:02:51.748811] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:42.269 [2024-12-11 10:02:51.830265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.529 [2024-12-11 10:02:51.871113] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:42.529 [2024-12-11 10:02:51.871148] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:42.529 [2024-12-11 10:02:51.871154] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:42.529 [2024-12-11 10:02:51.871163] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:42.529 [2024-12-11 10:02:51.871168] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:42.529 [2024-12-11 10:02:51.871699] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:24:42.529 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:42.529 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:24:42.529 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:42.529 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:42.529 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:42.529 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:42.529 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:42.529 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.529 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:42.529 [2024-12-11 10:02:52.007288] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:42.529 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.529 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:24:42.529 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.529 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:42.529 [2024-12-11 10:02:52.019468] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:42.529 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.529 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:24:42.529 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.529 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:42.529 null0 00:24:42.529 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.529 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:24:42.529 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.529 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:42.529 null1 00:24:42.529 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.529 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:24:42.529 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.529 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:42.529 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.529 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=191269 00:24:42.529 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:24:42.529 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 191269 /tmp/host.sock 00:24:42.529 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 191269 ']' 00:24:42.529 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:24:42.529 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:42.529 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:42.529 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:42.529 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:42.529 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:42.529 [2024-12-11 10:02:52.096926] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
00:24:42.529 [2024-12-11 10:02:52.096968] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid191269 ]
00:24:42.789 [2024-12-11 10:02:52.176910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:42.789 [2024-12-11 10:02:52.217356] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:24:42.789 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:42.789 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0
00:24:42.789 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:24:42.789 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
00:24:42.789 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:42.789 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:42.789 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:42.789 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
00:24:42.789 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:42.789 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:42.789 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:42.789 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0
00:24:42.789 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names
00:24:42.789 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:24:42.789 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:24:42.789 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:42.789 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:24:42.789 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:42.789 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:24:42.789 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:43.048 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]]
00:24:43.048 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list
00:24:43.048 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:43.048 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:24:43.048 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:43.048 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:24:43.048 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:43.048 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:24:43.048 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:43.048 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]]
00:24:43.048 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
00:24:43.048 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:43.048 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:43.048 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:43.048 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names
00:24:43.048 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:24:43.048 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:24:43.048 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:43.048 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:24:43.048 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:43.048 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:24:43.048 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:43.048 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]]
00:24:43.048 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list
00:24:43.048 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:43.048 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:24:43.048 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:43.048 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:24:43.048 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:43.048 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:24:43.048 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:43.048 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]]
00:24:43.048 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
00:24:43.048 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:43.048 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:43.049 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:43.049 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names
00:24:43.049 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:24:43.049 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:24:43.049 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:43.049 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:24:43.049 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:43.049 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:24:43.049 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:43.049 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]]
00:24:43.049 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list
00:24:43.049 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:24:43.049 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:43.049 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:43.049 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:24:43.049 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:43.049 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:24:43.049 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:43.308 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]]
00:24:43.308 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:24:43.308 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:43.308 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:43.308 [2024-12-11 10:02:52.641023] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:43.308 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:43.308 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names
00:24:43.308 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:24:43.308 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:24:43.308 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:43.308 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:24:43.308 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:24:43.308 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:43.308 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:43.308 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]]
00:24:43.308 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list
00:24:43.308 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:24:43.308 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:43.308 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:43.308 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:24:43.308 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:43.308 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:24:43.308 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:43.308 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]]
00:24:43.308 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0
00:24:43.308 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:24:43.308 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:24:43.308 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:24:43.308 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:24:43.308 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:24:43.308 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:24:43.308 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:24:43.308 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:24:43.308 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:24:43.308 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:43.308 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:43.308 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:43.308 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:24:43.308 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0
00:24:43.308 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:24:43.308 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:24:43.309 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
00:24:43.309 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:43.309 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:43.309 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:43.309 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:24:43.309 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:24:43.309 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:24:43.309 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:24:43.309 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:24:43.309 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:24:43.309 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:24:43.309 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:24:43.309 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:43.309 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:24:43.309 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:43.309 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:24:43.309 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:43.309 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]]
00:24:43.309 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1
00:24:43.876 [2024-12-11 10:02:53.375681] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:24:43.876 [2024-12-11 10:02:53.375703] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:24:43.876 [2024-12-11 10:02:53.375716] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:24:44.135 [2024-12-11 10:02:53.502088] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0
00:24:44.135 [2024-12-11 10:02:53.679965] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420
00:24:44.135 [2024-12-11 10:02:53.680712] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1b4b2e0:1 started.
00:24:44.135 [2024-12-11 10:02:53.682061] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:24:44.135 [2024-12-11 10:02:53.682075] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:24:44.135 [2024-12-11 10:02:53.684361] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1b4b2e0 was disconnected and freed. delete nvme_qpair.
00:24:44.394 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:24:44.394 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:24:44.394 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:24:44.394 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:24:44.394 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:24:44.395 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:44.395 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:24:44.395 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:44.395 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:24:44.395 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:44.395 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:44.395 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:24:44.395 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:24:44.395 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:24:44.395 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:24:44.395 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:24:44.395 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]'
00:24:44.395 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:24:44.395 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:44.395 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:24:44.395 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:44.395 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:24:44.395 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:44.395 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:24:44.395 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:44.395 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]]
00:24:44.395 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:24:44.395 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:24:44.395 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:24:44.395 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:24:44.395 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:24:44.395 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]'
00:24:44.395 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:24:44.395 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:24:44.395 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:24:44.395 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:24:44.395 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:44.395 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:24:44.395 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:44.395 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:44.653 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]]
00:24:44.653 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:24:44.653 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1
00:24:44.653 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:24:44.653 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:24:44.653 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:24:44.654 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:24:44.654 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:24:44.654 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:24:44.654 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:24:44.654 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:24:44.654 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:44.654 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:44.654 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:44.654 [2024-12-11 10:02:54.042397] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1b4b660:1 started.
00:24:44.654 [2024-12-11 10:02:54.045015] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1b4b660 was disconnected and freed. delete nvme_qpair.
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:44.654 [2024-12-11 10:02:54.141050] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:24:44.654 [2024-12-11 10:02:54.141650] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
00:24:44.654 [2024-12-11 10:02:54.141670] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:44.654 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:24:44.912 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:44.912 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:24:44.912 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:24:44.912 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:24:44.912 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:24:44.912 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:24:44.912 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:24:44.912 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:24:44.912 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:24:44.912 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:24:44.912 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:44.912 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:44.912 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:24:44.912 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:24:44.912 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:24:44.912 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:44.912 [2024-12-11 10:02:54.269035] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0
00:24:44.912 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]]
00:24:44.912 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1
00:24:44.912 [2024-12-11 10:02:54.330627] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421
00:24:44.912 [2024-12-11 10:02:54.330661] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:24:44.912 [2024-12-11 10:02:54.330669] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:24:44.912 [2024-12-11 10:02:54.330673] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:24:45.850 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:24:45.850 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:24:45.850 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:24:45.850 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:24:45.850 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:24:45.850 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:45.850 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:24:45.850 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:45.850 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:24:45.850 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:45.850 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]]
00:24:45.850 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:24:45.850 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0
00:24:45.850 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:24:45.850 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:24:45.850 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:24:45.850 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:24:45.850 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:24:45.850 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:24:45.850 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:24:45.850 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:24:45.850 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:24:45.850 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:45.850 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:45.850 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:45.850 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:24:45.850 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:24:45.850 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:24:45.850 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:24:45.850 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:24:45.850 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:45.850 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:45.850 [2024-12-11 10:02:55.400598] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
00:24:45.850 [2024-12-11 10:02:55.400619] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:24:45.850 [2024-12-11 10:02:55.404341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:45.850 [2024-12-11 10:02:55.404357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.850 [2024-12-11 10:02:55.404365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:45.850 [2024-12-11 10:02:55.404372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.850 [2024-12-11 10:02:55.404379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:45.850 [2024-12-11 10:02:55.404385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.850 [2024-12-11 10:02:55.404392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:45.850 [2024-12-11 10:02:55.404399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.850 [2024-12-11 10:02:55.404405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1b790 is same with the state(6) to be set
00:24:45.850 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:45.850 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:24:45.850 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:24:45.850 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:24:45.850 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:24:45.850 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:24:45.850 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:24:45.850 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:24:45.850 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:24:45.850 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:45.850 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:24:45.850 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:45.850 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:24:46.111 [2024-12-11 10:02:55.414355] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1b790 (9): Bad file descriptor
00:24:46.111 [2024-12-11 10:02:55.424391] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:24:46.111 [2024-12-11 10:02:55.424405] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:24:46.111 [2024-12-11 10:02:55.424412] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:24:46.111 [2024-12-11 10:02:55.424417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:24:46.111 [2024-12-11 10:02:55.424432] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:24:46.111 [2024-12-11 10:02:55.424673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:46.111 [2024-12-11 10:02:55.424687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1b790 with addr=10.0.0.2, port=4420
00:24:46.111 [2024-12-11 10:02:55.424695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1b790 is same with the state(6) to be set
00:24:46.111 [2024-12-11 10:02:55.424707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1b790 (9): Bad file descriptor
00:24:46.111 [2024-12-11 10:02:55.424728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:24:46.111 [2024-12-11 10:02:55.424736] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:24:46.111 [2024-12-11 10:02:55.424743] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:24:46.111 [2024-12-11 10:02:55.424749] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:24:46.111 [2024-12-11 10:02:55.424754] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:24:46.111 [2024-12-11 10:02:55.424759] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:24:46.111 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:46.111 [2024-12-11 10:02:55.434462] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:24:46.111 [2024-12-11 10:02:55.434473] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:24:46.111 [2024-12-11 10:02:55.434477] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:24:46.111 [2024-12-11 10:02:55.434481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:24:46.111 [2024-12-11 10:02:55.434494] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:24:46.111 [2024-12-11 10:02:55.434665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:46.111 [2024-12-11 10:02:55.434675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1b790 with addr=10.0.0.2, port=4420
00:24:46.111 [2024-12-11 10:02:55.434683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1b790 is same with the state(6) to be set
00:24:46.111 [2024-12-11 10:02:55.434693] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1b790 (9): Bad file descriptor
00:24:46.111 [2024-12-11 10:02:55.434703] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:24:46.111 [2024-12-11 10:02:55.434709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:24:46.111 [2024-12-11 10:02:55.434715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:24:46.111 [2024-12-11 10:02:55.434721] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:24:46.111 [2024-12-11 10:02:55.434725] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:24:46.111 [2024-12-11 10:02:55.434729] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:24:46.111 [2024-12-11 10:02:55.444525] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:24:46.111 [2024-12-11 10:02:55.444536] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:24:46.111 [2024-12-11 10:02:55.444540] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:24:46.111 [2024-12-11 10:02:55.444544] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:24:46.111 [2024-12-11 10:02:55.444558] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:24:46.111 [2024-12-11 10:02:55.444781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:46.111 [2024-12-11 10:02:55.444793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1b790 with addr=10.0.0.2, port=4420
00:24:46.111 [2024-12-11 10:02:55.444801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1b790 is same with the state(6) to be set
00:24:46.111 [2024-12-11 10:02:55.444812] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1b790 (9): Bad file descriptor
00:24:46.111 [2024-12-11 10:02:55.444858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:24:46.111 [2024-12-11 10:02:55.444866] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:24:46.111 [2024-12-11 10:02:55.444873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:24:46.111 [2024-12-11 10:02:55.444879] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:24:46.111 [2024-12-11 10:02:55.444884] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:24:46.111 [2024-12-11 10:02:55.444888] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:24:46.111 [2024-12-11 10:02:55.454589] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:24:46.111 [2024-12-11 10:02:55.454600] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:24:46.111 [2024-12-11 10:02:55.454604] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:24:46.111 [2024-12-11 10:02:55.454608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:24:46.111 [2024-12-11 10:02:55.454620] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:24:46.111 [2024-12-11 10:02:55.454863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:46.111 [2024-12-11 10:02:55.454875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1b790 with addr=10.0.0.2, port=4420
00:24:46.111 [2024-12-11 10:02:55.454883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1b790 is same with the state(6) to be set
00:24:46.111 [2024-12-11 10:02:55.454893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1b790 (9): Bad file descriptor
00:24:46.111 [2024-12-11 10:02:55.454908] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:24:46.111 [2024-12-11 10:02:55.454915] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:24:46.111 [2024-12-11 10:02:55.454922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:24:46.111 [2024-12-11 10:02:55.454928] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:24:46.111 [2024-12-11 10:02:55.454932] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:24:46.111 [2024-12-11 10:02:55.454939] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:24:46.111 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:46.111 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:24:46.111 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:24:46.111 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:24:46.111 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:24:46.111 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:24:46.112 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:24:46.112 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:24:46.112 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:46.112 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:24:46.112 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:24:46.112 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:46.112 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:24:46.112 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:46.112 [2024-12-11 10:02:55.464650] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:24:46.112 [2024-12-11 10:02:55.464662] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:24:46.112 [2024-12-11 10:02:55.464666] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:24:46.112 [2024-12-11 10:02:55.464669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:24:46.112 [2024-12-11 10:02:55.464682] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:24:46.112 [2024-12-11 10:02:55.464839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:46.112 [2024-12-11 10:02:55.464849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1b790 with addr=10.0.0.2, port=4420
00:24:46.112 [2024-12-11 10:02:55.464856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1b790 is same with the state(6) to be set
00:24:46.112 [2024-12-11 10:02:55.464866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1b790 (9): Bad file descriptor
00:24:46.112 [2024-12-11 10:02:55.465498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:24:46.112 [2024-12-11 10:02:55.465509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:24:46.112 [2024-12-11 10:02:55.465516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:24:46.112 [2024-12-11 10:02:55.465522] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:24:46.112 [2024-12-11 10:02:55.465526] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:24:46.112 [2024-12-11 10:02:55.465530] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:24:46.112 [2024-12-11 10:02:55.474713] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:24:46.112 [2024-12-11 10:02:55.474724] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:24:46.112 [2024-12-11 10:02:55.474732] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:24:46.112 [2024-12-11 10:02:55.474736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:24:46.112 [2024-12-11 10:02:55.474748] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:24:46.112 [2024-12-11 10:02:55.474994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:46.112 [2024-12-11 10:02:55.475006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1b790 with addr=10.0.0.2, port=4420
00:24:46.112 [2024-12-11 10:02:55.475014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1b790 is same with the state(6) to be set
00:24:46.112 [2024-12-11 10:02:55.475024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1b790 (9): Bad file descriptor
00:24:46.112 [2024-12-11 10:02:55.475033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:24:46.112 [2024-12-11 10:02:55.475040] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:24:46.112 [2024-12-11 10:02:55.475046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:24:46.112 [2024-12-11 10:02:55.475052] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:24:46.112 [2024-12-11 10:02:55.475056] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:24:46.112 [2024-12-11 10:02:55.475060] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:24:46.112 [2024-12-11 10:02:55.484779] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:24:46.112 [2024-12-11 10:02:55.484789] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:24:46.112 [2024-12-11 10:02:55.484793] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:24:46.112 [2024-12-11 10:02:55.484797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:24:46.112 [2024-12-11 10:02:55.484809] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:24:46.112 [2024-12-11 10:02:55.484986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:46.112 [2024-12-11 10:02:55.484997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1b790 with addr=10.0.0.2, port=4420
00:24:46.112 [2024-12-11 10:02:55.485004] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1b790 is same with the state(6) to be set
00:24:46.112 [2024-12-11 10:02:55.485014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1b790 (9): Bad file descriptor
00:24:46.112 [2024-12-11 10:02:55.485029] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:24:46.112 [2024-12-11 10:02:55.485036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:24:46.112 [2024-12-11 10:02:55.485042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:24:46.112 [2024-12-11 10:02:55.485047] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:24:46.112 [2024-12-11 10:02:55.485051] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:24:46.112 [2024-12-11 10:02:55.485055] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:24:46.112 [2024-12-11 10:02:55.486579] bdev_nvme.c:7303:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found
00:24:46.112 [2024-12-11 10:02:55.486599] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:24:46.112 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:46.112 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:24:46.112 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:24:46.112 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
00:24:46.112 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
00:24:46.112 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:24:46.112 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:24:46.112 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]'
00:24:46.112 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:24:46.112 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:24:46.112 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:46.112 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:46.112 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:24:46.112 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:24:46.112 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:24:46.112 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:46.112 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]]
00:24:46.112 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:24:46.112 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0
00:24:46.112 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:24:46.112 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:24:46.112 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:24:46.112 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:24:46.112 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:24:46.112 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:24:46.112 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:24:46.112 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:24:46.112 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:46.112 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:46.112 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:24:46.112 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:46.112 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:24:46.112 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:24:46.113 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:24:46.113 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:24:46.113 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme
00:24:46.113 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:46.113 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:46.113 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:46.113 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]'
00:24:46.113 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]'
00:24:46.113 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:24:46.113 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:24:46.113 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]'
00:24:46.113 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:24:46.113 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:24:46.113 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:24:46.113 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:46.113 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:24:46.113 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:46.113 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:24:46.113 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:46.113 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]]
00:24:46.113 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:24:46.113 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]'
00:24:46.113 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]'
00:24:46.113 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:24:46.113 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:24:46.113 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]'
00:24:46.113 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:24:46.113 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:46.113 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:24:46.113 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:46.113 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:24:46.113 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:46.113 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:24:46.372 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:46.372 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]]
00:24:46.372 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:24:46.372 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2
00:24:46.372 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2
00:24:46.372 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:24:46.372 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:24:46.372 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:24:46.372 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:24:46.372 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:24:46.372 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:24:46.372 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:24:46.372 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:46.372 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery --
host/discovery.sh@74 -- # jq '. | length' 00:24:46.372 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.372 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:24:46.372 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:24:46.372 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:46.372 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:46.372 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:46.372 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.372 10:02:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:47.309 [2024-12-11 10:02:56.824361] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:47.309 [2024-12-11 10:02:56.824378] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:47.309 [2024-12-11 10:02:56.824388] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:47.568 [2024-12-11 10:02:56.910647] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:24:47.828 [2024-12-11 10:02:57.211924] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:24:47.828 [2024-12-11 10:02:57.212427] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1b57380:1 started. 
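At host/discovery.sh@141 the script re-issues bdev_nvme_start_discovery with -w (wait_for_attach), so the RPC does not return until the discovery controller at 10.0.0.2:8009 is attached and its log page has been fetched; the discovery attach/connect INFO lines around this point are that wait playing out. Stripped of the harness wrappers, the call is simply (sketch; rpc.py is the standalone equivalent of the script's rpc_cmd helper, and the flags are copied from the trace):

  rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 \
      -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w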
00:24:47.828 [2024-12-11 10:02:57.213910] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:47.828 [2024-12-11 10:02:57.213935] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:47.828 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.828 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:47.828 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:24:47.828 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:47.828 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:47.828 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:47.828 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:47.828 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:47.828 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:47.828 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.828 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:47.828 [2024-12-11 10:02:57.223459] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1b57380 was disconnected and freed. delete nvme_qpair. 
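The NOT wrapper at host/discovery.sh@143 inverts the RPC's exit status: starting a second discovery service under the already-used name nvme is supposed to fail, and the test passes only because it does. The JSON-RPC exchange below records that expected rejection (code -17, "File exists"). A minimal sketch of the wrapper's idea, assuming a simplified form of what autotest_common.sh actually implements:

  # invert a command's status: succeed only if the command fails (sketch)
  NOT() {
      if "$@"; then
          return 1    # command unexpectedly succeeded
      fi
      return 0        # command failed, as the test requires
  }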
00:24:47.828 request: 00:24:47.828 { 00:24:47.828 "name": "nvme", 00:24:47.828 "trtype": "tcp", 00:24:47.828 "traddr": "10.0.0.2", 00:24:47.828 "adrfam": "ipv4", 00:24:47.828 "trsvcid": "8009", 00:24:47.828 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:47.828 "wait_for_attach": true, 00:24:47.828 "method": "bdev_nvme_start_discovery", 00:24:47.828 "req_id": 1 00:24:47.828 } 00:24:47.828 Got JSON-RPC error response 00:24:47.828 response: 00:24:47.828 { 00:24:47.828 "code": -17, 00:24:47.828 "message": "File exists" 00:24:47.828 } 00:24:47.828 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:47.828 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:24:47.828 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:47.828 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:47.828 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:47.828 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:24:47.828 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:47.828 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:47.828 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.828 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:47.828 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:47.828 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:47.828 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.828 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:24:47.828 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:24:47.828 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:47.828 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:47.828 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.828 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:47.828 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:47.828 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:47.828 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.828 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:47.828 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:47.828 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:24:47.828 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg 
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:47.828 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:47.828 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:47.828 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:47.828 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:47.828 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:47.828 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.828 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:47.828 request: 00:24:47.828 { 00:24:47.829 "name": "nvme_second", 00:24:47.829 "trtype": "tcp", 00:24:47.829 "traddr": "10.0.0.2", 00:24:47.829 "adrfam": "ipv4", 00:24:47.829 "trsvcid": "8009", 00:24:47.829 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:47.829 "wait_for_attach": true, 00:24:47.829 "method": "bdev_nvme_start_discovery", 00:24:47.829 "req_id": 1 00:24:47.829 } 00:24:47.829 Got JSON-RPC error response 00:24:47.829 response: 00:24:47.829 { 00:24:47.829 "code": -17, 00:24:47.829 "message": "File exists" 00:24:47.829 } 00:24:47.829 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:47.829 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:24:47.829 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:47.829 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:47.829 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:47.829 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:24:47.829 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:47.829 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:47.829 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.829 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:47.829 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:47.829 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:47.829 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.829 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:24:47.829 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:24:47.829 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:47.829 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.829 10:02:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:47.829 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:47.829 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:47.829 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:48.088 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.088 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:48.088 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:48.088 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:24:48.088 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:48.088 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:48.088 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:48.088 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:48.088 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:48.088 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:48.088 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.088 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:49.026 [2024-12-11 10:02:58.457377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.026 [2024-12-11 10:02:58.457402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b33620 with addr=10.0.0.2, port=8010 00:24:49.026 [2024-12-11 10:02:58.457417] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:49.026 [2024-12-11 10:02:58.457423] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:49.027 [2024-12-11 10:02:58.457429] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:50.064 [2024-12-11 10:02:59.459800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.064 [2024-12-11 10:02:59.459825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b33620 with addr=10.0.0.2, port=8010 00:24:50.064 [2024-12-11 10:02:59.459838] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:50.065 [2024-12-11 10:02:59.459844] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:50.065 [2024-12-11 10:02:59.459850] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:51.001 [2024-12-11 10:03:00.461978] 
bdev_nvme.c:7559:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:24:51.001 request: 00:24:51.001 { 00:24:51.001 "name": "nvme_second", 00:24:51.001 "trtype": "tcp", 00:24:51.001 "traddr": "10.0.0.2", 00:24:51.001 "adrfam": "ipv4", 00:24:51.001 "trsvcid": "8010", 00:24:51.001 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:51.001 "wait_for_attach": false, 00:24:51.001 "attach_timeout_ms": 3000, 00:24:51.001 "method": "bdev_nvme_start_discovery", 00:24:51.001 "req_id": 1 00:24:51.001 } 00:24:51.001 Got JSON-RPC error response 00:24:51.001 response: 00:24:51.001 { 00:24:51.001 "code": -110, 00:24:51.001 "message": "Connection timed out" 00:24:51.001 } 00:24:51.001 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:51.001 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:24:51.001 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:51.001 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:51.001 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:51.001 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:24:51.001 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:51.001 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:51.001 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.001 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:51.001 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:51.001 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:51.001 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.002 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:24:51.002 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:24:51.002 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 191269 00:24:51.002 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:24:51.002 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:51.002 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:24:51.002 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:51.002 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:24:51.002 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:51.002 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:51.002 rmmod nvme_tcp 00:24:51.002 rmmod nvme_fabrics 00:24:51.002 rmmod nvme_keyring 00:24:51.261 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:51.261 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:24:51.261 10:03:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:24:51.261 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 191077 ']' 00:24:51.261 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 191077 00:24:51.261 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 191077 ']' 00:24:51.261 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 191077 00:24:51.261 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:24:51.261 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:51.261 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 191077 00:24:51.261 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:51.261 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:51.261 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 191077' 00:24:51.261 killing process with pid 191077 00:24:51.261 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 191077 00:24:51.261 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 191077 00:24:51.261 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:51.261 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:51.261 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:51.261 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:24:51.261 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:24:51.261 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:51.261 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:24:51.261 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:51.261 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:51.261 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:51.261 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:51.261 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.798 10:03:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:53.798 00:24:53.798 real 0m18.127s 00:24:53.798 user 0m20.983s 00:24:53.798 sys 0m6.405s 00:24:53.798 10:03:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:53.798 10:03:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.798 ************************************ 00:24:53.798 END TEST nvmf_host_discovery 00:24:53.798 ************************************ 00:24:53.798 10:03:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:53.798 10:03:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:53.798 10:03:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:53.798 10:03:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.798 ************************************ 00:24:53.798 START TEST nvmf_host_multipath_status 00:24:53.798 ************************************ 00:24:53.798 10:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:53.798 * Looking for test storage... 00:24:53.798 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:53.798 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:53.798 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:24:53.798 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:53.798 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:53.798 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:53.798 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:53.798 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:53.798 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:24:53.798 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:24:53.798 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:24:53.798 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:53.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.799 --rc genhtml_branch_coverage=1 00:24:53.799 --rc genhtml_function_coverage=1 00:24:53.799 --rc genhtml_legend=1 00:24:53.799 --rc geninfo_all_blocks=1 00:24:53.799 --rc geninfo_unexecuted_blocks=1 00:24:53.799 00:24:53.799 ' 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:53.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.799 --rc genhtml_branch_coverage=1 00:24:53.799 --rc genhtml_function_coverage=1 00:24:53.799 --rc genhtml_legend=1 00:24:53.799 --rc geninfo_all_blocks=1 00:24:53.799 --rc geninfo_unexecuted_blocks=1 00:24:53.799 00:24:53.799 ' 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:53.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.799 --rc genhtml_branch_coverage=1 00:24:53.799 --rc genhtml_function_coverage=1 00:24:53.799 --rc genhtml_legend=1 00:24:53.799 --rc geninfo_all_blocks=1 00:24:53.799 --rc geninfo_unexecuted_blocks=1 00:24:53.799 00:24:53.799 ' 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:53.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.799 --rc genhtml_branch_coverage=1 00:24:53.799 --rc genhtml_function_coverage=1 00:24:53.799 --rc genhtml_legend=1 00:24:53.799 --rc geninfo_all_blocks=1 00:24:53.799 --rc geninfo_unexecuted_blocks=1 00:24:53.799 00:24:53.799 ' 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
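The scripts/common.sh trace above is the harness checking whether the installed lcov (1.15 here) predates 2.0, via lt 1.15 2, so it can pick the matching --rc option spelling for coverage runs: cmp_versions splits each version string on ., - and :, then compares the numeric fields left to right. The same idea in a few lines of shell (a paraphrase of the logic traced above, not the script's exact body; non-numeric version fields are out of scope for this sketch):

  # return success if version $1 is older than version $2 (sketch)
  version_lt() {
      local IFS=.-:
      local -a a=($1) b=($2)
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1    # equal versions are not "less than"
  }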
00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:53.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:53.799 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:53.800 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:24:53.800 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:53.800 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:53.800 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:24:53.800 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:53.800 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:53.800 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:53.800 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:53.800 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:53.800 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:53.800 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:53.800 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.800 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:53.800 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:53.800 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:24:53.800 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:00.373 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:00.373 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:25:00.373 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:00.373 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:00.373 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:00.373 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:00.373 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:00.373 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:25:00.373 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:00.373 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:25:00.373 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:25:00.373 10:03:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:00.374 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:00.374 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:00.374 Found net devices under 0000:af:00.0: cvl_0_0 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: 
cvl_0_1' 00:25:00.374 Found net devices under 0000:af:00.1: cvl_0_1 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:00.374 10:03:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:00.374 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:00.374 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:25:00.374 00:25:00.374 --- 10.0.0.2 ping statistics --- 00:25:00.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.374 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:25:00.374 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:00.374 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:00.374 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:25:00.374 00:25:00.374 --- 10.0.0.1 ping statistics --- 00:25:00.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.375 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:25:00.375 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:00.375 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:25:00.375 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:00.375 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:00.375 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:00.375 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:00.375 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:00.375 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:00.375 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:00.375 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:00.375 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:00.375 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:00.375 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:00.375 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=197051 00:25:00.375 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:00.375 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 197051 00:25:00.375 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 197051 ']' 00:25:00.375 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:00.375 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:00.375 10:03:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:00.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:00.375 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:00.375 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:00.633 [2024-12-11 10:03:09.983144] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:25:00.633 [2024-12-11 10:03:09.983189] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:00.633 [2024-12-11 10:03:10.069757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:00.634 [2024-12-11 10:03:10.111988] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:00.634 [2024-12-11 10:03:10.112026] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:00.634 [2024-12-11 10:03:10.112033] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:00.634 [2024-12-11 10:03:10.112038] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:00.634 [2024-12-11 10:03:10.112043] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:00.634 [2024-12-11 10:03:10.113229] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:25:00.634 [2024-12-11 10:03:10.113229] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:25:01.571 10:03:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:01.571 10:03:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:25:01.571 10:03:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:01.571 10:03:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:01.571 10:03:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:01.571 10:03:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:01.571 10:03:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=197051 00:25:01.571 10:03:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:01.571 [2024-12-11 10:03:11.012602] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:01.571 10:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:01.830 Malloc0 00:25:01.830 10:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:25:02.088 10:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:02.088 10:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:02.346 [2024-12-11 10:03:11.825954] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:02.346 10:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:02.604 [2024-12-11 10:03:12.042589] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:02.604 10:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=197611 00:25:02.604 10:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:02.604 10:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:02.604 10:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 197611 /var/tmp/bdevperf.sock 00:25:02.604 10:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 197611 ']' 00:25:02.604 10:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:02.604 10:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:02.604 10:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:02.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
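The trace above brings up the whole test bed: one of the two discovered ports (cvl_0_0) is moved into a private network namespace to act as the target at 10.0.0.2, the peer port (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1, and nvmf_tgt is started inside the namespace before being configured over RPC. A condensed sketch of the equivalent commands, with rpc.py standing in for the full scripts/rpc.py path used in this run:

    # two physical ports, back to back: target side isolated in a netns
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # target configuration (values taken from this run): one malloc bdev,
    # one subsystem, two TCP listeners that will act as the two ANA paths
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

The host side then runs bdevperf in wait mode (-z, RPC at /var/tmp/bdevperf.sock) and, as traced below, attaches the same subsystem through both portals with bdev_nvme_attach_controller -x multipath, which is what folds the two paths into the single Nvme0n1 bdev.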
00:25:02.604 10:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:02.604 10:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:02.862 10:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:02.862 10:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:25:02.862 10:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:03.119 10:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:03.377 Nvme0n1 00:25:03.377 10:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:03.635 Nvme0n1 00:25:03.635 10:03:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:03.635 10:03:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:06.167 10:03:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:06.167 10:03:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:06.167 10:03:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:06.167 10:03:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:07.104 10:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:07.104 10:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:07.104 10:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:07.104 10:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:07.363 10:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:07.363 10:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:07.363 10:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:07.363 10:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:07.622 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:07.622 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:07.622 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:07.622 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:07.881 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:07.881 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:07.881 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:07.881 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:08.140 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:08.140 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:08.140 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:08.140 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:08.140 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:08.140 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:08.140 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:08.140 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:08.399 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:08.399 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:25:08.399 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
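The repeated port_status calls above are how check_status reads back the multipath view: every assertion is one bdev_nvme_get_io_paths RPC against the bdevperf process, piped through a jq selector on the listener port. A minimal reconstruction of that helper, assuming the output shape implied by the filter (poll_groups[].io_paths[] entries carrying transport.trsvcid plus the current/connected/accessible booleans):

    # port_status <trsvcid> <attribute> <expected>
    port_status() {
        local port=$1 attr=$2 expected=$3
        local actual
        actual=$(rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
        [[ "$actual" == "$expected" ]]
    }

    # with both listeners optimized and the initial policy, exactly one
    # path carries I/O, as the pass above shows:
    port_status 4420 current true
    port_status 4421 current false
    port_status 4420 connected true && port_status 4421 connected true
    port_status 4420 accessible true && port_status 4421 accessible true

The query goes to /var/tmp/bdevperf.sock rather than the target socket because the multipath bdev, and therefore the I/O-path state, lives in the initiator-side bdevperf process.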
00:25:08.658 10:03:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:08.916 10:03:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:09.850 10:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:09.850 10:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:09.850 10:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:09.850 10:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:10.108 10:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:10.108 10:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:10.108 10:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:10.108 10:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:10.366 10:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:10.366 10:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:10.366 10:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:10.366 10:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:10.366 10:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:10.366 10:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:10.366 10:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:10.366 10:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:10.624 10:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:10.624 10:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:10.624 10:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
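This is the failover case: 4420 has just been demoted to non_optimized while 4421 stays optimized, and the checks around it assert that the active path moved (4420 current=false, 4421 current=true) while both paths stay connected and accessible. ANA state is a listener property, so the change is driven through the target's default RPC socket; a sketch of the pair of calls, reusing the hypothetical rpc.py shorthand and port_status helper from above:

    rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
    rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n optimized
    sleep 1   # give the host time to pick up the ANA change before asserting
    port_status 4420 current false && port_status 4421 current true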
00:25:10.624 10:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:10.882 10:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:10.882 10:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:10.882 10:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:10.882 10:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:11.140 10:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:11.140 10:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:11.140 10:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:11.398 10:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:11.656 10:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:25:12.590 10:03:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:25:12.590 10:03:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:12.590 10:03:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:12.590 10:03:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:12.848 10:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:12.848 10:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:12.848 10:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:12.848 10:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:12.848 10:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:12.848 10:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:12.848 10:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:12.848 10:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:13.106 10:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:13.106 10:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:13.106 10:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.106 10:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:13.364 10:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:13.364 10:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:13.364 10:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.364 10:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:13.622 10:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:13.622 10:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:13.622 10:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:13.622 10:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.880 10:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:13.880 10:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:25:13.880 10:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:13.880 10:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:14.138 10:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:25:15.512 10:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:25:15.512 10:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:15.512 10:03:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:15.512 10:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:15.512 10:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:15.512 10:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:15.512 10:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:15.512 10:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:15.512 10:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:15.512 10:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:15.512 10:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:15.512 10:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:15.769 10:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:15.769 10:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:15.769 10:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:15.769 10:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:16.027 10:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:16.027 10:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:16.027 10:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:16.027 10:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:16.285 10:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:16.285 10:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:16.285 10:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:16.285 10:03:25 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:16.543 10:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:16.543 10:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:25:16.543 10:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:16.543 10:03:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:16.816 10:03:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:25:18.188 10:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:25:18.188 10:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:18.188 10:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.188 10:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:18.188 10:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:18.188 10:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:18.188 10:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.188 10:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:18.188 10:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:18.189 10:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:18.189 10:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.189 10:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:18.447 10:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:18.447 10:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:18.447 10:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.447 10:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:18.705 10:03:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:18.705 10:03:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:18.705 10:03:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.705 10:03:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:18.962 10:03:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:18.962 10:03:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:18.962 10:03:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.962 10:03:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:18.962 10:03:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:19.220 10:03:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:25:19.221 10:03:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:19.221 10:03:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:19.478 10:03:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:25:20.413 10:03:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:25:20.413 10:03:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:20.413 10:03:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:20.413 10:03:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:20.671 10:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:20.671 10:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:20.671 10:03:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:20.671 10:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:20.929 10:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:20.929 10:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:20.929 10:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:20.929 10:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:21.188 10:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:21.188 10:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:21.188 10:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.188 10:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:21.188 10:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:21.188 10:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:21.188 10:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.188 10:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:21.447 10:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:21.447 10:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:21.447 10:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.447 10:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:21.705 10:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:21.705 10:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:25:21.963 10:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:25:21.963 10:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:22.221 10:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:22.480 10:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:25:23.416 10:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:25:23.416 10:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:23.416 10:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.416 10:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:23.675 10:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:23.675 10:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:23.675 10:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.675 10:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:23.933 10:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:23.933 10:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:23.933 10:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.933 10:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:24.192 10:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:24.193 10:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:24.193 10:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:24.193 10:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:24.193 10:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:24.193 10:03:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:24.193 10:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:24.193 10:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:24.451 10:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:24.451 10:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:24.451 10:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:24.451 10:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:24.710 10:03:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:24.710 10:03:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:25:24.710 10:03:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:24.971 10:03:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:24.971 10:03:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:25:26.030 10:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:25:26.030 10:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:26.030 10:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.030 10:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:26.288 10:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:26.288 10:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:26.288 10:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.288 10:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:26.547 10:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:26.547 10:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:26.547 10:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.547 10:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:26.806 10:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:26.807 10:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:26.807 10:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.807 10:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:27.066 10:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:27.066 10:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:27.066 10:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.066 10:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:27.066 10:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:27.066 10:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:27.066 10:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.066 10:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:27.324 10:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:27.324 10:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:25:27.324 10:03:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:27.583 10:03:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:27.842 10:03:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
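From the bdev_nvme_set_multipath_policy call above onward, the same ANA matrix is replayed under active_active. The difference shows up only in the current column: with active_active, every path in the best available ANA group is reported current, so optimized/optimized and the non_optimized/non_optimized case being set up here both yield current=true on 4420 and 4421 (the check_status true true pass that follows), while mixed states still single out the better group. A sketch, reusing the same assumed helpers:

    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active

    # both listeners non_optimized: an equal-best ANA group, so both
    # paths stay current under active_active
    rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
    rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
    sleep 1
    port_status 4420 current true && port_status 4421 current true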
00:25:28.777 10:03:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:25:28.777 10:03:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:28.777 10:03:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:28.777 10:03:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:29.036 10:03:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:29.036 10:03:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:29.036 10:03:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.036 10:03:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:29.294 10:03:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:29.294 10:03:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:29.294 10:03:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.294 10:03:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:29.553 10:03:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:29.553 10:03:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:29.553 10:03:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.553 10:03:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:29.553 10:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:29.553 10:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:29.553 10:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.553 10:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:29.812 10:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:29.812 10:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:29.812 10:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:29.812 10:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.070 10:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:30.070 10:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:25:30.070 10:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:30.329 10:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:30.587 10:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:25:31.520 10:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:25:31.520 10:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:31.520 10:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:31.520 10:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:31.778 10:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:31.778 10:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:31.778 10:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:31.778 10:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:32.036 10:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:32.036 10:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:32.036 10:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.036 10:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:32.036 10:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:25:32.036 10:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:32.036 10:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.036 10:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:32.295 10:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.295 10:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:32.295 10:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.295 10:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:32.553 10:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.553 10:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:32.553 10:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.553 10:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:32.812 10:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:32.812 10:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 197611 00:25:32.812 10:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 197611 ']' 00:25:32.812 10:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 197611 00:25:32.812 10:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:25:32.812 10:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:32.812 10:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 197611 00:25:32.812 10:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:32.812 10:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:32.812 10:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 197611' 00:25:32.812 killing process with pid 197611 00:25:32.812 10:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 197611 00:25:32.812 10:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 197611 00:25:32.812 { 00:25:32.812 "results": [ 00:25:32.812 { 00:25:32.812 "job": "Nvme0n1", 00:25:32.812 
"core_mask": "0x4", 00:25:32.812 "workload": "verify", 00:25:32.812 "status": "terminated", 00:25:32.812 "verify_range": { 00:25:32.812 "start": 0, 00:25:32.812 "length": 16384 00:25:32.812 }, 00:25:32.812 "queue_depth": 128, 00:25:32.812 "io_size": 4096, 00:25:32.812 "runtime": 28.95322, 00:25:32.812 "iops": 10695.563395021349, 00:25:32.812 "mibps": 41.77954451180214, 00:25:32.812 "io_failed": 0, 00:25:32.812 "io_timeout": 0, 00:25:32.812 "avg_latency_us": 11945.41807494313, 00:25:32.812 "min_latency_us": 1263.9085714285713, 00:25:32.812 "max_latency_us": 3083812.083809524 00:25:32.812 } 00:25:32.812 ], 00:25:32.812 "core_count": 1 00:25:32.812 } 00:25:33.098 10:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 197611 00:25:33.098 10:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:33.098 [2024-12-11 10:03:12.114814] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:25:33.098 [2024-12-11 10:03:12.114868] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid197611 ] 00:25:33.098 [2024-12-11 10:03:12.196631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:33.098 [2024-12-11 10:03:12.235452] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:25:33.098 Running I/O for 90 seconds... 00:25:33.098 11719.00 IOPS, 45.78 MiB/s [2024-12-11T09:03:42.673Z] 11681.00 IOPS, 45.63 MiB/s [2024-12-11T09:03:42.673Z] 11712.00 IOPS, 45.75 MiB/s [2024-12-11T09:03:42.673Z] 11684.50 IOPS, 45.64 MiB/s [2024-12-11T09:03:42.673Z] 11669.00 IOPS, 45.58 MiB/s [2024-12-11T09:03:42.673Z] 11653.83 IOPS, 45.52 MiB/s [2024-12-11T09:03:42.673Z] 11653.57 IOPS, 45.52 MiB/s [2024-12-11T09:03:42.673Z] 11646.38 IOPS, 45.49 MiB/s [2024-12-11T09:03:42.673Z] 11641.22 IOPS, 45.47 MiB/s [2024-12-11T09:03:42.673Z] 11620.40 IOPS, 45.39 MiB/s [2024-12-11T09:03:42.673Z] 11619.91 IOPS, 45.39 MiB/s [2024-12-11T09:03:42.673Z] 11607.42 IOPS, 45.34 MiB/s [2024-12-11T09:03:42.673Z] [2024-12-11 10:03:26.099775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:6880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.098 [2024-12-11 10:03:26.099813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:33.098 [2024-12-11 10:03:26.099835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:6888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.098 [2024-12-11 10:03:26.099843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:33.098 [2024-12-11 10:03:26.099856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.098 [2024-12-11 10:03:26.099864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:33.098 [2024-12-11 10:03:26.099876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.098 
[2024-12-11 10:03:26.099884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:33.098 [2024-12-11 10:03:26.099896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:6912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.098 [2024-12-11 10:03:26.099903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:33.098 [2024-12-11 10:03:26.099915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:6920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.098 [2024-12-11 10:03:26.099923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:33.098 [2024-12-11 10:03:26.099935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.098 [2024-12-11 10:03:26.099942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:33.098 [2024-12-11 10:03:26.099954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:6936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.098 [2024-12-11 10:03:26.099961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:33.098 [2024-12-11 10:03:26.099974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:6944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.098 [2024-12-11 10:03:26.099980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:33.098 [2024-12-11 10:03:26.099992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:6952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.098 [2024-12-11 10:03:26.100008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:33.098 [2024-12-11 10:03:26.100021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:6960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.098 [2024-12-11 10:03:26.100028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:33.098 [2024-12-11 10:03:26.100041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.098 [2024-12-11 10:03:26.100049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:33.098 [2024-12-11 10:03:26.100061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.098 [2024-12-11 10:03:26.100067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:33.098 [2024-12-11 10:03:26.100079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6984 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:25:33.098 [2024-12-11 10:03:26.100086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:33.098 [2024-12-11 10:03:26.100099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.098 [2024-12-11 10:03:26.100106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:33.098 [2024-12-11 10:03:26.100118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:7000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.098 [2024-12-11 10:03:26.100125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:33.098 [2024-12-11 10:03:26.100137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.098 [2024-12-11 10:03:26.100145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:33.098 [2024-12-11 10:03:26.100158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:7016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.098 [2024-12-11 10:03:26.100165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:33.098 [2024-12-11 10:03:26.100177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:7024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.099 [2024-12-11 10:03:26.100184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:33.099 [2024-12-11 10:03:26.100196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:7032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.099 [2024-12-11 10:03:26.100203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:33.099 [2024-12-11 10:03:26.100223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.099 [2024-12-11 10:03:26.100231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:33.099 [2024-12-11 10:03:26.100243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:7048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.099 [2024-12-11 10:03:26.100250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.099 [2024-12-11 10:03:26.100265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:7056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.099 [2024-12-11 10:03:26.100272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.099 [2024-12-11 10:03:26.100284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:52 nsid:1 lba:7064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.099 [2024-12-11 10:03:26.100291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:33.099 [2024-12-11 10:03:26.100303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:7072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.099 [2024-12-11 10:03:26.100310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:33.099 [2024-12-11 10:03:26.100322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:7080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.099 [2024-12-11 10:03:26.100329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:33.099 [2024-12-11 10:03:26.100341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:7088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.099 [2024-12-11 10:03:26.100348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:33.099 [2024-12-11 10:03:26.100359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.099 [2024-12-11 10:03:26.100366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:33.099 [2024-12-11 10:03:26.100378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:7104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.099 [2024-12-11 10:03:26.100385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:33.099 [2024-12-11 10:03:26.100397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.099 [2024-12-11 10:03:26.100403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:33.099 [2024-12-11 10:03:26.100416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:7120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.099 [2024-12-11 10:03:26.100423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:33.099 [2024-12-11 10:03:26.100434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:7128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.099 [2024-12-11 10:03:26.100441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:33.099 [2024-12-11 10:03:26.100454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:7136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.099 [2024-12-11 10:03:26.100460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:33.099 [2024-12-11 10:03:26.100473] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:7144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.099 [2024-12-11 10:03:26.100480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:33.099 [2024-12-11 10:03:26.100493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:7152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.099 [2024-12-11 10:03:26.100500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:33.099 [2024-12-11 10:03:26.100512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:7160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.099 [2024-12-11 10:03:26.100519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:33.099 [2024-12-11 10:03:26.100532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:7168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.099 [2024-12-11 10:03:26.100539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:33.099 [2024-12-11 10:03:26.100551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:7176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.099 [2024-12-11 10:03:26.100558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:33.099 [2024-12-11 10:03:26.100570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.099 [2024-12-11 10:03:26.100578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:33.099 [2024-12-11 10:03:26.100590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:7192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.099 [2024-12-11 10:03:26.100596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:33.099 [2024-12-11 10:03:26.100608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.099 [2024-12-11 10:03:26.100615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:33.099 [2024-12-11 10:03:26.100627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.099 [2024-12-11 10:03:26.100634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:33.099 [2024-12-11 10:03:26.100646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:7216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.099 [2024-12-11 10:03:26.100653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:33.099 
[2024-12-11 10:03:26.100664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:7224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.099 [2024-12-11 10:03:26.100671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:33.099 [2024-12-11 10:03:26.100683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.099 [2024-12-11 10:03:26.100690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:33.099 [2024-12-11 10:03:26.100702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:7240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.099 [2024-12-11 10:03:26.100709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:33.099 [2024-12-11 10:03:26.100721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:6576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.099 [2024-12-11 10:03:26.100729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:33.099 [2024-12-11 10:03:26.100741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:6584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.099 [2024-12-11 10:03:26.100748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:33.099 [2024-12-11 10:03:26.100759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.099 [2024-12-11 10:03:26.100766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:33.099 [2024-12-11 10:03:26.100779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.099 [2024-12-11 10:03:26.100786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:33.099 [2024-12-11 10:03:26.100798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:6608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.099 [2024-12-11 10:03:26.100805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:33.099 [2024-12-11 10:03:26.100818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:6616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.099 [2024-12-11 10:03:26.100825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:33.099 [2024-12-11 10:03:26.101280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:6624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.099 [2024-12-11 10:03:26.101294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:17 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:33.099 [2024-12-11 10:03:26.101309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:7248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.099 [2024-12-11 10:03:26.101316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.099 [2024-12-11 10:03:26.101329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:7256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.099 [2024-12-11 10:03:26.101336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.100 [2024-12-11 10:03:26.101348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.100 [2024-12-11 10:03:26.101355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:33.100 [2024-12-11 10:03:26.101367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:7272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.100 [2024-12-11 10:03:26.101374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:33.100 [2024-12-11 10:03:26.101386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:7280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.100 [2024-12-11 10:03:26.101394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:33.100 [2024-12-11 10:03:26.101406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.100 [2024-12-11 10:03:26.101415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:33.100 [2024-12-11 10:03:26.101428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:7296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.100 [2024-12-11 10:03:26.101434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:33.100 [2024-12-11 10:03:26.101446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.100 [2024-12-11 10:03:26.101453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:33.100 [2024-12-11 10:03:26.101465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.100 [2024-12-11 10:03:26.101472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:33.100 [2024-12-11 10:03:26.101484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:7320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.100 [2024-12-11 10:03:26.101491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:33.100 [2024-12-11 10:03:26.101511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:7328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.100 [2024-12-11 10:03:26.101518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:33.100 [2024-12-11 10:03:26.101530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:7336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.100 [2024-12-11 10:03:26.101537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:33.100 [2024-12-11 10:03:26.101549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.100 [2024-12-11 10:03:26.101556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:33.100 [2024-12-11 10:03:26.101567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.100 [2024-12-11 10:03:26.101574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:33.100 [2024-12-11 10:03:26.101586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:7360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.100 [2024-12-11 10:03:26.101593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:33.100 [2024-12-11 10:03:26.101605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:7368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.100 [2024-12-11 10:03:26.101612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:33.100 [2024-12-11 10:03:26.101623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:7376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.100 [2024-12-11 10:03:26.101631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:33.100 [2024-12-11 10:03:26.101643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.100 [2024-12-11 10:03:26.101650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:33.100 [2024-12-11 10:03:26.101663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.100 [2024-12-11 10:03:26.101670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:33.100 [2024-12-11 10:03:26.101683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:7400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.100 [2024-12-11 10:03:26.101689] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:33.100 [2024-12-11 10:03:26.101701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:7408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.100 [2024-12-11 10:03:26.101708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:33.100 [2024-12-11 10:03:26.101720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:7416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.100 [2024-12-11 10:03:26.101727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:33.100 [2024-12-11 10:03:26.101739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.100 [2024-12-11 10:03:26.101746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:33.100 [2024-12-11 10:03:26.101758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.100 [2024-12-11 10:03:26.101764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:33.100 [2024-12-11 10:03:26.101776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.100 [2024-12-11 10:03:26.101783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:33.100 [2024-12-11 10:03:26.101795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:7448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.100 [2024-12-11 10:03:26.101802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:33.100 [2024-12-11 10:03:26.101814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:7456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.100 [2024-12-11 10:03:26.101821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:33.100 [2024-12-11 10:03:26.101833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.100 [2024-12-11 10:03:26.101840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:33.100 [2024-12-11 10:03:26.101853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.100 [2024-12-11 10:03:26.101859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:33.100 [2024-12-11 10:03:26.101871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.100 
[2024-12-11 10:03:26.101878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:33.100 [2024-12-11 10:03:26.101892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.100 [2024-12-11 10:03:26.101898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:33.100 [2024-12-11 10:03:26.101911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.100 [2024-12-11 10:03:26.101917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.100 [2024-12-11 10:03:26.101929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:7504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.100 [2024-12-11 10:03:26.101936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.100 [2024-12-11 10:03:26.101948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:7512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.100 [2024-12-11 10:03:26.101955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.100 [2024-12-11 10:03:26.101966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:7520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.100 [2024-12-11 10:03:26.101973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:33.100 [2024-12-11 10:03:26.101985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.100 [2024-12-11 10:03:26.101992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:33.100 [2024-12-11 10:03:26.102004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.100 [2024-12-11 10:03:26.102011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:33.100 [2024-12-11 10:03:26.102023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.101 [2024-12-11 10:03:26.102030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:33.101 [2024-12-11 10:03:26.102042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.101 [2024-12-11 10:03:26.102049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:33.101 [2024-12-11 10:03:26.102401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7560 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:33.101 [2024-12-11 10:03:26.102413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:33.101 [2024-12-11 10:03:26.102427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:7568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.101 [2024-12-11 10:03:26.102434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:33.101 [2024-12-11 10:03:26.102447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:7576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.101 [2024-12-11 10:03:26.102454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:33.101 [2024-12-11 10:03:26.102471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:6632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.101 [2024-12-11 10:03:26.102481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:33.101 [2024-12-11 10:03:26.102493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:6640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.101 [2024-12-11 10:03:26.102500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:33.101 [2024-12-11 10:03:26.102513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:6648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.101 [2024-12-11 10:03:26.102520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:33.101 [2024-12-11 10:03:26.102532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:6656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.101 [2024-12-11 10:03:26.102539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:33.101 [2024-12-11 10:03:26.102552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:6664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.101 [2024-12-11 10:03:26.102558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:33.101 [2024-12-11 10:03:26.102570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:6672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.101 [2024-12-11 10:03:26.102577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:33.101 [2024-12-11 10:03:26.102589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:6680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.101 [2024-12-11 10:03:26.102596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:33.101 [2024-12-11 10:03:26.102608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:70 nsid:1 lba:6688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.101 [2024-12-11 10:03:26.102615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:33.101 [2024-12-11 10:03:26.102627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.101 [2024-12-11 10:03:26.102633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:33.101 [2024-12-11 10:03:26.102645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:6696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.101 [2024-12-11 10:03:26.102652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:33.101 [2024-12-11 10:03:26.102664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:6704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.101 [2024-12-11 10:03:26.102671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:33.101 [2024-12-11 10:03:26.102683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:6712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.101 [2024-12-11 10:03:26.102690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:33.101 [2024-12-11 10:03:26.102702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:6720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.101 [2024-12-11 10:03:26.102709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:33.101 [2024-12-11 10:03:26.102724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.101 [2024-12-11 10:03:26.102730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:33.101 [2024-12-11 10:03:26.102743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:6736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.101 [2024-12-11 10:03:26.102749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:33.101 [2024-12-11 10:03:26.102761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:6744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.101 [2024-12-11 10:03:26.102768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:33.101 [2024-12-11 10:03:26.102781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.101 [2024-12-11 10:03:26.102788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:33.101 [2024-12-11 10:03:26.102800] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.101 [2024-12-11 10:03:26.102806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:33.101 [2024-12-11 10:03:26.102819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.101 [2024-12-11 10:03:26.102826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:33.101 [2024-12-11 10:03:26.102838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.101 [2024-12-11 10:03:26.102845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:33.101 [2024-12-11 10:03:26.102857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.101 [2024-12-11 10:03:26.102864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:33.101 [2024-12-11 10:03:26.102876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:6784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.101 [2024-12-11 10:03:26.102883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:33.101 [2024-12-11 10:03:26.102895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:6792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.101 [2024-12-11 10:03:26.102902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.101 [2024-12-11 10:03:26.102914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:6800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.101 [2024-12-11 10:03:26.102921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.101 [2024-12-11 10:03:26.102933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.101 [2024-12-11 10:03:26.102939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:33.101 [2024-12-11 10:03:26.102953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.101 [2024-12-11 10:03:26.102960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:33.101 [2024-12-11 10:03:26.102973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:6824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.101 [2024-12-11 10:03:26.102980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 
00:25:33.101 [2024-12-11 10:03:26.102992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:6832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.101 [2024-12-11 10:03:26.102999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:33.102 [2024-12-11 10:03:26.103011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.102 [2024-12-11 10:03:26.103018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:33.102 [2024-12-11 10:03:26.103031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.102 [2024-12-11 10:03:26.103037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:33.102 [2024-12-11 10:03:26.103049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:6856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.102 [2024-12-11 10:03:26.103056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:33.102 [2024-12-11 10:03:26.103068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:6864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.102 [2024-12-11 10:03:26.103075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:33.102 [2024-12-11 10:03:26.103087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:6872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.102 [2024-12-11 10:03:26.103093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:33.102 [2024-12-11 10:03:26.103105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:6880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.102 [2024-12-11 10:03:26.103112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:33.102 [2024-12-11 10:03:26.103125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:6888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.102 [2024-12-11 10:03:26.103133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:33.102 [2024-12-11 10:03:26.103145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.102 [2024-12-11 10:03:26.103151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:33.102 [2024-12-11 10:03:26.103163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.102 [2024-12-11 10:03:26.103170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:123 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:33.102 [2024-12-11 10:03:26.103182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:6912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.102 [2024-12-11 10:03:26.103190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:33.102 [2024-12-11 10:03:26.103202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:6920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.102 [2024-12-11 10:03:26.103209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:33.102 [2024-12-11 10:03:26.103226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.102 [2024-12-11 10:03:26.103235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:33.102 [2024-12-11 10:03:26.103248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:6936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.102 [2024-12-11 10:03:26.103254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:33.102 [2024-12-11 10:03:26.103266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:6944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.102 [2024-12-11 10:03:26.103273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:33.102 [2024-12-11 10:03:26.103285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.102 [2024-12-11 10:03:26.103292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:33.102 [2024-12-11 10:03:26.103304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:6960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.102 [2024-12-11 10:03:26.103311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:33.102 [2024-12-11 10:03:26.103323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:6968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.102 [2024-12-11 10:03:26.103329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:33.102 [2024-12-11 10:03:26.103341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:6976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.102 [2024-12-11 10:03:26.103348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:33.102 [2024-12-11 10:03:26.103360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:6984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.102 [2024-12-11 10:03:26.103367] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:25:33.102 [2024-12-11 10:03:26.103379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:6992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.102 [2024-12-11 10:03:26.103386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:25:33.102 [2024-12-11 10:03:26.103779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:7000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.102 [2024-12-11 10:03:26.103790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:25:33.102 [2024-12-11 10:03:26.103804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:7008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.102 [2024-12-11 10:03:26.103813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:25:33.102 [2024-12-11 10:03:26.103826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:7016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.102 [2024-12-11 10:03:26.103833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:25:33.102 [2024-12-11 10:03:26.103845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:7024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.102 [2024-12-11 10:03:26.103858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:25:33.102 [2024-12-11 10:03:26.103870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:7032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.102 [2024-12-11 10:03:26.103877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:25:33.102 [2024-12-11 10:03:26.103890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:7040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.102 [2024-12-11 10:03:26.103897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:25:33.102 [2024-12-11 10:03:26.103909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.102 [2024-12-11 10:03:26.103915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:33.102 [2024-12-11 10:03:26.103928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:7056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.102 [2024-12-11 10:03:26.103934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:33.102 [2024-12-11 10:03:26.103946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.102 [2024-12-11 10:03:26.103953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:25:33.102 [2024-12-11 10:03:26.103965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:7072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.102 [2024-12-11 10:03:26.103972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:25:33.102 [2024-12-11 10:03:26.103984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:7080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.102 [2024-12-11 10:03:26.103991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:25:33.102 [2024-12-11 10:03:26.104003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:7088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.102 [2024-12-11 10:03:26.104009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:25:33.102 [2024-12-11 10:03:26.104022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:7096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.102 [2024-12-11 10:03:26.104029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:25:33.102 [2024-12-11 10:03:26.104041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:7104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.102 [2024-12-11 10:03:26.104047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:25:33.102 [2024-12-11 10:03:26.104061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:7112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.102 [2024-12-11 10:03:26.104067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:25:33.102 [2024-12-11 10:03:26.104079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.102 [2024-12-11 10:03:26.104086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:25:33.102 [2024-12-11 10:03:26.104098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:7128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.103 [2024-12-11 10:03:26.104106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:25:33.103 [2024-12-11 10:03:26.104118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:7136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.103 [2024-12-11 10:03:26.104125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:25:33.103 [2024-12-11 10:03:26.104138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:7144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.103 [2024-12-11 10:03:26.104144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:25:33.103 [2024-12-11 10:03:26.104157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.103 [2024-12-11 10:03:26.104165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:25:33.103 [2024-12-11 10:03:26.104177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:7160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.103 [2024-12-11 10:03:26.104184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:25:33.103 [2024-12-11 10:03:26.104196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:7168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.103 [2024-12-11 10:03:26.104203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:25:33.103 [2024-12-11 10:03:26.104215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.103 [2024-12-11 10:03:26.104246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:25:33.103 [2024-12-11 10:03:26.104259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.103 [2024-12-11 10:03:26.104266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:25:33.103 [2024-12-11 10:03:26.104278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:7192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.103 [2024-12-11 10:03:26.104285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:25:33.103 [2024-12-11 10:03:26.104296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.103 [2024-12-11 10:03:26.104303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:25:33.103 [2024-12-11 10:03:26.104317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:7208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.103 [2024-12-11 10:03:26.104324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:25:33.103 [2024-12-11 10:03:26.104336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.103 [2024-12-11 10:03:26.104343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:25:33.103 [2024-12-11 10:03:26.104355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.103 [2024-12-11 10:03:26.104362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:25:33.103 [2024-12-11 10:03:26.104374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.103 [2024-12-11 10:03:26.104380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:25:33.103 [2024-12-11 10:03:26.104392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.103 [2024-12-11 10:03:26.104399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:25:33.103 [2024-12-11 10:03:26.104411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:33.103 [2024-12-11 10:03:26.104417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:25:33.103 [2024-12-11 10:03:26.104429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:6584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:33.103 [2024-12-11 10:03:26.104437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:25:33.103 [2024-12-11 10:03:26.104449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:6592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:33.103 [2024-12-11 10:03:26.104456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:25:33.103 [2024-12-11 10:03:26.114411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:33.103 [2024-12-11 10:03:26.114427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:25:33.103 [2024-12-11 10:03:26.114440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:6608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:33.103 [2024-12-11 10:03:26.114448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:25:33.103 [2024-12-11 10:03:26.114460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:6616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:33.103 [2024-12-11 10:03:26.114467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:25:33.103 [2024-12-11 10:03:26.114480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:33.103 [2024-12-11 10:03:26.114487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:25:33.103 [2024-12-11 10:03:26.114500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.103 [2024-12-11 10:03:26.114511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:33.103 [2024-12-11 10:03:26.114523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.103 [2024-12-11 10:03:26.114531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:33.103 [2024-12-11 10:03:26.114543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:7264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.103 [2024-12-11 10:03:26.114550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:25:33.103 [2024-12-11 10:03:26.114562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:7272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.103 [2024-12-11 10:03:26.114569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:25:33.103 [2024-12-11 10:03:26.114581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:7280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.103 [2024-12-11 10:03:26.114588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:25:33.103 [2024-12-11 10:03:26.114600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.103 [2024-12-11 10:03:26.114606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:25:33.103 [2024-12-11 10:03:26.114618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.103 [2024-12-11 10:03:26.114625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:25:33.103 [2024-12-11 10:03:26.114638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.103 [2024-12-11 10:03:26.114644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:25:33.103 [2024-12-11 10:03:26.114657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.104 [2024-12-11 10:03:26.114663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:25:33.104 [2024-12-11 10:03:26.115093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.104 [2024-12-11 10:03:26.115107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:25:33.104 [2024-12-11 10:03:26.115122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:7328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.104 [2024-12-11 10:03:26.115129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:25:33.104 [2024-12-11 10:03:26.115143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:7336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.104 [2024-12-11 10:03:26.115153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:25:33.104 [2024-12-11 10:03:26.115168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.104 [2024-12-11 10:03:26.115181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:25:33.104 [2024-12-11 10:03:26.115193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.104 [2024-12-11 10:03:26.115200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:25:33.104 [2024-12-11 10:03:26.115212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.104 [2024-12-11 10:03:26.115225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:25:33.104 [2024-12-11 10:03:26.115237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:7368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.104 [2024-12-11 10:03:26.115245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:25:33.104 [2024-12-11 10:03:26.115257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:7376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.104 [2024-12-11 10:03:26.115264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:33.104 [2024-12-11 10:03:26.115276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:7384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.104 [2024-12-11 10:03:26.115283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:25:33.104 [2024-12-11 10:03:26.115295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.104 [2024-12-11 10:03:26.115302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:25:33.104 [2024-12-11 10:03:26.115313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.104 [2024-12-11 10:03:26.115320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:25:33.104 [2024-12-11 10:03:26.115332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:7408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.104 [2024-12-11 10:03:26.115340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:25:33.104 [2024-12-11 10:03:26.115352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:7416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.104 [2024-12-11 10:03:26.115362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:25:33.104 [2024-12-11 10:03:26.115378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:7424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.104 [2024-12-11 10:03:26.115390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:33.104 [2024-12-11 10:03:26.115406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.104 [2024-12-11 10:03:26.115417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:25:33.104 [2024-12-11 10:03:26.115435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.104 [2024-12-11 10:03:26.115446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:25:33.104 [2024-12-11 10:03:26.115465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:7448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.104 [2024-12-11 10:03:26.115475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:25:33.104 [2024-12-11 10:03:26.115492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:7456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.104 [2024-12-11 10:03:26.115502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:25:33.104 [2024-12-11 10:03:26.115515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:7464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.104 [2024-12-11 10:03:26.115522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:25:33.104 [2024-12-11 10:03:26.115534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.104 [2024-12-11 10:03:26.115543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:25:33.104 [2024-12-11 10:03:26.115555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.104 [2024-12-11 10:03:26.115562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:25:33.104 [2024-12-11 10:03:26.115576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:7488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.104 [2024-12-11 10:03:26.115583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:25:33.104 [2024-12-11 10:03:26.115594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.104 [2024-12-11 10:03:26.115603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:33.104 [2024-12-11 10:03:26.115616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:7504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.104 [2024-12-11 10:03:26.115622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:33.104 [2024-12-11 10:03:26.115634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:7512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.104 [2024-12-11 10:03:26.115642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:33.104 [2024-12-11 10:03:26.115654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.104 [2024-12-11 10:03:26.115661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:25:33.104 [2024-12-11 10:03:26.115674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:7528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.104 [2024-12-11 10:03:26.115683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:25:33.104 [2024-12-11 10:03:26.115695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:7536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.104 [2024-12-11 10:03:26.115702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:25:33.104 [2024-12-11 10:03:26.115715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:7544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.104 [2024-12-11 10:03:26.115724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:25:33.104 [2024-12-11 10:03:26.115737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:7552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.104 [2024-12-11 10:03:26.115746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:25:33.104 [2024-12-11 10:03:26.115759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:7560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.104 [2024-12-11 10:03:26.115766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:25:33.104 [2024-12-11 10:03:26.115778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.104 [2024-12-11 10:03:26.115785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:25:33.104 [2024-12-11 10:03:26.115796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:7576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.104 [2024-12-11 10:03:26.115804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:25:33.104 [2024-12-11 10:03:26.115816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:6632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:33.104 [2024-12-11 10:03:26.115823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:25:33.105 [2024-12-11 10:03:26.115836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:33.105 [2024-12-11 10:03:26.115844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:25:33.105 [2024-12-11 10:03:26.115856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:33.105 [2024-12-11 10:03:26.115862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:25:33.105 [2024-12-11 10:03:26.115875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:33.105 [2024-12-11 10:03:26.115884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:25:33.105 [2024-12-11 10:03:26.115895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:33.105 [2024-12-11 10:03:26.115903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:25:33.105 [2024-12-11 10:03:26.115915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:6672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:33.105 [2024-12-11 10:03:26.115921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:25:33.105 [2024-12-11 10:03:26.115934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:6680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:33.105 [2024-12-11 10:03:26.115942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:25:33.105 [2024-12-11 10:03:26.115956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:6688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:33.105 [2024-12-11 10:03:26.115966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:25:33.105 [2024-12-11 10:03:26.115981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.105 [2024-12-11 10:03:26.115990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:25:33.105 [2024-12-11 10:03:26.116003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:33.105 [2024-12-11 10:03:26.116010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:25:33.105 [2024-12-11 10:03:26.116024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:6704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:33.105 [2024-12-11 10:03:26.116031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:25:33.105 [2024-12-11 10:03:26.116043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:6712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:33.105 [2024-12-11 10:03:26.116050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:25:33.105 [2024-12-11 10:03:26.116062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:6720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:33.105 [2024-12-11 10:03:26.116069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:25:33.105 [2024-12-11 10:03:26.116080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:6728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:33.105 [2024-12-11 10:03:26.116087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:25:33.105 [2024-12-11 10:03:26.116099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:33.105 [2024-12-11 10:03:26.116107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:25:33.105 [2024-12-11 10:03:26.116119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:33.105 [2024-12-11 10:03:26.116126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:25:33.105 [2024-12-11 10:03:26.116138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:6752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:33.105 [2024-12-11 10:03:26.116146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:25:33.105 [2024-12-11 10:03:26.116157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:7592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.105 [2024-12-11 10:03:26.116164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:25:33.105 [2024-12-11 10:03:26.116177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:33.105 [2024-12-11 10:03:26.116186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:25:33.105 [2024-12-11 10:03:26.116198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:33.105 [2024-12-11 10:03:26.116206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:25:33.105 [2024-12-11 10:03:26.116225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:6776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:33.105 [2024-12-11 10:03:26.116233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:25:33.105 [2024-12-11 10:03:26.116245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:6784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:33.105 [2024-12-11 10:03:26.116252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:25:33.105 [2024-12-11 10:03:26.116264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:6792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:33.105 [2024-12-11 10:03:26.116271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:33.105 [2024-12-11 10:03:26.116283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:33.105 [2024-12-11 10:03:26.116289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:33.105 [2024-12-11 10:03:26.116301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:33.105 [2024-12-11 10:03:26.116309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:25:33.105 [2024-12-11 10:03:26.116321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:33.105 [2024-12-11 10:03:26.116327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:25:33.105 [2024-12-11 10:03:26.116340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:6824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:33.105 [2024-12-11 10:03:26.116346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:25:33.105 [2024-12-11 10:03:26.116359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:33.105 [2024-12-11 10:03:26.116366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:25:33.105 [2024-12-11 10:03:26.116378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:6840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:33.105 [2024-12-11 10:03:26.116384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:25:33.105 [2024-12-11 10:03:26.116397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:6848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:33.105 [2024-12-11 10:03:26.116404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:25:33.105 [2024-12-11 10:03:26.116417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:33.105 [2024-12-11 10:03:26.116424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:25:33.105 [2024-12-11 10:03:26.116436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:6864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:33.105 [2024-12-11 10:03:26.116443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:25:33.105 [2024-12-11 10:03:26.116457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:6872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:33.105 [2024-12-11 10:03:26.116464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:25:33.105 [2024-12-11 10:03:26.116476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:6880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.105 [2024-12-11 10:03:26.116483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:25:33.105 [2024-12-11 10:03:26.116495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.105 [2024-12-11 10:03:26.116504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:25:33.105 [2024-12-11 10:03:26.116517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:6896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.105 [2024-12-11 10:03:26.116524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:25:33.105 [2024-12-11 10:03:26.116536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:6904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.106 [2024-12-11 10:03:26.116543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:25:33.106 [2024-12-11 10:03:26.116555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:6912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.106 [2024-12-11 10:03:26.116561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:25:33.106 [2024-12-11 10:03:26.116573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:6920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.106 [2024-12-11 10:03:26.116580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:25:33.106 [2024-12-11 10:03:26.116592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:6928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.106 [2024-12-11 10:03:26.116599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:25:33.106 [2024-12-11 10:03:26.116612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:6936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.106 [2024-12-11 10:03:26.116619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:25:33.106 [2024-12-11 10:03:26.116631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:6944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.106 [2024-12-11 10:03:26.116638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:25:33.106 [2024-12-11 10:03:26.116650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:6952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.106 [2024-12-11 10:03:26.116657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:25:33.106 [2024-12-11 10:03:26.116669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:6960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.106 [2024-12-11 10:03:26.116676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:25:33.106 [2024-12-11 10:03:26.116688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:6968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.106 [2024-12-11 10:03:26.116698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:25:33.106 [2024-12-11 10:03:26.116713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:6976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.106 [2024-12-11 10:03:26.116722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:25:33.106 [2024-12-11 10:03:26.116737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:6984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.106 [2024-12-11 10:03:26.116747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:25:33.106 [2024-12-11 10:03:26.117411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.106 [2024-12-11 10:03:26.117431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:25:33.106 [2024-12-11 10:03:26.117448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.106 [2024-12-11 10:03:26.117459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:25:33.106 [2024-12-11 10:03:26.117472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.106 [2024-12-11 10:03:26.117479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:25:33.106 [2024-12-11 10:03:26.117491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.106 [2024-12-11 10:03:26.117498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:25:33.106 [2024-12-11 10:03:26.117512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:7024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.106 [2024-12-11 10:03:26.117519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:25:33.106 [2024-12-11 10:03:26.117531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:7032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.106 [2024-12-11 10:03:26.117538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:25:33.106 [2024-12-11 10:03:26.117551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:7040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.106 [2024-12-11 10:03:26.117558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:25:33.106 [2024-12-11 10:03:26.117570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.106 [2024-12-11 10:03:26.117577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:33.106 [2024-12-11 10:03:26.117589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.106 [2024-12-11 10:03:26.117596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:33.106 [2024-12-11 10:03:26.117608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.106 [2024-12-11 10:03:26.117619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:25:33.106 [2024-12-11 10:03:26.117631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.106 [2024-12-11 10:03:26.117638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:25:33.106 [2024-12-11 10:03:26.117650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:7080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.106 [2024-12-11 10:03:26.117657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:25:33.106 [2024-12-11 10:03:26.117669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:7088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.106 [2024-12-11 10:03:26.117676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:25:33.106 [2024-12-11 10:03:26.117688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.106 [2024-12-11 10:03:26.117695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:25:33.106 [2024-12-11 10:03:26.117707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.106 [2024-12-11 10:03:26.117714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:25:33.106 [2024-12-11 10:03:26.117726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:7112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.106 [2024-12-11 10:03:26.117733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:25:33.106 [2024-12-11 10:03:26.117745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.106 [2024-12-11 10:03:26.117751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:25:33.106 [2024-12-11 10:03:26.117763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.106 [2024-12-11 10:03:26.117772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:25:33.106 [2024-12-11 10:03:26.117784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.106 [2024-12-11 10:03:26.117791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:25:33.106 [2024-12-11 10:03:26.117803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.106 [2024-12-11 10:03:26.117813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:25:33.106 [2024-12-11 10:03:26.117825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:7152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.106 [2024-12-11 10:03:26.117832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:25:33.106 [2024-12-11 10:03:26.117844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.106 [2024-12-11 10:03:26.117851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:25:33.106 [2024-12-11 10:03:26.117865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:7168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.106 [2024-12-11 10:03:26.117872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:25:33.106 [2024-12-11 10:03:26.117884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.106 [2024-12-11 10:03:26.117890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:25:33.106 [2024-12-11 10:03:26.117903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.106 [2024-12-11 10:03:26.117909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:25:33.106 [2024-12-11 10:03:26.117922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:7192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.107 [2024-12-11 10:03:26.117929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:25:33.107 [2024-12-11 10:03:26.117941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:7200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.107 [2024-12-11 10:03:26.117947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:25:33.107 [2024-12-11 10:03:26.117960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.107 [2024-12-11 10:03:26.117967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:25:33.107 [2024-12-11 10:03:26.117979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:7216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.107 [2024-12-11 10:03:26.117986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:25:33.107 [2024-12-11 10:03:26.117998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:7224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.107 [2024-12-11 10:03:26.118004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:25:33.107 [2024-12-11 10:03:26.118017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:7232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.107 [2024-12-11 10:03:26.118023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:25:33.107 [2024-12-11 10:03:26.118035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.107 [2024-12-11 10:03:26.118042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:25:33.107 [2024-12-11 10:03:26.118054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:6576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:33.107 [2024-12-11 10:03:26.118061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:25:33.107 [2024-12-11 10:03:26.118073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:6584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:33.107 [2024-12-11 10:03:26.118081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:25:33.107 [2024-12-11 10:03:26.118095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:6592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:33.107 [2024-12-11 10:03:26.118101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:25:33.107 [2024-12-11 10:03:26.118113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:6600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:33.107 [2024-12-11 10:03:26.118121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:25:33.107 [2024-12-11 10:03:26.118133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:6608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:33.107 [2024-12-11 10:03:26.118140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:25:33.107 [2024-12-11 10:03:26.118152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:33.107 [2024-12-11 10:03:26.118159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:25:33.107 [2024-12-11 10:03:26.118171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:33.107 [2024-12-11 10:03:26.118178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:25:33.107 [2024-12-11 10:03:26.118190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:7248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.107 [2024-12-11 10:03:26.118197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:33.107 [2024-12-11 10:03:26.118208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.107 [2024-12-11 10:03:26.118215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:33.107 [2024-12-11 10:03:26.118236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:7264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.107 [2024-12-11 10:03:26.118243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:25:33.107 [2024-12-11 10:03:26.118255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:7272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.107 [2024-12-11 10:03:26.118261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:25:33.107 [2024-12-11 10:03:26.118273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:7280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.107 [2024-12-11 10:03:26.118280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:25:33.107 [2024-12-11 10:03:26.118292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:7288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.107 [2024-12-11 10:03:26.118299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:25:33.107 [2024-12-11 10:03:26.118311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:7296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.107 [2024-12-11 10:03:26.118318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:25:33.107 [2024-12-11 10:03:26.118330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:7304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.107 [2024-12-11 10:03:26.118338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:25:33.107 [2024-12-11 10:03:26.118743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.107 [2024-12-11 10:03:26.118757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:25:33.107 [2024-12-11 10:03:26.118772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:7320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.107 [2024-12-11 10:03:26.118779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:25:33.107 [2024-12-11 10:03:26.118791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:7328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.107 [2024-12-11 10:03:26.118799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:25:33.107 [2024-12-11 10:03:26.118811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:7336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.107 [2024-12-11 10:03:26.118818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:25:33.107 [2024-12-11 10:03:26.118832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:7344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.107 [2024-12-11 10:03:26.118839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:25:33.107 [2024-12-11 10:03:26.118851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:7352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.107 [2024-12-11 10:03:26.118858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:25:33.107 [2024-12-11 10:03:26.118870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:7360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.107 [2024-12-11 10:03:26.118877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:25:33.107 [2024-12-11 10:03:26.118889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:7368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.107 [2024-12-11 10:03:26.118896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:25:33.107 [2024-12-11 10:03:26.118908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:7376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.107 [2024-12-11 10:03:26.118914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:33.107 [2024-12-11 10:03:26.118927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:7384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.107 [2024-12-11 10:03:26.118933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:25:33.107 [2024-12-11 10:03:26.118960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:7392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.107 [2024-12-11 10:03:26.118969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:25:33.107 [2024-12-11 10:03:26.118985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.107 [2024-12-11 10:03:26.118994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:25:33.107 [2024-12-11 10:03:26.119014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:7408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.107 [2024-12-11 10:03:26.119023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:25:33.107 [2024-12-11 10:03:26.119039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:7416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.107 [2024-12-11 10:03:26.119048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:25:33.107 [2024-12-11 10:03:26.119068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:7424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.107 [2024-12-11 10:03:26.119079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:33.108 [2024-12-11 10:03:26.119096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:7432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.108 [2024-12-11 10:03:26.119105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:25:33.108 [2024-12-11 10:03:26.119121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:7440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.108 [2024-12-11 10:03:26.119131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:25:33.108 [2024-12-11 10:03:26.119147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:7448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.108 [2024-12-11 10:03:26.119156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:25:33.108 [2024-12-11 10:03:26.119172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.108 [2024-12-11 10:03:26.119182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:25:33.108 [2024-12-11 10:03:26.119198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.108 [2024-12-11 10:03:26.125349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:25:33.108 [2024-12-11 10:03:26.125374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.108 [2024-12-11 10:03:26.125384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:25:33.108 [2024-12-11 10:03:26.125400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:7480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.108 [2024-12-11 10:03:26.125409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:25:33.108 [2024-12-11 10:03:26.125426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.108 [2024-12-11 10:03:26.125435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:25:33.108 [2024-12-11 10:03:26.125451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.108 [2024-12-11 10:03:26.125461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:33.108 [2024-12-11 10:03:26.125481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.108 [2024-12-11 10:03:26.125490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:33.108 [2024-12-11 10:03:26.125506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.108 [2024-12-11 10:03:26.125516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:33.108 [2024-12-11 10:03:26.125532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:7520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.108 [2024-12-11 10:03:26.125541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:25:33.108 [2024-12-11 10:03:26.125558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:7528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.108 [2024-12-11 10:03:26.125567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:25:33.108 [2024-12-11 10:03:26.125583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:7536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.108 [2024-12-11 10:03:26.125593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:25:33.108 [2024-12-11 10:03:26.125609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:7544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.108 [2024-12-11 10:03:26.125618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:25:33.108 [2024-12-11 10:03:26.125634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:7552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.108 [2024-12-11 10:03:26.125644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:25:33.108 [2024-12-11 10:03:26.125660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:7560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.108 [2024-12-11 10:03:26.125669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:25:33.108 [2024-12-11 10:03:26.125685] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.108 [2024-12-11 10:03:26.125695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:33.108 [2024-12-11 10:03:26.125711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:7576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.108 [2024-12-11 10:03:26.125720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:33.108 [2024-12-11 10:03:26.125737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:6632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.108 [2024-12-11 10:03:26.125746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:33.108 [2024-12-11 10:03:26.125762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:6640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.108 [2024-12-11 10:03:26.125771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:33.108 [2024-12-11 10:03:26.125788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:6648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.108 [2024-12-11 10:03:26.125799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:33.108 [2024-12-11 10:03:26.125816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.108 [2024-12-11 10:03:26.125825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:33.108 [2024-12-11 10:03:26.125841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.108 [2024-12-11 10:03:26.125851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:33.108 [2024-12-11 10:03:26.125867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:6672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.108 [2024-12-11 10:03:26.125876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:33.108 [2024-12-11 10:03:26.125893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:6680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.108 [2024-12-11 10:03:26.125902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:33.108 [2024-12-11 10:03:26.125918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:6688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.108 [2024-12-11 10:03:26.125928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 
00:25:33.108 [2024-12-11 10:03:26.125944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.108 [2024-12-11 10:03:26.125953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:33.108 [2024-12-11 10:03:26.125970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.108 [2024-12-11 10:03:26.125979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:33.108 [2024-12-11 10:03:26.125995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.108 [2024-12-11 10:03:26.126004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:33.108 [2024-12-11 10:03:26.126021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.108 [2024-12-11 10:03:26.126030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:33.108 [2024-12-11 10:03:26.126046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:6720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.109 [2024-12-11 10:03:26.126055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:33.109 [2024-12-11 10:03:26.126072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:6728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.109 [2024-12-11 10:03:26.126081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:33.109 [2024-12-11 10:03:26.126097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:6736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.109 [2024-12-11 10:03:26.126108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:33.109 [2024-12-11 10:03:26.126125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:6744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.109 [2024-12-11 10:03:26.126134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:33.109 [2024-12-11 10:03:26.126150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:6752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.109 [2024-12-11 10:03:26.126160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:33.109 [2024-12-11 10:03:26.126769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:7592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.109 [2024-12-11 10:03:26.126787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:17 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:33.109 [2024-12-11 10:03:26.126806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.109 [2024-12-11 10:03:26.126815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:33.109 [2024-12-11 10:03:26.126832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:6768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.109 [2024-12-11 10:03:26.126841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:33.109 [2024-12-11 10:03:26.126858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:6776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.109 [2024-12-11 10:03:26.126867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:33.109 [2024-12-11 10:03:26.126883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:6784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.109 [2024-12-11 10:03:26.126893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:33.109 [2024-12-11 10:03:26.126910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:6792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.109 [2024-12-11 10:03:26.126918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.109 [2024-12-11 10:03:26.126935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:6800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.109 [2024-12-11 10:03:26.126944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.109 [2024-12-11 10:03:26.126960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.109 [2024-12-11 10:03:26.126969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:33.109 [2024-12-11 10:03:26.126986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:6816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.109 [2024-12-11 10:03:26.126995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:33.109 [2024-12-11 10:03:26.127013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:6824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.109 [2024-12-11 10:03:26.127022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:33.109 [2024-12-11 10:03:26.127045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:6832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.109 [2024-12-11 10:03:26.127055] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:33.109 [2024-12-11 10:03:26.127072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:6840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.109 [2024-12-11 10:03:26.127081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:33.109 [2024-12-11 10:03:26.127098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.109 [2024-12-11 10:03:26.127108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:33.109 [2024-12-11 10:03:26.127125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:6856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.109 [2024-12-11 10:03:26.127135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:33.109 [2024-12-11 10:03:26.127154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:6864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.109 [2024-12-11 10:03:26.127163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:33.109 [2024-12-11 10:03:26.127180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:6872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.109 [2024-12-11 10:03:26.127190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:33.109 [2024-12-11 10:03:26.127207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:6880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.109 [2024-12-11 10:03:26.127225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:33.109 [2024-12-11 10:03:26.127244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:6888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.109 [2024-12-11 10:03:26.127253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:33.109 [2024-12-11 10:03:26.127269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:6896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.109 [2024-12-11 10:03:26.127279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:33.109 [2024-12-11 10:03:26.127295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:6904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.109 [2024-12-11 10:03:26.127305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:33.109 [2024-12-11 10:03:26.127321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:6912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:33.109 [2024-12-11 10:03:26.127330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:33.109 [2024-12-11 10:03:26.127347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:6920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.109 [2024-12-11 10:03:26.127357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:33.109 [2024-12-11 10:03:26.127375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:6928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.109 [2024-12-11 10:03:26.127386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:33.109 [2024-12-11 10:03:26.127402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.109 [2024-12-11 10:03:26.127412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:33.109 [2024-12-11 10:03:26.127428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:6944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.109 [2024-12-11 10:03:26.127438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:33.109 [2024-12-11 10:03:26.127454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:6952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.109 [2024-12-11 10:03:26.127463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:33.109 [2024-12-11 10:03:26.127480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:6960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.109 [2024-12-11 10:03:26.127490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:33.109 [2024-12-11 10:03:26.127506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:6968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.109 [2024-12-11 10:03:26.127516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:33.109 [2024-12-11 10:03:26.127533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:6976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.109 [2024-12-11 10:03:26.127542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:33.109 [2024-12-11 10:03:26.127558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:6984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.109 [2024-12-11 10:03:26.127567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:33.109 [2024-12-11 10:03:26.127584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:6992 len:8 
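For reference: the (03/02) pair printed with each completion above is the NVMe status code type and status code, SCT 0x3 (Path Related Status) / SC 0x02 (Asymmetric Access Inaccessible), i.e. the target is reporting the namespace's ANA state as inaccessible on this path, so every queued READ/WRITE on qid:1 fails. Below is a minimal standalone C sketch of how those fields unpack from completion dwords 2 and 3; decode_cqe is a hypothetical helper written for illustration, not SPDK code, with the field layout taken from the NVMe base spec CQE definition.

    /* Minimal sketch (assumption: standard NVMe CQE layout, not SPDK source):
     * DW2 = SQHD[15:0] | SQID[31:16]; DW3 = CID[15:0], P bit 16,
     * SC bits 24:17, SCT bits 27:25, M bit 30, DNR bit 31. */
    #include <stdint.h>
    #include <stdio.h>

    static void decode_cqe(uint32_t cdw0, uint32_t dw2, uint32_t dw3)
    {
        unsigned sqhd = dw2 & 0xffff;        /* submission queue head pointer */
        unsigned sqid = dw2 >> 16;           /* submission queue id ("qid") */
        unsigned cid  = dw3 & 0xffff;        /* command identifier */
        unsigned p    = (dw3 >> 16) & 0x1;   /* phase tag */
        unsigned sc   = (dw3 >> 17) & 0xff;  /* status code */
        unsigned sct  = (dw3 >> 25) & 0x7;   /* status code type */
        unsigned m    = (dw3 >> 30) & 0x1;   /* more bit */
        unsigned dnr  = (dw3 >> 31) & 0x1;   /* do-not-retry bit */

        printf("(%02x/%02x) qid:%u cid:%u cdw0:%u sqhd:%04x p:%u m:%u dnr:%u\n",
               sct, sc, sqid, cid, cdw0, sqhd, p, m, dnr);
        if (sct == 0x3 && sc == 0x02)
            printf("-> path related status: asymmetric access inaccessible\n");
    }

    int main(void)
    {
        /* Values matching the first completion in this log: sct=0x3, sc=0x02,
         * qid:1 cid:84 sqhd:006b, p/m/dnr all 0. */
        decode_cqe(0, (1u << 16) | 0x006b, (0x3u << 25) | (0x02u << 17) | 84);
        return 0;
    }

Note that dnr:0 in every completion means the do-not-retry bit is clear, so the host is allowed to resubmit, which is consistent with the same lba ranges reappearing in the retried WRITE/READ runs above.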
[... further WRITE print_command/print_completion pairs omitted ...]
00:25:33.110 [2024-12-11 10:03:26.128416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:6576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:33.110 [2024-12-11 10:03:26.128426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005a p:0 m:0 dnr:0
[... further READ and WRITE pairs omitted, all completing ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0 ...]
00:25:33.113 [2024-12-11 10:03:26.133210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:6912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.113 [2024-12-11 10:03:26.133231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02)
qid:1 cid:55 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:33.113 [2024-12-11 10:03:26.133255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.113 [2024-12-11 10:03:26.133267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:33.113 [2024-12-11 10:03:26.133290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.113 [2024-12-11 10:03:26.133303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:33.113 [2024-12-11 10:03:26.133325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.113 [2024-12-11 10:03:26.133338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:33.113 [2024-12-11 10:03:26.133361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.113 [2024-12-11 10:03:26.133373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:33.113 [2024-12-11 10:03:26.133396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.113 [2024-12-11 10:03:26.133409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:33.113 [2024-12-11 10:03:26.133431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.113 [2024-12-11 10:03:26.133444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:33.113 [2024-12-11 10:03:26.133466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.113 [2024-12-11 10:03:26.133479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:33.113 [2024-12-11 10:03:26.133501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:6976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.113 [2024-12-11 10:03:26.133514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:33.113 [2024-12-11 10:03:26.133539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:6984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.113 [2024-12-11 10:03:26.133552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:33.113 [2024-12-11 10:03:26.133575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.113 [2024-12-11 10:03:26.133587] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:33.113 [2024-12-11 10:03:26.133610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:7000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.113 [2024-12-11 10:03:26.133623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:33.113 [2024-12-11 10:03:26.133645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:7008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.113 [2024-12-11 10:03:26.133658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:33.113 [2024-12-11 10:03:26.133680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:7016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.113 [2024-12-11 10:03:26.133693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:33.113 [2024-12-11 10:03:26.133716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:7024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.113 [2024-12-11 10:03:26.133728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:33.113 [2024-12-11 10:03:26.133751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:7032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.113 [2024-12-11 10:03:26.133763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:33.113 [2024-12-11 10:03:26.133786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:7040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.113 [2024-12-11 10:03:26.133799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:33.113 [2024-12-11 10:03:26.133822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.113 [2024-12-11 10:03:26.133835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.113 [2024-12-11 10:03:26.133858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:7056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.113 [2024-12-11 10:03:26.133870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.113 [2024-12-11 10:03:26.133894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.113 [2024-12-11 10:03:26.133907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:33.113 [2024-12-11 10:03:26.133930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:7072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.113 [2024-12-11 10:03:26.133942] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:33.113 [2024-12-11 10:03:26.133964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:7080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.113 [2024-12-11 10:03:26.133979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:33.114 [2024-12-11 10:03:26.134002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:7088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.114 [2024-12-11 10:03:26.134015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:33.114 [2024-12-11 10:03:26.134037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:7096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.114 [2024-12-11 10:03:26.134050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:33.114 [2024-12-11 10:03:26.134073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:7104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.114 [2024-12-11 10:03:26.134085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:33.114 [2024-12-11 10:03:26.134108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:7112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.114 [2024-12-11 10:03:26.134121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:33.114 [2024-12-11 10:03:26.134143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.114 [2024-12-11 10:03:26.134156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:33.114 [2024-12-11 10:03:26.134179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:7128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.114 [2024-12-11 10:03:26.134191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:33.114 [2024-12-11 10:03:26.134215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:7136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.114 [2024-12-11 10:03:26.134234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:33.114 [2024-12-11 10:03:26.134257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.114 [2024-12-11 10:03:26.134271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:33.114 [2024-12-11 10:03:26.134295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.114 
[2024-12-11 10:03:26.134308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:33.114 [2024-12-11 10:03:26.134330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.114 [2024-12-11 10:03:26.134343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:33.114 [2024-12-11 10:03:26.134365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:7168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.114 [2024-12-11 10:03:26.134378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:33.114 [2024-12-11 10:03:26.134401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:7176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.114 [2024-12-11 10:03:26.134413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:33.114 [2024-12-11 10:03:26.134439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:7184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.114 [2024-12-11 10:03:26.134452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:33.114 [2024-12-11 10:03:26.134474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.114 [2024-12-11 10:03:26.134487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:33.114 [2024-12-11 10:03:26.134509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.114 [2024-12-11 10:03:26.134522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:33.114 [2024-12-11 10:03:26.134544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:7208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.114 [2024-12-11 10:03:26.134557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:33.114 [2024-12-11 10:03:26.134580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:7216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.114 [2024-12-11 10:03:26.134592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:33.114 [2024-12-11 10:03:26.134615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:7224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.114 [2024-12-11 10:03:26.134627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:33.114 [2024-12-11 10:03:26.134650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7232 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:25:33.114 [2024-12-11 10:03:26.134663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:33.114 [2024-12-11 10:03:26.134686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.114 [2024-12-11 10:03:26.134699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:33.114 [2024-12-11 10:03:26.134722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.114 [2024-12-11 10:03:26.134734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:33.114 [2024-12-11 10:03:26.134757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:6584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.114 [2024-12-11 10:03:26.134770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:33.114 [2024-12-11 10:03:26.134793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:6592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.114 [2024-12-11 10:03:26.134805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:33.114 [2024-12-11 10:03:26.134828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:6600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.114 [2024-12-11 10:03:26.134841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:33.114 [2024-12-11 10:03:26.134869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:6608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.114 [2024-12-11 10:03:26.134882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:33.114 [2024-12-11 10:03:26.134905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.114 [2024-12-11 10:03:26.134918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:33.114 [2024-12-11 10:03:26.134941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:6624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.114 [2024-12-11 10:03:26.134953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:33.114 [2024-12-11 10:03:26.134976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:7248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.114 [2024-12-11 10:03:26.134989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.114 [2024-12-11 10:03:26.135011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:23 nsid:1 lba:7256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.114 [2024-12-11 10:03:26.135024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.114 [2024-12-11 10:03:26.135046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.114 [2024-12-11 10:03:26.135059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:33.114 [2024-12-11 10:03:26.135082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:7272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.114 [2024-12-11 10:03:26.135095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:33.114 [2024-12-11 10:03:26.135117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:7280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.114 [2024-12-11 10:03:26.135130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:33.114 [2024-12-11 10:03:26.135153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:7288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.114 [2024-12-11 10:03:26.135166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:33.114 [2024-12-11 10:03:26.136272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.114 [2024-12-11 10:03:26.136291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:33.114 [2024-12-11 10:03:26.136317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:7304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.114 [2024-12-11 10:03:26.136330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:33.114 [2024-12-11 10:03:26.136353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:7312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.114 [2024-12-11 10:03:26.136365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:33.114 [2024-12-11 10:03:26.136389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:7320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.115 [2024-12-11 10:03:26.136405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:33.115 [2024-12-11 10:03:26.136428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:7328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.115 [2024-12-11 10:03:26.136441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:33.115 [2024-12-11 10:03:26.136464] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:7336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.115 [2024-12-11 10:03:26.136477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:33.115 [2024-12-11 10:03:26.136499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:7344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.115 [2024-12-11 10:03:26.136513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:33.115 [2024-12-11 10:03:26.136535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:7352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.115 [2024-12-11 10:03:26.136549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:33.115 [2024-12-11 10:03:26.136571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:7360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.115 [2024-12-11 10:03:26.136584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:33.115 [2024-12-11 10:03:26.136608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:7368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.115 [2024-12-11 10:03:26.136620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:33.115 [2024-12-11 10:03:26.136643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:7376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.115 [2024-12-11 10:03:26.136655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:33.115 [2024-12-11 10:03:26.136678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.115 [2024-12-11 10:03:26.136690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:33.115 [2024-12-11 10:03:26.136713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:7392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.115 [2024-12-11 10:03:26.136725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:33.115 [2024-12-11 10:03:26.136747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:7400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.115 [2024-12-11 10:03:26.136760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:33.115 [2024-12-11 10:03:26.136782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:7408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.115 [2024-12-11 10:03:26.136795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:33.115 
[2024-12-11 10:03:26.136817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:7416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.115 [2024-12-11 10:03:26.136832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:33.115 [2024-12-11 10:03:26.136855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:7424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.115 [2024-12-11 10:03:26.136867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:33.115 [2024-12-11 10:03:26.136890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:7432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.115 [2024-12-11 10:03:26.136902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:33.115 [2024-12-11 10:03:26.136925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.115 [2024-12-11 10:03:26.136938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:33.115 [2024-12-11 10:03:26.136960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.115 [2024-12-11 10:03:26.136973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:33.115 [2024-12-11 10:03:26.136995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.115 [2024-12-11 10:03:26.137008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:33.115 [2024-12-11 10:03:26.137031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:7464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.115 [2024-12-11 10:03:26.137044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:33.115 [2024-12-11 10:03:26.137067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.115 [2024-12-11 10:03:26.137080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:33.115 [2024-12-11 10:03:26.137102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.115 [2024-12-11 10:03:26.137115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:33.115 [2024-12-11 10:03:26.137137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.115 [2024-12-11 10:03:26.137150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 
sqhd:007f p:0 m:0 dnr:0 00:25:33.115 [2024-12-11 10:03:26.137173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.115 [2024-12-11 10:03:26.137185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.115 [2024-12-11 10:03:26.137207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:7504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.115 [2024-12-11 10:03:26.137227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.115 [2024-12-11 10:03:26.137250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:7512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.115 [2024-12-11 10:03:26.137263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.115 [2024-12-11 10:03:26.137287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:7520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.115 [2024-12-11 10:03:26.137300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:33.115 [2024-12-11 10:03:26.137323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:7528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.115 [2024-12-11 10:03:26.137335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:33.115 [2024-12-11 10:03:26.137358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:7536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.115 [2024-12-11 10:03:26.137370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:33.115 [2024-12-11 10:03:26.137393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:7544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.115 [2024-12-11 10:03:26.137405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:33.115 [2024-12-11 10:03:26.137428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.115 [2024-12-11 10:03:26.137441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:33.115 [2024-12-11 10:03:26.137465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:7560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.115 [2024-12-11 10:03:26.137478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:33.115 [2024-12-11 10:03:26.137501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:7568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.115 [2024-12-11 10:03:26.137514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:33.115 [2024-12-11 10:03:26.137537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.115 [2024-12-11 10:03:26.137549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:33.115 [2024-12-11 10:03:26.137572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:6632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.115 [2024-12-11 10:03:26.137584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:33.115 [2024-12-11 10:03:26.137608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.115 [2024-12-11 10:03:26.137620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:33.115 [2024-12-11 10:03:26.137643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.115 [2024-12-11 10:03:26.137655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:33.115 [2024-12-11 10:03:26.137678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:6656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.116 [2024-12-11 10:03:26.137691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:33.116 [2024-12-11 10:03:26.137715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:6664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.116 [2024-12-11 10:03:26.137728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:33.116 [2024-12-11 10:03:26.137751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:6672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.116 [2024-12-11 10:03:26.137763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:33.116 [2024-12-11 10:03:26.137786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.116 [2024-12-11 10:03:26.137799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:33.116 [2024-12-11 10:03:26.137822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.116 [2024-12-11 10:03:26.137834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:33.116 [2024-12-11 10:03:26.137857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:7584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.116 [2024-12-11 10:03:26.137870] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:33.116 [2024-12-11 10:03:26.137892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.116 [2024-12-11 10:03:26.137905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:33.116 [2024-12-11 10:03:26.137927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:6704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.116 [2024-12-11 10:03:26.137940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:33.116 [2024-12-11 10:03:26.137963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:6712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.116 [2024-12-11 10:03:26.137975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:33.116 [2024-12-11 10:03:26.137998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:6720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.116 [2024-12-11 10:03:26.138011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:33.116 [2024-12-11 10:03:26.138033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:6728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.116 [2024-12-11 10:03:26.138045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:33.116 [2024-12-11 10:03:26.138069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:6736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.116 [2024-12-11 10:03:26.138081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:33.116 [2024-12-11 10:03:26.138882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:6744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.116 [2024-12-11 10:03:26.138903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:33.116 [2024-12-11 10:03:26.138929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.116 [2024-12-11 10:03:26.138947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:33.116 [2024-12-11 10:03:26.138969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:7592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.116 [2024-12-11 10:03:26.138982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:33.116 [2024-12-11 10:03:26.139005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:6760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:33.116 [2024-12-11 10:03:26.139027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:33.116 [2024-12-11 10:03:26.139042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:6768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.116 [2024-12-11 10:03:26.139050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:33.116 [2024-12-11 10:03:26.139065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:6776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.116 [2024-12-11 10:03:26.139073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:33.116 [2024-12-11 10:03:26.139087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:6784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.116 [2024-12-11 10:03:26.139096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:33.116 [2024-12-11 10:03:26.139110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.116 [2024-12-11 10:03:26.139118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.116 [2024-12-11 10:03:26.139133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:6800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.116 [2024-12-11 10:03:26.139141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.116 [2024-12-11 10:03:26.139156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:6808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.116 [2024-12-11 10:03:26.139164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:33.116 [2024-12-11 10:03:26.139178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:6816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.116 [2024-12-11 10:03:26.139187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:33.116 [2024-12-11 10:03:26.139201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:6824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.116 [2024-12-11 10:03:26.139210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:33.116 [2024-12-11 10:03:26.139229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.116 [2024-12-11 10:03:26.139238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:33.116 [2024-12-11 10:03:26.139253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 
lba:6840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.116 [2024-12-11 10:03:26.139264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:33.116 [2024-12-11 10:03:26.139279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:6848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.116 [2024-12-11 10:03:26.139288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:33.116 [2024-12-11 10:03:26.139303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:6856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.116 [2024-12-11 10:03:26.139312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:33.116 [2024-12-11 10:03:26.139326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:6864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.116 [2024-12-11 10:03:26.139335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:33.116 [2024-12-11 10:03:26.139350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:6872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.116 [2024-12-11 10:03:26.139359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:33.116 [2024-12-11 10:03:26.139374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:6880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.116 [2024-12-11 10:03:26.139383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:33.116 [2024-12-11 10:03:26.139397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:6888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.116 [2024-12-11 10:03:26.139406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:33.116 [2024-12-11 10:03:26.139421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:6896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.116 [2024-12-11 10:03:26.139429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:33.116 [2024-12-11 10:03:26.139443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:6904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.116 [2024-12-11 10:03:26.139452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:33.116 [2024-12-11 10:03:26.139467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:6912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.116 [2024-12-11 10:03:26.139475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:33.116 [2024-12-11 10:03:26.139490] nvme_qpair.c: 
00:25:33.116 [2024-12-11 10:03:26.139] - 00:25:33.122 [2024-12-11 10:03:26.146] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: [repeated notices: WRITE commands (sqid:1, lba:6880-7592, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ commands (sqid:1, lba:6576-6832, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0]
[2024-12-11 10:03:26.146229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.123 [2024-12-11 10:03:26.146244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:7504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.123 [2024-12-11 10:03:26.146252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.123 [2024-12-11 10:03:26.146267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:7512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.123 [2024-12-11 10:03:26.146276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.123 [2024-12-11 10:03:26.146291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:7520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.123 [2024-12-11 10:03:26.146299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:33.123 [2024-12-11 10:03:26.146314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:7528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.123 [2024-12-11 10:03:26.146322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:33.123 [2024-12-11 10:03:26.146340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.123 [2024-12-11 10:03:26.146349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:33.123 [2024-12-11 10:03:26.146364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:7544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.123 [2024-12-11 10:03:26.146372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:33.123 [2024-12-11 10:03:26.146387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:7552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.123 [2024-12-11 10:03:26.146395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:33.123 [2024-12-11 10:03:26.146410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.123 [2024-12-11 10:03:26.146418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:33.123 [2024-12-11 10:03:26.146433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:7568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.123 [2024-12-11 10:03:26.146441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:33.123 [2024-12-11 10:03:26.146456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:7576 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:33.123 [2024-12-11 10:03:26.146465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:33.123 [2024-12-11 10:03:26.146480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.123 [2024-12-11 10:03:26.146488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:33.123 [2024-12-11 10:03:26.146504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:6640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.123 [2024-12-11 10:03:26.146513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:33.123 [2024-12-11 10:03:26.146528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:6648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.123 [2024-12-11 10:03:26.146537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:33.123 [2024-12-11 10:03:26.146551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:6656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.123 [2024-12-11 10:03:26.146559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:33.123 [2024-12-11 10:03:26.146574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.123 [2024-12-11 10:03:26.146582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:33.123 [2024-12-11 10:03:26.146597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.123 [2024-12-11 10:03:26.146605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:33.123 [2024-12-11 10:03:26.146621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.123 [2024-12-11 10:03:26.146629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:33.123 [2024-12-11 10:03:26.146644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.123 [2024-12-11 10:03:26.146652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:33.123 [2024-12-11 10:03:26.146667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.123 [2024-12-11 10:03:26.146675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:33.123 [2024-12-11 10:03:26.146690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:106 nsid:1 lba:6696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.123 [2024-12-11 10:03:26.146698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:33.123 [2024-12-11 10:03:26.146713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:6704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.123 [2024-12-11 10:03:26.146722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:33.123 [2024-12-11 10:03:26.146736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:6712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.123 [2024-12-11 10:03:26.146744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:33.123 [2024-12-11 10:03:26.146759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:6720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.123 [2024-12-11 10:03:26.146767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:33.123 [2024-12-11 10:03:26.147325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:6728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.123 [2024-12-11 10:03:26.147339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:33.123 [2024-12-11 10:03:26.147357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:6736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.123 [2024-12-11 10:03:26.147365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:33.123 [2024-12-11 10:03:26.147380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:6744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.123 [2024-12-11 10:03:26.147388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:33.123 [2024-12-11 10:03:26.147403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.123 [2024-12-11 10:03:26.147411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:33.123 [2024-12-11 10:03:26.147426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.123 [2024-12-11 10:03:26.147434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:33.123 [2024-12-11 10:03:26.147449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:6760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.123 [2024-12-11 10:03:26.147460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:33.123 [2024-12-11 10:03:26.147475] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:6768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.123 [2024-12-11 10:03:26.147483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:33.123 [2024-12-11 10:03:26.147498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:6776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.123 [2024-12-11 10:03:26.147506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:33.123 [2024-12-11 10:03:26.147520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.123 [2024-12-11 10:03:26.147529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:33.123 [2024-12-11 10:03:26.147543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:6792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.123 [2024-12-11 10:03:26.147552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.123 [2024-12-11 10:03:26.147566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:6800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.123 [2024-12-11 10:03:26.147574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.123 [2024-12-11 10:03:26.147589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:6808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.123 [2024-12-11 10:03:26.147597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:33.123 [2024-12-11 10:03:26.147612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:6816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.123 [2024-12-11 10:03:26.147620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:33.123 [2024-12-11 10:03:26.147635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:6824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.124 [2024-12-11 10:03:26.147643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:33.124 [2024-12-11 10:03:26.147658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:6832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.124 [2024-12-11 10:03:26.147666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:33.124 [2024-12-11 10:03:26.147681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:6840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.124 [2024-12-11 10:03:26.147689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0027 p:0 
m:0 dnr:0 00:25:33.124 [2024-12-11 10:03:26.147704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.124 [2024-12-11 10:03:26.147712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:33.124 [2024-12-11 10:03:26.147726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:6856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.124 [2024-12-11 10:03:26.147737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:33.124 [2024-12-11 10:03:26.147751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:6864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.124 [2024-12-11 10:03:26.147760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:33.124 [2024-12-11 10:03:26.147774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.124 [2024-12-11 10:03:26.147782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:33.124 [2024-12-11 10:03:26.147797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:6880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.124 [2024-12-11 10:03:26.147805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:33.124 [2024-12-11 10:03:26.147820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:6888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.124 [2024-12-11 10:03:26.147828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:33.124 [2024-12-11 10:03:26.147843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:6896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.124 [2024-12-11 10:03:26.147851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:33.124 [2024-12-11 10:03:26.147866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:6904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.124 [2024-12-11 10:03:26.147874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:33.124 [2024-12-11 10:03:26.147888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:6912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.124 [2024-12-11 10:03:26.147897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:33.124 [2024-12-11 10:03:26.147911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:6920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.124 [2024-12-11 10:03:26.147920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:77 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:33.124 [2024-12-11 10:03:26.147934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:6928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.124 [2024-12-11 10:03:26.147942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:33.124 [2024-12-11 10:03:26.147957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:6936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.124 [2024-12-11 10:03:26.147965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:33.124 [2024-12-11 10:03:26.147980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:6944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.124 [2024-12-11 10:03:26.147989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:33.124 [2024-12-11 10:03:26.148003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:6952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.124 [2024-12-11 10:03:26.148011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:33.124 [2024-12-11 10:03:26.148027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.124 [2024-12-11 10:03:26.148036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:33.124 [2024-12-11 10:03:26.148051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.124 [2024-12-11 10:03:26.148059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:33.124 [2024-12-11 10:03:26.148073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.124 [2024-12-11 10:03:26.148081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:33.124 [2024-12-11 10:03:26.148096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.124 [2024-12-11 10:03:26.148104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:33.124 [2024-12-11 10:03:26.148119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:6992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.124 [2024-12-11 10:03:26.148127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:33.124 [2024-12-11 10:03:26.148142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:7000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.124 [2024-12-11 10:03:26.148150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:33.124 [2024-12-11 10:03:26.148165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:7008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.124 [2024-12-11 10:03:26.148173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:33.124 [2024-12-11 10:03:26.148187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:7016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.124 [2024-12-11 10:03:26.148196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:33.124 [2024-12-11 10:03:26.148210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:7024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.124 [2024-12-11 10:03:26.148224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:33.124 [2024-12-11 10:03:26.148239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:7032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.124 [2024-12-11 10:03:26.148247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:33.124 [2024-12-11 10:03:26.148262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.124 [2024-12-11 10:03:26.148270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:33.124 [2024-12-11 10:03:26.148285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:7048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.124 [2024-12-11 10:03:26.148294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.124 [2024-12-11 10:03:26.148310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:7056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.124 [2024-12-11 10:03:26.148319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.124 [2024-12-11 10:03:26.148333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:7064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.124 [2024-12-11 10:03:26.148342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:33.124 [2024-12-11 10:03:26.148356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.124 [2024-12-11 10:03:26.148365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:33.124 [2024-12-11 10:03:26.148379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:7080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.124 [2024-12-11 10:03:26.148388] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:33.124 [2024-12-11 10:03:26.148402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:7088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.124 [2024-12-11 10:03:26.148411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:33.124 [2024-12-11 10:03:26.148425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.124 [2024-12-11 10:03:26.148433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:33.124 [2024-12-11 10:03:26.148448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.124 [2024-12-11 10:03:26.148456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:33.124 [2024-12-11 10:03:26.148471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:7112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.124 [2024-12-11 10:03:26.148479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:33.125 [2024-12-11 10:03:26.148494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.125 [2024-12-11 10:03:26.148502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:33.125 [2024-12-11 10:03:26.148517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:7128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.125 [2024-12-11 10:03:26.148525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:33.125 [2024-12-11 10:03:26.148540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.125 [2024-12-11 10:03:26.148548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:33.125 [2024-12-11 10:03:26.148562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.125 [2024-12-11 10:03:26.148570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:33.125 [2024-12-11 10:03:26.148586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.125 [2024-12-11 10:03:26.148597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:33.125 [2024-12-11 10:03:26.148611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.125 
[2024-12-11 10:03:26.148620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:33.125 [2024-12-11 10:03:26.148634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:7168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.125 [2024-12-11 10:03:26.148642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:33.125 [2024-12-11 10:03:26.148657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.125 [2024-12-11 10:03:26.148665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:33.125 [2024-12-11 10:03:26.148680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.125 [2024-12-11 10:03:26.148688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:33.125 [2024-12-11 10:03:26.148703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:7192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.125 [2024-12-11 10:03:26.148711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:33.125 [2024-12-11 10:03:26.148726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:7200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.125 [2024-12-11 10:03:26.148734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:33.125 [2024-12-11 10:03:26.148748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.125 [2024-12-11 10:03:26.148757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:33.125 [2024-12-11 10:03:26.148771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.125 [2024-12-11 10:03:26.148779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:33.125 [2024-12-11 10:03:26.148793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.125 [2024-12-11 10:03:26.148801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:33.125 [2024-12-11 10:03:26.148816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.125 [2024-12-11 10:03:26.148824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:33.125 [2024-12-11 10:03:26.148839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:7240 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:25:33.125 [2024-12-11 10:03:26.148848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:33.125 [2024-12-11 10:03:26.148863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:6576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.125 [2024-12-11 10:03:26.148871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:33.125 [2024-12-11 10:03:26.148887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:6584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.125 [2024-12-11 10:03:26.148895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:33.125 [2024-12-11 10:03:26.148910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:6592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.125 [2024-12-11 10:03:26.148918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:33.125 [2024-12-11 10:03:26.148932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.125 [2024-12-11 10:03:26.148952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:33.125 [2024-12-11 10:03:26.148964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.125 [2024-12-11 10:03:26.148971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:33.125 [2024-12-11 10:03:26.148983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:6616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.125 [2024-12-11 10:03:26.148990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:33.125 [2024-12-11 10:03:26.149559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:6624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.125 [2024-12-11 10:03:26.149572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:33.125 [2024-12-11 10:03:26.149586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:7248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.125 [2024-12-11 10:03:26.149593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.125 [2024-12-11 10:03:26.149605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.125 [2024-12-11 10:03:26.149612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.125 [2024-12-11 10:03:26.149624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:106 nsid:1 lba:7264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.125 [2024-12-11 10:03:26.149631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:33.125 [2024-12-11 10:03:26.149643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.125 [2024-12-11 10:03:26.149650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:33.125 [2024-12-11 10:03:26.149662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.125 [2024-12-11 10:03:26.149669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:33.125 [2024-12-11 10:03:26.149681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:7288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.125 [2024-12-11 10:03:26.149688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:33.125 [2024-12-11 10:03:26.149702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:7296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.125 [2024-12-11 10:03:26.149709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:33.125 [2024-12-11 10:03:26.149721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.125 [2024-12-11 10:03:26.149728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:33.125 [2024-12-11 10:03:26.149740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:7312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.125 [2024-12-11 10:03:26.149747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:33.125 [2024-12-11 10:03:26.149759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:7320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.125 [2024-12-11 10:03:26.149766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:33.125 [2024-12-11 10:03:26.149778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.125 [2024-12-11 10:03:26.149785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:33.125 [2024-12-11 10:03:26.149797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.125 [2024-12-11 10:03:26.149803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:33.125 [2024-12-11 10:03:26.149815] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:7344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.125 [2024-12-11 10:03:26.149822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:33.125 [2024-12-11 10:03:26.149834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:7352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.126 [2024-12-11 10:03:26.149841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:33.126 [2024-12-11 10:03:26.149853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.126 [2024-12-11 10:03:26.149860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:33.126 [2024-12-11 10:03:26.149872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:7368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.126 [2024-12-11 10:03:26.149879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:33.126 [2024-12-11 10:03:26.149891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:7376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.126 [2024-12-11 10:03:26.149898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:33.126 [2024-12-11 10:03:26.149910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.126 [2024-12-11 10:03:26.149917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:33.126 [2024-12-11 10:03:26.149929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:7392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.126 [2024-12-11 10:03:26.149937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:33.126 [2024-12-11 10:03:26.149949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:7400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.126 [2024-12-11 10:03:26.149956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:33.126 [2024-12-11 10:03:26.149968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:7408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.126 [2024-12-11 10:03:26.149975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:33.126 [2024-12-11 10:03:26.149987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:7416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.126 [2024-12-11 10:03:26.149993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:33.126 
[2024-12-11 10:03:26.150005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:7424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.126 [2024-12-11 10:03:26.150012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:33.126 [2024-12-11 10:03:26.150024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:7432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.126 [2024-12-11 10:03:26.150031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:33.126 [2024-12-11 10:03:26.150043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.126 [2024-12-11 10:03:26.150050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:33.126 [2024-12-11 10:03:26.150062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.126 [2024-12-11 10:03:26.150069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:33.126 [2024-12-11 10:03:26.150081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.126 [2024-12-11 10:03:26.150088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:33.126 [2024-12-11 10:03:26.150100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.126 [2024-12-11 10:03:26.150107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:33.126 [2024-12-11 10:03:26.150120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:7472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.126 [2024-12-11 10:03:26.150126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:33.126 [2024-12-11 10:03:26.150139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.126 [2024-12-11 10:03:26.150146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:33.126 [2024-12-11 10:03:26.150158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.126 [2024-12-11 10:03:26.150166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:33.126 [2024-12-11 10:03:26.150178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.126 [2024-12-11 10:03:26.150185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:33.126 [2024-12-11 10:03:26.150197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:7504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.126 [2024-12-11 10:03:26.150204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.126 [2024-12-11 10:03:26.150216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:7512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.126 [2024-12-11 10:03:26.150228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.126 [2024-12-11 10:03:26.150240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:7520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.126 [2024-12-11 10:03:26.150247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:33.126 [2024-12-11 10:03:26.150259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:7528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.126 [2024-12-11 10:03:26.150266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:33.126 [2024-12-11 10:03:26.150278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:7536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.126 [2024-12-11 10:03:26.150285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:33.126 [2024-12-11 10:03:26.150297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:7544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.126 [2024-12-11 10:03:26.150303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:33.126 [2024-12-11 10:03:26.150315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.126 [2024-12-11 10:03:26.150322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:33.126 [2024-12-11 10:03:26.150334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:7560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.126 [2024-12-11 10:03:26.150341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:33.126 [2024-12-11 10:03:26.150352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:7568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.126 [2024-12-11 10:03:26.150359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:33.126 [2024-12-11 10:03:26.150371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:7576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.126 [2024-12-11 10:03:26.150378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:33.126 [2024-12-11 10:03:26.150390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:6632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.126 [2024-12-11 10:03:26.150396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:33.126 [2024-12-11 10:03:26.150410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:6640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.126 [2024-12-11 10:03:26.150417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:33.126 [2024-12-11 10:03:26.150429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:6648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.126 [2024-12-11 10:03:26.150436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:33.126 [2024-12-11 10:03:26.150448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:6656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.126 [2024-12-11 10:03:26.150454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:33.126 [2024-12-11 10:03:26.150466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:6664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.126 [2024-12-11 10:03:26.150473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:33.126 [2024-12-11 10:03:26.150486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:6672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.126 [2024-12-11 10:03:26.150492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:33.126 [2024-12-11 10:03:26.150504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:6680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.126 [2024-12-11 10:03:26.150511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:33.126 [2024-12-11 10:03:26.150523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:6688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.126 [2024-12-11 10:03:26.150530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:33.127 [2024-12-11 10:03:26.150542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:7584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.127 [2024-12-11 10:03:26.150549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:33.127 [2024-12-11 10:03:26.150561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.127 [2024-12-11 10:03:26.150567] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:33.127 [2024-12-11 10:03:26.150580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:6704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.127 [2024-12-11 10:03:26.150586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:33.127 [2024-12-11 10:03:26.150599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:6712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.127 [2024-12-11 10:03:26.150606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:33.127 [2024-12-11 10:03:26.151044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:6720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.127 [2024-12-11 10:03:26.151054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:33.127 [2024-12-11 10:03:26.151069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.127 [2024-12-11 10:03:26.151077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:33.127 [2024-12-11 10:03:26.151089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.127 [2024-12-11 10:03:26.151096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:33.127 [2024-12-11 10:03:26.151108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:6744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.127 [2024-12-11 10:03:26.151114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:33.127 [2024-12-11 10:03:26.151126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:6752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.127 [2024-12-11 10:03:26.151133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:33.127 [2024-12-11 10:03:26.151145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:7592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.127 [2024-12-11 10:03:26.151152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:33.127 [2024-12-11 10:03:26.151164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:6760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.127 [2024-12-11 10:03:26.151172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:33.127 [2024-12-11 10:03:26.151184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:6768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.127 
[2024-12-11 10:03:26.151190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:33.127 [2024-12-11 10:03:26.151203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:6776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.127 [2024-12-11 10:03:26.151210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:33.127 [2024-12-11 10:03:26.151229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.127 [2024-12-11 10:03:26.151236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:33.127 [2024-12-11 10:03:26.151248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:6792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.127 [2024-12-11 10:03:26.151255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.127 [2024-12-11 10:03:26.151267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:6800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.127 [2024-12-11 10:03:26.151274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.127 [2024-12-11 10:03:26.151286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.127 [2024-12-11 10:03:26.151293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:33.127 [2024-12-11 10:03:26.151305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:6816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.127 [2024-12-11 10:03:26.151314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:33.127 [2024-12-11 10:03:26.151326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:6824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.127 [2024-12-11 10:03:26.151333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:33.127 [2024-12-11 10:03:26.151345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.127 [2024-12-11 10:03:26.151352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:33.127 [2024-12-11 10:03:26.151364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:6840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.127 [2024-12-11 10:03:26.151371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:33.127 [2024-12-11 10:03:26.151383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:6848 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.127 [2024-12-11 10:03:26.151389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:33.127 [2024-12-11 10:03:26.151401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:6856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.127 [2024-12-11 10:03:26.151408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:33.127 [2024-12-11 10:03:26.151420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:6864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.127 [2024-12-11 10:03:26.151427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:33.127 [2024-12-11 10:03:26.151439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:6872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.127 [2024-12-11 10:03:26.151446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:33.127 [2024-12-11 10:03:26.151458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:6880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.127 [2024-12-11 10:03:26.151465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:33.127 [2024-12-11 10:03:26.151477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.127 [2024-12-11 10:03:26.151483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:33.127 [2024-12-11 10:03:26.151496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.127 [2024-12-11 10:03:26.151502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:33.127 [2024-12-11 10:03:26.151514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.128 [2024-12-11 10:03:26.151521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:33.128 [2024-12-11 10:03:26.151533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.128 [2024-12-11 10:03:26.151541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:33.128 [2024-12-11 10:03:26.151553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.128 [2024-12-11 10:03:26.151560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:33.128 [2024-12-11 10:03:26.151773] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.128 [2024-12-11 10:03:26.151782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:33.128 [2024-12-11 10:03:26.151796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.128 [2024-12-11 10:03:26.151803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:33.128 [2024-12-11 10:03:26.151815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:6944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.128 [2024-12-11 10:03:26.151822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:33.128 [2024-12-11 10:03:26.151834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:6952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.128 [2024-12-11 10:03:26.151841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:33.128 [2024-12-11 10:03:26.151853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.128 [2024-12-11 10:03:26.151860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:33.128 [2024-12-11 10:03:26.151872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:6968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.128 [2024-12-11 10:03:26.151879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:33.128 [2024-12-11 10:03:26.151891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:6976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.128 [2024-12-11 10:03:26.151897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:33.128 [2024-12-11 10:03:26.151909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.128 [2024-12-11 10:03:26.151916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:33.128 [2024-12-11 10:03:26.151928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.128 [2024-12-11 10:03:26.151935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:33.128 [2024-12-11 10:03:26.151947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:7000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.128 [2024-12-11 10:03:26.151954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:33.128 [2024-12-11 10:03:26.151966] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:7008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.128 [2024-12-11 10:03:26.151973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:33.128 [2024-12-11 10:03:26.151987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.128 [2024-12-11 10:03:26.151994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:33.128 [2024-12-11 10:03:26.152006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:7024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.128 [2024-12-11 10:03:26.152013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:33.128 [2024-12-11 10:03:26.152025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.128 [2024-12-11 10:03:26.152032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:33.128 [2024-12-11 10:03:26.152044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:7040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.128 [2024-12-11 10:03:26.152051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:33.128 [2024-12-11 10:03:26.152063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:7048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.128 [2024-12-11 10:03:26.152070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.128 [2024-12-11 10:03:26.152082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:7056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.128 [2024-12-11 10:03:26.152088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.128 [2024-12-11 10:03:26.152100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:7064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.128 [2024-12-11 10:03:26.152108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:33.128 [2024-12-11 10:03:26.152119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:7072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.128 [2024-12-11 10:03:26.152126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:33.128 [2024-12-11 10:03:26.152138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:7080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.128 [2024-12-11 10:03:26.152145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 
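Each failed I/O in this stretch is reported as a pair of NOTICE lines: nvme_io_qpair_print_command() dumps the submitted command (opcode, sqid, cid, nsid, lba, len and SGL descriptor) and spdk_nvme_print_completion() dumps the matching completion. The status here is ASYMMETRIC ACCESS INACCESSIBLE, printed as (03/02): status code type 0x3 (path related) with status code 0x2, followed by the completion's cdw0, submission queue head (sqhd), phase (p), more (m) and do-not-retry (dnr) bits; dnr:0 means the command may legitimately be retried once the ANA state changes. A minimal sketch of how an I/O completion callback could recognize this error from struct spdk_nvme_cpl (field and constant names as in spdk/nvme_spec.h; the callback itself is hypothetical):

```c
#include <stdio.h>

#include "spdk/nvme.h" /* struct spdk_nvme_cpl and the status helpers */

/* Hypothetical I/O completion callback: classify the (03/02) status seen in
 * this log, i.e. SCT 0x3 (path related) / SC 0x2 (ANA inaccessible). */
static void
io_complete(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	(void)cb_arg;

	if (spdk_nvme_cpl_is_success(cpl)) {
		return;
	}

	/* The same fields spdk_nvme_print_completion() reports above. */
	printf("cid:%u cdw0:%#x sqhd:%#x p:%u m:%u dnr:%u sct:%#x sc:%#x\n",
	       cpl->cid, cpl->cdw0, cpl->sqhd,
	       cpl->status.p, cpl->status.m, cpl->status.dnr,
	       cpl->status.sct, cpl->status.sc);

	if (cpl->status.sct == SPDK_NVME_SCT_PATH &&
	    cpl->status.sc == SPDK_NVME_SC_ASYMMETRIC_ACCESS_INACCESSIBLE) {
		/* The namespace is unreachable on this path while it is in
		 * the ANA Inaccessible state; with dnr:0 the I/O is
		 * retryable, ideally on a path in an optimized ANA state. */
	}
}
```

Multipath-aware layers typically react to such path errors by failing the I/O over to another controller rather than surfacing the error to the application.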
00:25:33.128 [2024-12-11 10:03:26.152157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.128 [2024-12-11 10:03:26.152164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:33.128 [2024-12-11 10:03:26.152176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:7096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.128 [2024-12-11 10:03:26.152183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:33.128 [2024-12-11 10:03:26.152195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:7104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.128 [2024-12-11 10:03:26.152202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:33.128 [2024-12-11 10:03:26.152214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.128 [2024-12-11 10:03:26.152229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:33.128 [2024-12-11 10:03:26.152242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.128 [2024-12-11 10:03:26.152248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:33.128 [2024-12-11 10:03:26.152260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.128 [2024-12-11 10:03:26.152267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:33.128 [2024-12-11 10:03:26.152279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:7136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.128 [2024-12-11 10:03:26.152286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:33.128 [2024-12-11 10:03:26.152298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:7144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.128 [2024-12-11 10:03:26.152305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:33.128 [2024-12-11 10:03:26.152317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:7152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.128 [2024-12-11 10:03:26.152324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:33.128 [2024-12-11 10:03:26.152336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.128 [2024-12-11 10:03:26.152343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:110 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:33.128 [2024-12-11 10:03:26.152355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.128 [2024-12-11 10:03:26.152362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:33.128 [2024-12-11 10:03:26.152374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:7176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.128 [2024-12-11 10:03:26.152380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:33.128 [2024-12-11 10:03:26.152392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:7184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.128 [2024-12-11 10:03:26.152399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:33.128 [2024-12-11 10:03:26.152411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:7192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.128 [2024-12-11 10:03:26.152418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:33.128 [2024-12-11 10:03:26.152430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.129 [2024-12-11 10:03:26.152436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:33.129 [2024-12-11 10:03:26.152448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.129 [2024-12-11 10:03:26.152457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:33.129 [2024-12-11 10:03:26.152469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:7216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.129 [2024-12-11 10:03:26.152476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:33.129 [2024-12-11 10:03:26.152488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:7224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.129 [2024-12-11 10:03:26.152495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:33.129 [2024-12-11 10:03:26.152507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:7232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.129 [2024-12-11 10:03:26.152514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:33.129 [2024-12-11 10:03:26.152526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.129 [2024-12-11 10:03:26.152533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:33.129 [2024-12-11 10:03:26.152545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:6576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.129 [2024-12-11 10:03:26.152551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:33.129 [2024-12-11 10:03:26.152564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.129 [2024-12-11 10:03:26.152570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:33.129 [2024-12-11 10:03:26.152582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:6592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.129 [2024-12-11 10:03:26.152589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:33.129 [2024-12-11 10:03:26.152601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:6600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.129 [2024-12-11 10:03:26.152608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:33.129 [2024-12-11 10:03:26.152620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:6608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.129 [2024-12-11 10:03:26.152627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:33.129 [2024-12-11 10:03:26.152639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:6616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.129 [2024-12-11 10:03:26.152646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:33.129 [2024-12-11 10:03:26.152658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:6624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.129 [2024-12-11 10:03:26.152665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:33.129 [2024-12-11 10:03:26.152687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:7248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.129 [2024-12-11 10:03:26.152694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.129 [2024-12-11 10:03:26.152708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:7256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.129 [2024-12-11 10:03:26.152714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.129 [2024-12-11 10:03:26.152727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.129 [2024-12-11 10:03:26.152733] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:33.129 [2024-12-11 10:03:26.152745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:7272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.129 [2024-12-11 10:03:26.152752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:33.129 [2024-12-11 10:03:26.152764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:7280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.129 [2024-12-11 10:03:26.152771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:33.129 [2024-12-11 10:03:26.152783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:7288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.129 [2024-12-11 10:03:26.152790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:33.129 [2024-12-11 10:03:26.152802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:7296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.129 [2024-12-11 10:03:26.152808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:33.129 [2024-12-11 10:03:26.152821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:7304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.129 [2024-12-11 10:03:26.152828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:33.129 [2024-12-11 10:03:26.153292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:7312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.129 [2024-12-11 10:03:26.153304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:33.129 [2024-12-11 10:03:26.153319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:7320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.129 [2024-12-11 10:03:26.153326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:33.129 [2024-12-11 10:03:26.153338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:7328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.129 [2024-12-11 10:03:26.153345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:33.129 [2024-12-11 10:03:26.153357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:7336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.129 [2024-12-11 10:03:26.153364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:33.129 [2024-12-11 10:03:26.153376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:7344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:33.129 [2024-12-11 10:03:26.153383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:33.129 [2024-12-11 10:03:26.153398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.129 [2024-12-11 10:03:26.153404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:33.129 [2024-12-11 10:03:26.153416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:7360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.129 [2024-12-11 10:03:26.153423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:33.129 [2024-12-11 10:03:26.153435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:7368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.129 [2024-12-11 10:03:26.153442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:33.129 [2024-12-11 10:03:26.153454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:7376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.129 [2024-12-11 10:03:26.153461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:33.129 [2024-12-11 10:03:26.153473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:7384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.129 [2024-12-11 10:03:26.153480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:33.129 [2024-12-11 10:03:26.153491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:7392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.129 [2024-12-11 10:03:26.153498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:33.129 [2024-12-11 10:03:26.153510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:7400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.129 [2024-12-11 10:03:26.153517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:33.129 [2024-12-11 10:03:26.153529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.129 [2024-12-11 10:03:26.153536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:33.129 [2024-12-11 10:03:26.153548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.129 [2024-12-11 10:03:26.153555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:33.129 [2024-12-11 10:03:26.153566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7424 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.129 [2024-12-11 10:03:26.153573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:33.129 [2024-12-11 10:03:26.153585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:7432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.129 [2024-12-11 10:03:26.153592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:33.130 [2024-12-11 10:03:26.153604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.130 [2024-12-11 10:03:26.153611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:33.130 [2024-12-11 10:03:26.153623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.130 [2024-12-11 10:03:26.153631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:33.130 [2024-12-11 10:03:26.153643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.130 [2024-12-11 10:03:26.153650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:33.130 [2024-12-11 10:03:26.153662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.130 [2024-12-11 10:03:26.153669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:33.130 [2024-12-11 10:03:26.153681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:7472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.130 [2024-12-11 10:03:26.153688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:33.130 [2024-12-11 10:03:26.153701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:7480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.130 [2024-12-11 10:03:26.153708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:33.130 [2024-12-11 10:03:26.153719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:7488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.130 [2024-12-11 10:03:26.153727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:33.130 [2024-12-11 10:03:26.153739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:7496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.130 [2024-12-11 10:03:26.153745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.130 [2024-12-11 10:03:26.153757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:66 nsid:1 lba:7504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.130 [2024-12-11 10:03:26.153764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.130 [2024-12-11 10:03:26.153776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:7512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.130 [2024-12-11 10:03:26.153783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.130 [2024-12-11 10:03:26.153795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.130 [2024-12-11 10:03:26.153801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:33.130 [2024-12-11 10:03:26.153814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:7528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.130 [2024-12-11 10:03:26.153820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:33.130 [2024-12-11 10:03:26.153832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:7536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.130 [2024-12-11 10:03:26.153839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:33.130 [2024-12-11 10:03:26.153851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.130 [2024-12-11 10:03:26.153861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:33.130 [2024-12-11 10:03:26.153874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:7552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.130 [2024-12-11 10:03:26.153881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:33.130 [2024-12-11 10:03:26.153893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:7560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.130 [2024-12-11 10:03:26.153900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:33.130 [2024-12-11 10:03:26.154176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.130 [2024-12-11 10:03:26.154186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:33.130 [2024-12-11 10:03:26.154199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.130 [2024-12-11 10:03:26.154206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:33.130 [2024-12-11 10:03:26.154224] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:6632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.130 [2024-12-11 10:03:26.154231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:33.130 [2024-12-11 10:03:26.154243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:6640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.130 [2024-12-11 10:03:26.154250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:33.130 [2024-12-11 10:03:26.154262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.130 [2024-12-11 10:03:26.154269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:33.130 [2024-12-11 10:03:26.154281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.130 [2024-12-11 10:03:26.154288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:33.130 [2024-12-11 10:03:26.154300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.130 [2024-12-11 10:03:26.154307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:33.130 [2024-12-11 10:03:26.154319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.130 [2024-12-11 10:03:26.154326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:33.130 [2024-12-11 10:03:26.154338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:6680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.130 [2024-12-11 10:03:26.154345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:33.130 [2024-12-11 10:03:26.154357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:6688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.130 [2024-12-11 10:03:26.154364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:33.130 [2024-12-11 10:03:26.154378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.130 [2024-12-11 10:03:26.154385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:33.130 [2024-12-11 10:03:26.154397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:6696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.130 [2024-12-11 10:03:26.154404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 
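Two SGL forms appear in the command dumps above: the WRITEs are printed with "SGL DATA BLOCK OFFSET 0x0 len:0x1000", an offset-based data block descriptor that, on NVMe/TCP, describes in-capsule data carried along with the command, while the READs show "SGL TRANSPORT DATA BLOCK TRANSPORT 0x0", a transport data block descriptor whose payload the transport moves in its own data PDUs. A rough re-creation of the branch that selects those strings, assuming the descriptor layout and constants from spdk/nvme_spec.h (the actual printer lives in lib/nvme/nvme_qpair.c and differs in detail):

```c
#include <inttypes.h>
#include <stdio.h>

#include "spdk/nvme_spec.h" /* struct spdk_nvme_cmd, SGL type/subtype enums */

/* Rough re-creation of the SGL branch behind the two command-dump forms in
 * this log; the real printer is in lib/nvme/nvme_qpair.c. */
static void
print_sgl(const struct spdk_nvme_cmd *cmd)
{
	const struct spdk_nvme_sgl_descriptor *sgl = &cmd->dptr.sgl1;

	if (sgl->generic.type == SPDK_NVME_SGL_TYPE_DATA_BLOCK &&
	    sgl->generic.subtype == SPDK_NVME_SGL_SUBTYPE_OFFSET) {
		/* Offset-based data block: in-capsule data on NVMe/TCP,
		 * the address field is an offset into the capsule. */
		printf("SGL DATA BLOCK OFFSET 0x%" PRIx64 " len:0x%x\n",
		       sgl->address, sgl->unkeyed.length);
	} else if (sgl->generic.type == SPDK_NVME_SGL_TYPE_TRANSPORT_DATA_BLOCK) {
		/* Transport data block: the transport (here TCP) moves the
		 * payload in its own H2C/C2H data PDUs. */
		printf("SGL TRANSPORT DATA BLOCK TRANSPORT 0x%" PRIx64 "\n",
		       sgl->address);
	}
}
```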
00:25:33.130 [2024-12-11 10:03:26.154416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:6704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.130 [2024-12-11 10:03:26.154423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:33.130 [2024-12-11 10:03:26.154435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:6712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.130 [2024-12-11 10:03:26.154442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:33.130 [2024-12-11 10:03:26.154454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:6720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.130 [2024-12-11 10:03:26.154461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:33.130 [2024-12-11 10:03:26.154473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.130 [2024-12-11 10:03:26.154480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:33.130 [2024-12-11 10:03:26.154492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.130 [2024-12-11 10:03:26.154499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:33.131 [2024-12-11 10:03:26.154511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:6744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.131 [2024-12-11 10:03:26.154518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:33.131 [2024-12-11 10:03:26.154529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:6752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.131 [2024-12-11 10:03:26.154536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:33.131 [2024-12-11 10:03:26.154548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:7592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.131 [2024-12-11 10:03:26.154555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:33.131 [2024-12-11 10:03:26.154567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:6760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.131 [2024-12-11 10:03:26.154574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:33.131 [2024-12-11 10:03:26.154586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:6768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.131 [2024-12-11 10:03:26.154593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:93 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:33.131 [2024-12-11 10:03:26.154606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:6776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.131 [2024-12-11 10:03:26.154613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:33.131 [2024-12-11 10:03:26.154625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:6784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.131 [2024-12-11 10:03:26.154632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:33.131 [2024-12-11 10:03:26.154644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.131 [2024-12-11 10:03:26.154651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.131 [2024-12-11 10:03:26.154663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:6800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.131 [2024-12-11 10:03:26.154669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.131 [2024-12-11 10:03:26.154682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:6808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.131 [2024-12-11 10:03:26.154690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:33.131 [2024-12-11 10:03:26.154702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.131 [2024-12-11 10:03:26.154709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:33.131 [2024-12-11 10:03:26.154721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:6824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.131 [2024-12-11 10:03:26.154728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:33.131 [2024-12-11 10:03:26.154740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:6832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.131 [2024-12-11 10:03:26.154747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:33.131 [2024-12-11 10:03:26.154759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.131 [2024-12-11 10:03:26.154766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:33.131 [2024-12-11 10:03:26.154778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:6848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.131 [2024-12-11 10:03:26.154785] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:33.131 [2024-12-11 10:03:26.154797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:6856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.131 [2024-12-11 10:03:26.154804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:33.131 [2024-12-11 10:03:26.154816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:6864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.131 [2024-12-11 10:03:26.154823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:33.131 [2024-12-11 10:03:26.154835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:6872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.131 [2024-12-11 10:03:26.154843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:33.131 [2024-12-11 10:03:26.154855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:6880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.131 [2024-12-11 10:03:26.154863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:33.131 [2024-12-11 10:03:26.154875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:6888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.131 [2024-12-11 10:03:26.154882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:33.131 [2024-12-11 10:03:26.154894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.131 [2024-12-11 10:03:26.154900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:33.131 [2024-12-11 10:03:26.154913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.131 [2024-12-11 10:03:26.154920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:33.131 [2024-12-11 10:03:26.155202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.131 [2024-12-11 10:03:26.155211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:33.131 [2024-12-11 10:03:26.155230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:6920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.131 [2024-12-11 10:03:26.155237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:33.131 [2024-12-11 10:03:26.155249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:6928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.131 
00:25:33.131 [2024-12-11 10:03:26.155-10:03:26.160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: repeated READ/WRITE commands on sqid:1 nsid:1, len:8 each (WRITE lba:6880-7592 SGL DATA BLOCK OFFSET 0x0 len:0x1000; READ lba:6576-6872 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0)
00:25:33.137 [2024-12-11 10:03:26.155-10:03:26.160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: each command above completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0
00:25:33.137 [placeholder: several hundred near-identical command/completion NOTICE pairs collapsed; per-command cid and sqhd values omitted]
00:25:33.137 [2024-12-11 10:03:26.160275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.137 [2024-12-11 10:03:26.160282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:33.137 [2024-12-11 10:03:26.160297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:7440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.137 [2024-12-11 10:03:26.160304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:33.137 [2024-12-11 10:03:26.160319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:7448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.137 [2024-12-11 10:03:26.160326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:33.137 [2024-12-11 10:03:26.160341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:7456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.137 [2024-12-11 10:03:26.160348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:33.137 [2024-12-11 10:03:26.160363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:7464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.137 [2024-12-11 10:03:26.160371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:33.137 [2024-12-11 10:03:26.160386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:7472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.137 [2024-12-11 10:03:26.160393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:33.137 [2024-12-11 10:03:26.160407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:7480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.137 [2024-12-11 10:03:26.160414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:33.137 [2024-12-11 10:03:26.160429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.137 [2024-12-11 10:03:26.160436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:33.137 [2024-12-11 10:03:26.160451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:7496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.137 [2024-12-11 10:03:26.160457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.137 [2024-12-11 10:03:26.160472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:7504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.137 [2024-12-11 10:03:26.160479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.137 [2024-12-11 10:03:26.160494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.137 [2024-12-11 10:03:26.160501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.137 [2024-12-11 10:03:26.160515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:7520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.137 [2024-12-11 10:03:26.160522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:33.137 [2024-12-11 10:03:26.160537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:7528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.137 [2024-12-11 10:03:26.160544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:33.137 [2024-12-11 10:03:26.160659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.137 [2024-12-11 10:03:26.160668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:33.137 [2024-12-11 10:03:26.160685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.137 [2024-12-11 10:03:26.160692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:33.137 [2024-12-11 10:03:26.160708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:7552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.137 [2024-12-11 10:03:26.160715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:33.137 [2024-12-11 10:03:26.160731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:7560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.137 [2024-12-11 10:03:26.160740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:33.137 [2024-12-11 10:03:26.160756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.137 [2024-12-11 10:03:26.160765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:33.137 [2024-12-11 10:03:26.160781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:7576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.137 [2024-12-11 10:03:26.160788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:33.137 [2024-12-11 10:03:26.160805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.137 [2024-12-11 10:03:26.160814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:33.137 [2024-12-11 10:03:26.160830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.137 [2024-12-11 10:03:26.160838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:33.137 [2024-12-11 10:03:26.160855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:6648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.137 [2024-12-11 10:03:26.160861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:33.137 [2024-12-11 10:03:26.160878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:6656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.137 [2024-12-11 10:03:26.160886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:33.137 [2024-12-11 10:03:26.160902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:6664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.138 [2024-12-11 10:03:26.160909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:33.138 [2024-12-11 10:03:26.160926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:6672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.138 [2024-12-11 10:03:26.160933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:33.138 [2024-12-11 10:03:26.160949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:6680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.138 [2024-12-11 10:03:26.160956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:33.138 [2024-12-11 10:03:26.160973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:6688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.138 [2024-12-11 10:03:26.160980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:33.138 [2024-12-11 10:03:26.160997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:7584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.138 [2024-12-11 10:03:26.161003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:33.138 [2024-12-11 10:03:26.161020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.138 [2024-12-11 10:03:26.161028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:33.138 [2024-12-11 10:03:26.161046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.138 [2024-12-11 10:03:26.161053] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:33.138 [2024-12-11 10:03:26.161106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:6712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.138 [2024-12-11 10:03:26.161114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:33.138 [2024-12-11 10:03:26.161132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:6720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.138 [2024-12-11 10:03:26.161140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:33.138 [2024-12-11 10:03:26.161157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:6728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.138 [2024-12-11 10:03:26.161164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:33.138 [2024-12-11 10:03:26.161182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:6736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.138 [2024-12-11 10:03:26.161191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:33.138 [2024-12-11 10:03:26.161208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:6744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.138 [2024-12-11 10:03:26.161215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:33.138 [2024-12-11 10:03:26.161243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:6752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.138 [2024-12-11 10:03:26.161250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:33.138 [2024-12-11 10:03:26.161268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.138 [2024-12-11 10:03:26.161276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:33.138 [2024-12-11 10:03:26.161293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.138 [2024-12-11 10:03:26.161300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:33.138 [2024-12-11 10:03:26.161317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:6768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.138 [2024-12-11 10:03:26.161324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:33.138 [2024-12-11 10:03:26.161342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:6776 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:33.138 [2024-12-11 10:03:26.161349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:33.138 [2024-12-11 10:03:26.161367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.138 [2024-12-11 10:03:26.161374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:33.138 [2024-12-11 10:03:26.161393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:6792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.138 [2024-12-11 10:03:26.161400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.138 [2024-12-11 10:03:26.161417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:6800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.138 [2024-12-11 10:03:26.161424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.138 [2024-12-11 10:03:26.161441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.138 [2024-12-11 10:03:26.161448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:33.138 [2024-12-11 10:03:26.161466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:6816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.138 [2024-12-11 10:03:26.161472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:33.138 [2024-12-11 10:03:26.161490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:6824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.138 [2024-12-11 10:03:26.161497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:33.138 [2024-12-11 10:03:26.161514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:6832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.138 [2024-12-11 10:03:26.161521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:33.138 [2024-12-11 10:03:26.161539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:6840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.138 [2024-12-11 10:03:26.161546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:33.138 [2024-12-11 10:03:26.161563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:6848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.138 [2024-12-11 10:03:26.161571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:33.138 [2024-12-11 10:03:26.161588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:83 nsid:1 lba:6856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.138 [2024-12-11 10:03:26.161596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:33.138 [2024-12-11 10:03:26.161614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.138 [2024-12-11 10:03:26.161621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:33.138 [2024-12-11 10:03:26.161639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.138 [2024-12-11 10:03:26.161646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:33.138 [2024-12-11 10:03:26.161663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:6880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.138 [2024-12-11 10:03:26.161670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:33.138 [2024-12-11 10:03:26.161688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:6888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.138 [2024-12-11 10:03:26.161696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:33.138 11406.00 IOPS, 44.55 MiB/s [2024-12-11T09:03:42.713Z] 10591.29 IOPS, 41.37 MiB/s [2024-12-11T09:03:42.713Z] 9885.20 IOPS, 38.61 MiB/s [2024-12-11T09:03:42.713Z] 9390.81 IOPS, 36.68 MiB/s [2024-12-11T09:03:42.713Z] 9516.65 IOPS, 37.17 MiB/s [2024-12-11T09:03:42.713Z] 9640.11 IOPS, 37.66 MiB/s [2024-12-11T09:03:42.713Z] 9818.53 IOPS, 38.35 MiB/s [2024-12-11T09:03:42.713Z] 9998.75 IOPS, 39.06 MiB/s [2024-12-11T09:03:42.713Z] 10158.10 IOPS, 39.68 MiB/s [2024-12-11T09:03:42.713Z] 10226.77 IOPS, 39.95 MiB/s [2024-12-11T09:03:42.713Z] 10282.78 IOPS, 40.17 MiB/s [2024-12-11T09:03:42.713Z] 10342.79 IOPS, 40.40 MiB/s [2024-12-11T09:03:42.713Z] 10464.68 IOPS, 40.88 MiB/s [2024-12-11T09:03:42.713Z] 10581.00 IOPS, 41.33 MiB/s [2024-12-11T09:03:42.713Z] [2024-12-11 10:03:39.936363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:32864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.138 [2024-12-11 10:03:39.936400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.138 [2024-12-11 10:03:39.936450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:32880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.138 [2024-12-11 10:03:39.936460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.138 [2024-12-11 10:03:39.936473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:32896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.138 [2024-12-11 10:03:39.936481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:33.138 [2024-12-11 
10:03:39.936493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:32912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.139 [2024-12-11 10:03:39.936500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:33.139 [2024-12-11 10:03:39.936524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:32928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.139 [2024-12-11 10:03:39.936531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:33.139 [2024-12-11 10:03:39.936543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:32944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.139 [2024-12-11 10:03:39.936550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:33.139 [2024-12-11 10:03:39.936562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:32960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.139 [2024-12-11 10:03:39.936568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:33.139 [2024-12-11 10:03:39.936580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:32976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.139 [2024-12-11 10:03:39.936587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:33.139 [2024-12-11 10:03:39.936599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:32992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.139 [2024-12-11 10:03:39.936605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:33.139 [2024-12-11 10:03:39.936618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:33008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.139 [2024-12-11 10:03:39.936624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:33.139 [2024-12-11 10:03:39.936641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:33024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.139 [2024-12-11 10:03:39.936648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:33.139 [2024-12-11 10:03:39.936660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:33040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.139 [2024-12-11 10:03:39.936667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:33.139 [2024-12-11 10:03:39.936679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:33056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.139 [2024-12-11 10:03:39.936686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 
sqhd:006d p:0 m:0 dnr:0 00:25:33.139 [2024-12-11 10:03:39.936697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.139 [2024-12-11 10:03:39.936704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:33.139 [2024-12-11 10:03:39.936716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.139 [2024-12-11 10:03:39.936723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:33.139 [2024-12-11 10:03:39.937654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:33104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.139 [2024-12-11 10:03:39.937672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:33.139 [2024-12-11 10:03:39.937688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:33120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.139 [2024-12-11 10:03:39.937695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:33.139 [2024-12-11 10:03:39.937708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:33136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.139 [2024-12-11 10:03:39.937715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:33.139 [2024-12-11 10:03:39.937727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:33152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.139 [2024-12-11 10:03:39.937735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:33.139 [2024-12-11 10:03:39.937747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:33168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.139 [2024-12-11 10:03:39.937753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:33.139 [2024-12-11 10:03:39.937765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:33184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.139 [2024-12-11 10:03:39.937772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:33.139 [2024-12-11 10:03:39.937784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:33200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.139 [2024-12-11 10:03:39.937791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:33.139 [2024-12-11 10:03:39.937804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:32712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.139 [2024-12-11 10:03:39.937815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:33.139 [2024-12-11 10:03:39.937828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:33224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.139 [2024-12-11 10:03:39.937835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:33.139 [2024-12-11 10:03:39.937847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:33240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.139 [2024-12-11 10:03:39.937854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:33.139 [2024-12-11 10:03:39.937866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:33256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.139 [2024-12-11 10:03:39.937873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:33.139 [2024-12-11 10:03:39.937885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:33272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.139 [2024-12-11 10:03:39.937892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:33.139 [2024-12-11 10:03:39.937904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:33288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.139 [2024-12-11 10:03:39.937911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:33.139 [2024-12-11 10:03:39.937923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.139 [2024-12-11 10:03:39.937929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:33.139 [2024-12-11 10:03:39.937942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.139 [2024-12-11 10:03:39.937949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:33.139 [2024-12-11 10:03:39.937961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:33336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.139 [2024-12-11 10:03:39.937967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:33.139 [2024-12-11 10:03:39.937979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:33352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.139 [2024-12-11 10:03:39.937986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.139 [2024-12-11 10:03:39.937998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:33368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.139 [2024-12-11 10:03:39.938005] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.139 [2024-12-11 10:03:39.938017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:33384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.139 [2024-12-11 10:03:39.938024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.139 [2024-12-11 10:03:39.938036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.139 [2024-12-11 10:03:39.938045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:33.139 [2024-12-11 10:03:39.938058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:33416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.139 [2024-12-11 10:03:39.938064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:33.139 [2024-12-11 10:03:39.938076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:33432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.139 [2024-12-11 10:03:39.938083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:33.139 [2024-12-11 10:03:39.938095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:33448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.139 [2024-12-11 10:03:39.938102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:33.139 [2024-12-11 10:03:39.938114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:33464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.139 [2024-12-11 10:03:39.938122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:33.139 [2024-12-11 10:03:39.938134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.139 [2024-12-11 10:03:39.938141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:33.140 [2024-12-11 10:03:39.938153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:33496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.140 [2024-12-11 10:03:39.938160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:33.140 [2024-12-11 10:03:39.938172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.140 [2024-12-11 10:03:39.938179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:33.140 [2024-12-11 10:03:39.938191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:33528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.140 
[2024-12-11 10:03:39.938198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:33.140 [2024-12-11 10:03:39.938210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:33544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.140 [2024-12-11 10:03:39.938222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:33.140 [2024-12-11 10:03:39.938235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.140 [2024-12-11 10:03:39.938241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:33.140 [2024-12-11 10:03:39.938253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.140 [2024-12-11 10:03:39.938260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:33.140 [2024-12-11 10:03:39.938272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:33592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.140 [2024-12-11 10:03:39.938279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:33.140 [2024-12-11 10:03:39.938293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:33608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.140 [2024-12-11 10:03:39.938300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:33.140 [2024-12-11 10:03:39.938312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:33624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.140 [2024-12-11 10:03:39.938319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:33.140 [2024-12-11 10:03:39.938618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:32752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.140 [2024-12-11 10:03:39.938628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:33.140 [2024-12-11 10:03:39.938642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:32784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.140 [2024-12-11 10:03:39.938649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:33.140 [2024-12-11 10:03:39.938661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:32816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.140 [2024-12-11 10:03:39.938669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:33.140 [2024-12-11 10:03:39.938681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:32848 len:8 
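In both condensed bursts every completion carries the same status: (03/02) is the sct/sc pair, status code type 3h (path related) with status code 02h, i.e. the controller answering this qpair reports its ANA group as INACCESSIBLE, which is what this multipath-status test is provoking. A quick way to confirm no other status is hiding in the noise is a grep/awk pass over a saved copy of the console output (a sketch; build.log is a stand-in file name, not something this job produces under that name):

  # Tally completion statuses per queue from a saved copy of this console log.
  grep '474:spdk_nvme_print_completion' build.log |
    awk '{
      status = ""; qid = ""
      for (i = 1; i <= NF; i++) {
        if ($i ~ /^\([0-9a-f]+\/[0-9a-f]+\)$/) status = $i   # e.g. (03/02)
        if ($i ~ /^qid:/) qid = $i                           # e.g. qid:1
      }
      if (status != "" && qid != "") count[qid " " status]++
    }
    END { for (k in count) printf "%-16s %d completions\n", k, count[k] }'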
00:25:33.140 10649.11 IOPS, 41.60 MiB/s [2024-12-11T09:03:42.715Z]
10678.07 IOPS, 41.71 MiB/s [2024-12-11T09:03:42.715Z]
Received shutdown signal, test time was about 28.953848 seconds
00:25:33.140
00:25:33.140                                       Latency(us)
00:25:33.140 [2024-12-11T09:03:42.715Z] Device Information : runtime(s)     IOPS    MiB/s  Fail/s  TO/s   Average       min        max
00:25:33.140 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:25:33.140 Verification LBA range: start 0x0 length 0x4000
00:25:33.140 Nvme0n1            :      28.95      10695.56    41.78    0.00  0.00  11945.42   1263.91 3083812.08
00:25:33.140 [2024-12-11T09:03:42.715Z] ===================================================================================================================
00:25:33.140 [2024-12-11T09:03:42.715Z] Total              :                10695.56    41.78    0.00  0.00  11945.42   1263.91 3083812.08
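The throughput column above is self-consistent: the job ran 4096-byte verify I/O, and at that block size the reported IOPS converts to exactly the printed MiB/s figure. A one-line cross-check (plain awk arithmetic, nothing job-specific):

  # 4 KiB per I/O: IOPS * 4096 bytes, expressed in MiB/s.
  awk 'BEGIN { printf "%.2f MiB/s\n", 10695.56 * 4096 / (1024 * 1024) }'
  # -> 41.78 MiB/s, matching the Nvme0n1 and Total rows.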
00:25:33.140 10:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:33.400 10:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:25:33.400 10:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:33.400 10:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:25:33.400 10:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:25:33.400 10:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:25:33.400 10:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:33.400 10:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:25:33.400 10:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:33.400 10:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:25:33.400 10:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:33.400 10:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:25:33.400 10:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:25:33.400 10:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 197051 ']'
00:25:33.400 10:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 197051
00:25:33.400 10:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 197051 ']'
00:25:33.400 10:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 197051
00:25:33.400 10:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:25:33.400 10:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:33.400 10:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 197051
00:25:33.400 10:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:25:33.400 10:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:25:33.400 10:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 197051'
killing process with pid 197051
00:25:33.400 10:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 197051
00:25:33.400 10:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 197051
00:25:33.659 10:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:25:33.659 10:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:25:33.659 10:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:25:33.659 10:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:25:33.659 10:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save
00:25:33.659 10:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:25:33.659 10:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore
00:25:33.659 10:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:25:33.659 10:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns
00:25:33.659 10:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:33.659 10:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:33.659 10:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:35.566 10:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:25:35.566
00:25:35.566 real    0m42.085s
00:25:35.566 user    1m52.007s
00:25:35.566 sys     0m12.155s
00:25:35.566 10:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:35.566 10:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:25:35.566 ************************************
00:25:35.566 END TEST nvmf_host_multipath_status
00:25:35.566 ************************************
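The nvmfcleanup steps just traced (nvmf/common.sh@124-@129) wrap the transport unload in a {1..20} retry loop with errexit disabled. The log only shows the first, successful pass, so the loop body below is a sketch of the idiom rather than the verbatim nvmf/common.sh source; the break-on-success and the sleep between attempts are assumptions:

  # Unload-retry idiom: tolerate transient "module in use" failures while
  # the last nvme-tcp references drain, then restore errexit.
  set +e
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && break   # prints "rmmod ..." once it succeeds
      sleep 1                            # assumed backoff between attempts
  done
  modprobe -v -r nvme-fabrics
  set -e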
00:25:35.566 10:03:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:25:35.566 10:03:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:25:35.566 10:03:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:25:35.566 10:03:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:25:35.566 ************************************
00:25:35.566 START TEST nvmf_discovery_remove_ifc
00:25:35.566 ************************************
00:25:35.566 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:25:35.826 * Looking for test storage...
00:25:35.826 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:25:35.826 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:25:35.826 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version
00:25:35.826 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:25:35.826 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:25:35.826 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:25:35.826 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:25:35.826 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:25:35.826 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-:
00:25:35.826 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1
00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-:
00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2
00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<'
00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2
00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1
00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in
00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1
00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 ))
00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1
00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1
00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1
00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1
00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2
00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2
00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2
00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2
00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0
00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:25:35.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:35.827 --rc genhtml_branch_coverage=1
00:25:35.827 --rc genhtml_function_coverage=1
00:25:35.827 --rc genhtml_legend=1
00:25:35.827 --rc geninfo_all_blocks=1
00:25:35.827 --rc geninfo_unexecuted_blocks=1
00:25:35.827 
00:25:35.827 '
00:25:35.827 [log condensed: the matching LCOV_OPTS=' ... ' assignment and the export 'LCOV=lcov ... ' / LCOV='lcov ... ' pair at common/autotest_common.sh@1724-@1725 repeat the same --rc option block verbatim]
00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
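The cmp_versions walk above reads "lt 1.15 2" as: split both strings on '.', '-' and ':' (IFS=.-:), giving ver1=(1 15) with ver1_l=2 and ver2=(2) with ver2_l=1, then compare element by element as integers; 1 < 2 decides it on the first field, so the function returns 0 and the legacy --rc option names are kept for the older lcov. A compact rendering of that logic (simplified from the trace, not the scripts/common.sh source):

  # lt A B -> success when version A sorts strictly before version B.
  lt() {
      local IFS=.-: i
      local -a a=($1) b=($2)
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1   # equal versions are not "less than"
  }
  lt 1.15 2 && echo "lcov is older than 2: keep the legacy --rc option names"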
10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:35.827 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:25:35.827 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:25:35.828 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:35.828 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:25:35.828 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:25:35.828 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:35.828 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:35.828 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:35.828 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:35.828 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:35.828 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:35.828 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:35.828 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:35.828 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:35.828 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:35.828 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:35.828 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:25:35.828 10:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:42.400 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:42.400 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:25:42.400 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:42.400 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:42.400 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:42.400 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:42.400 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:42.400 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:25:42.400 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:42.400 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:25:42.400 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:25:42.400 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:25:42.400 10:03:51 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:25:42.400 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:25:42.400 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:25:42.400 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:42.400 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:42.400 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:42.400 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:42.400 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:42.400 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:42.400 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:42.400 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:42.400 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:42.400 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:42.400 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:42.400 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:42.400 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:42.400 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:42.400 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:42.400 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:42.400 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:42.400 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:42.400 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:42.400 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:42.400 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:42.401 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:42.401 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:42.401 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:42.401 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:42.401 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:42.401 10:03:51 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:42.401 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:42.401 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:42.401 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:42.401 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:42.401 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:42.401 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:42.401 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:42.401 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:42.401 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:42.401 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:42.401 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:42.401 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:42.401 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:42.401 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:42.401 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:42.401 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:42.401 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:42.401 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:42.401 Found net devices under 0000:af:00.0: cvl_0_0 00:25:42.401 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:42.401 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:42.401 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:42.401 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:42.401 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:42.401 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:42.401 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:42.401 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:42.401 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:42.401 Found net devices under 0000:af:00.1: cvl_0_1 00:25:42.401 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:25:42.401 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:42.401 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:25:42.401 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:42.401 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:42.401 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:42.401 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:42.401 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:42.401 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:42.401 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:42.401 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:42.401 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:42.401 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:42.401 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:42.401 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:42.401 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:42.401 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:42.401 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:42.401 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:42.401 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:42.401 10:03:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:42.661 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:42.661 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:42.661 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:42.661 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:42.661 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:42.661 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:42.661 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:42.661 
10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:42.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:42.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:25:42.661 00:25:42.661 --- 10.0.0.2 ping statistics --- 00:25:42.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:42.661 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:25:42.661 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:42.661 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:42.661 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:25:42.661 00:25:42.661 --- 10.0.0.1 ping statistics --- 00:25:42.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:42.661 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:25:42.661 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:42.661 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:25:42.661 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:42.661 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:42.661 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:42.661 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:42.661 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:42.661 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:42.661 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:42.661 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:25:42.661 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:42.661 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:42.661 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:42.661 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=206560 00:25:42.661 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 206560 00:25:42.661 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:42.661 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 206560 ']' 00:25:42.661 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:42.661 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:42.661 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:42.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:42.661 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:42.661 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:42.921 [2024-12-11 10:03:52.241077] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:25:42.921 [2024-12-11 10:03:52.241128] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:42.921 [2024-12-11 10:03:52.325556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:42.921 [2024-12-11 10:03:52.365254] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:42.921 [2024-12-11 10:03:52.365286] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:42.921 [2024-12-11 10:03:52.365293] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:42.921 [2024-12-11 10:03:52.365299] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:42.921 [2024-12-11 10:03:52.365304] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:42.921 [2024-12-11 10:03:52.365860] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:25:42.921 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:42.921 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:25:42.921 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:42.921 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:42.921 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:42.921 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:42.921 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:25:43.180 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.180 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:43.180 [2024-12-11 10:03:52.509258] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:43.180 [2024-12-11 10:03:52.517415] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:43.180 null0 00:25:43.180 [2024-12-11 10:03:52.549409] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:43.180 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.180 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=206582 00:25:43.180 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 
--wait-for-rpc -L bdev_nvme 00:25:43.180 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 206582 /tmp/host.sock 00:25:43.180 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 206582 ']' 00:25:43.180 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:25:43.180 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:43.180 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:43.180 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:43.180 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:43.180 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:43.180 [2024-12-11 10:03:52.615594] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:25:43.180 [2024-12-11 10:03:52.615634] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid206582 ] 00:25:43.180 [2024-12-11 10:03:52.696204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:43.180 [2024-12-11 10:03:52.735406] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:25:43.439 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:43.439 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:25:43.439 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:43.439 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:25:43.439 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.439 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:43.439 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.439 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:25:43.439 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.439 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:43.439 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.439 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:25:43.439 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.439 10:03:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:44.374 [2024-12-11 10:03:53.926370] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:44.374 [2024-12-11 10:03:53.926392] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:44.374 [2024-12-11 10:03:53.926405] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:44.632 [2024-12-11 10:03:54.014666] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:44.890 [2024-12-11 10:03:54.239769] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:25:44.890 [2024-12-11 10:03:54.240517] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1d26190:1 started. 00:25:44.890 [2024-12-11 10:03:54.241837] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:44.890 [2024-12-11 10:03:54.241878] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:44.890 [2024-12-11 10:03:54.241897] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:44.890 [2024-12-11 10:03:54.241909] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:44.890 [2024-12-11 10:03:54.241926] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:44.890 10:03:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.890 10:03:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:25:44.890 [2024-12-11 10:03:54.245056] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1d26190 was disconnected and freed. delete nvme_qpair. 
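The host side of the test boils down to three RPC calls against the app listening on /tmp/host.sock (started with --wait-for-rpc, which is why framework_start_init must run before anything else). A sketch of the same sequence, assuming rpc_cmd resolves to scripts/rpc.py as in the common test harness; the flags are copied verbatim from the log rather than interpreted:

    RPC="scripts/rpc.py -s /tmp/host.sock"

    $RPC bdev_nvme_set_options -e 1   # option exactly as passed by the script
    $RPC framework_start_init         # app was launched with --wait-for-rpc

    # attach through the discovery service on 10.0.0.2:8009; --wait-for-attach
    # blocks until the discovered subsystem's controller exists (nvme0 above)
    $RPC bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach

The short loss/reconnect timeouts matter later: they are what let the test observe the controller being torn down quickly once the interface disappears.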
00:25:44.890 10:03:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:44.890 10:03:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:44.890 10:03:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:44.890 10:03:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.891 10:03:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:44.891 10:03:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:44.891 10:03:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:44.891 10:03:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.891 10:03:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:25:44.891 10:03:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:25:44.891 10:03:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:25:44.891 10:03:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:25:44.891 10:03:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:44.891 10:03:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:44.891 10:03:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:44.891 10:03:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.891 10:03:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:44.891 10:03:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:44.891 10:03:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:44.891 10:03:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.891 10:03:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:44.891 10:03:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:46.269 10:03:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:46.269 10:03:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:46.269 10:03:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:46.269 10:03:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.269 10:03:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:46.269 10:03:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:46.269 10:03:55 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:46.269 10:03:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.269 10:03:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:46.269 10:03:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:47.206 10:03:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:47.206 10:03:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:47.206 10:03:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:47.206 10:03:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.206 10:03:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:47.206 10:03:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:47.206 10:03:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:47.206 10:03:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.206 10:03:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:47.206 10:03:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:48.143 10:03:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:48.143 10:03:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:48.143 10:03:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:48.143 10:03:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.143 10:03:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:48.143 10:03:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:48.143 10:03:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:48.143 10:03:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.143 10:03:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:48.143 10:03:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:49.081 10:03:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:49.081 10:03:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:49.081 10:03:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:49.081 10:03:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.081 10:03:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:49.081 10:03:58 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:49.081 10:03:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:49.081 10:03:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.081 10:03:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:49.081 10:03:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:50.458 10:03:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:50.458 10:03:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:50.458 10:03:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:50.458 10:03:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.458 10:03:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:50.458 10:03:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:50.458 10:03:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:50.458 10:03:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.458 [2024-12-11 10:03:59.683410] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:25:50.458 [2024-12-11 10:03:59.683442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.458 [2024-12-11 10:03:59.683452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.458 [2024-12-11 10:03:59.683477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.458 [2024-12-11 10:03:59.683484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.458 [2024-12-11 10:03:59.683492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.458 [2024-12-11 10:03:59.683499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.458 [2024-12-11 10:03:59.683506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.458 [2024-12-11 10:03:59.683513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.458 [2024-12-11 10:03:59.683520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.458 [2024-12-11 10:03:59.683526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.458 [2024-12-11 10:03:59.683533] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02950 is same with the state(6) to be set 00:25:50.458 [2024-12-11 10:03:59.693432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d02950 (9): Bad file descriptor 00:25:50.458 10:03:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:50.458 10:03:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:50.458 [2024-12-11 10:03:59.703468] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:50.459 [2024-12-11 10:03:59.703479] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:50.459 [2024-12-11 10:03:59.703485] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:50.459 [2024-12-11 10:03:59.703493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:50.459 [2024-12-11 10:03:59.703510] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:51.394 10:04:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:51.395 10:04:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:51.395 10:04:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:51.395 10:04:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.395 10:04:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:51.395 10:04:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:51.395 10:04:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:51.395 [2024-12-11 10:04:00.730318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:51.395 [2024-12-11 10:04:00.730405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d02950 with addr=10.0.0.2, port=4420 00:25:51.395 [2024-12-11 10:04:00.730438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02950 is same with the state(6) to be set 00:25:51.395 [2024-12-11 10:04:00.730495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d02950 (9): Bad file descriptor 00:25:51.395 [2024-12-11 10:04:00.731462] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:25:51.395 [2024-12-11 10:04:00.731530] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:51.395 [2024-12-11 10:04:00.731553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:51.395 [2024-12-11 10:04:00.731576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:51.395 [2024-12-11 10:04:00.731596] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
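The repeated bdev_get_bdevs | jq -r '.[].name' | sort | xargs pipelines running through this stretch are the test's wait loop: once per second it asks the host app for its bdev list and compares the flattened result against the expected value (nvme0n1 while connected, the empty string after the interface drop). A simplified reconstruction of that helper pair; the real wait_for_bdev in discovery_remove_ifc.sh presumably also bounds the loop:

    get_bdev_list() {
        # flatten the bdev names into one sorted, space-separated line
        scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
            | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }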
00:25:51.395 [2024-12-11 10:04:00.731612] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:51.395 [2024-12-11 10:04:00.731626] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:51.395 [2024-12-11 10:04:00.731647] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:51.395 [2024-12-11 10:04:00.731661] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:51.395 10:04:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.395 10:04:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:51.395 10:04:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:52.330 [2024-12-11 10:04:01.734177] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:52.330 [2024-12-11 10:04:01.734198] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:52.330 [2024-12-11 10:04:01.734208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:52.330 [2024-12-11 10:04:01.734214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:52.330 [2024-12-11 10:04:01.734226] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:25:52.330 [2024-12-11 10:04:01.734232] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:52.330 [2024-12-11 10:04:01.734237] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:52.330 [2024-12-11 10:04:01.734241] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
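The errno 110 (ETIMEDOUT) connect failures and the "Resetting controller failed" entries are the expected outcome of the timeouts chosen at discovery start: with --reconnect-delay-sec 1 the host retries roughly once per second, and once --ctrlr-loss-timeout-sec 2 expires without a successful reconnect the controller is given up, which is what ultimately drains the bdev list. While the link is down, the reconnect cycle could be watched from outside with a standard RPC (this is not part of the test script, only an observation step):

    # hypothetical observation step; bdev_nvme_get_controllers reports each
    # controller's state while the reconnect/reset cycle is running
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq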
00:25:52.330 [2024-12-11 10:04:01.734263] bdev_nvme.c:7267:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:25:52.330 [2024-12-11 10:04:01.734284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:52.331 [2024-12-11 10:04:01.734297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.331 [2024-12-11 10:04:01.734307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:52.331 [2024-12-11 10:04:01.734314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.331 [2024-12-11 10:04:01.734320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:52.331 [2024-12-11 10:04:01.734327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.331 [2024-12-11 10:04:01.734334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:52.331 [2024-12-11 10:04:01.734340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.331 [2024-12-11 10:04:01.734347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:52.331 [2024-12-11 10:04:01.734353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.331 [2024-12-11 10:04:01.734359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
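For reference, the fault being injected here is nothing more than an address removal and link-down inside the target namespace, reversed a few seconds later so discovery can re-attach; condensed from the script lines earlier in the log:

    # pull the target out from under the connected host
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down

    # ...after the bdev list drains, restore it
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up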
00:25:52.331 [2024-12-11 10:04:01.734659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf1c60 (9): Bad file descriptor 00:25:52.331 [2024-12-11 10:04:01.735669] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:25:52.331 [2024-12-11 10:04:01.735680] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:25:52.331 10:04:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:52.331 10:04:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:52.331 10:04:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:52.331 10:04:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.331 10:04:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:52.331 10:04:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:52.331 10:04:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:52.331 10:04:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.331 10:04:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:25:52.331 10:04:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:52.331 10:04:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:52.589 10:04:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:25:52.589 10:04:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:52.589 10:04:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:52.589 10:04:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:52.589 10:04:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.589 10:04:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:52.589 10:04:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:52.589 10:04:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:52.589 10:04:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.589 10:04:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:52.589 10:04:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:53.525 10:04:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:53.525 10:04:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:53.525 10:04:03 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:53.525 10:04:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.525 10:04:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:53.525 10:04:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:53.525 10:04:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:53.525 10:04:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.525 10:04:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:53.525 10:04:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:54.461 [2024-12-11 10:04:03.750754] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:54.461 [2024-12-11 10:04:03.750770] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:54.461 [2024-12-11 10:04:03.750782] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:54.461 [2024-12-11 10:04:03.877160] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:25:54.720 [2024-12-11 10:04:04.053097] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:25:54.720 [2024-12-11 10:04:04.053693] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1cfd170:1 started. 00:25:54.720 [2024-12-11 10:04:04.054693] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:54.720 [2024-12-11 10:04:04.054723] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:54.720 [2024-12-11 10:04:04.054741] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:54.720 [2024-12-11 10:04:04.054755] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:25:54.720 [2024-12-11 10:04:04.054762] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:54.720 [2024-12-11 10:04:04.060149] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1cfd170 was disconnected and freed. delete nvme_qpair. 
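The oddly escaped comparisons such as [[ nvme1n1 != \n\v\m\e\1\n\1 ]] are an xtrace artifact, not a bug: inside [[ ]] a quoted right-hand side is matched literally instead of as a glob, and bash -x renders that by backslash-escaping every character. The underlying check is just a plain string comparison, roughly:

    # what the xtrace lines correspond to (helper shape assumed, see the
    # wait loop sketch above); the backslashes only appear in trace output
    expected=nvme1n1
    [[ "$(get_bdev_list)" != "$expected" ]] && sleep 1

Once the link comes back, discovery re-attaches under a new controller instance ([nqn.2016-06.io.spdk:cnode0, 2]) and the namespace reappears as nvme1n1, which is exactly what this final comparison accepts before the test moves on to teardown.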
00:25:54.720 10:04:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:54.720 10:04:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:54.720 10:04:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:54.720 10:04:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.720 10:04:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:54.720 10:04:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:54.720 10:04:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:54.720 10:04:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.720 10:04:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:25:54.720 10:04:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:25:54.720 10:04:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 206582 00:25:54.720 10:04:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 206582 ']' 00:25:54.720 10:04:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 206582 00:25:54.720 10:04:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:25:54.720 10:04:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:54.720 10:04:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 206582 00:25:54.720 10:04:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:54.720 10:04:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:54.720 10:04:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 206582' 00:25:54.720 killing process with pid 206582 00:25:54.720 10:04:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 206582 00:25:54.720 10:04:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 206582 00:25:54.979 10:04:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:25:54.979 10:04:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:54.979 10:04:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:25:54.979 10:04:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:54.979 10:04:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:25:54.979 10:04:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:54.979 10:04:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:54.979 rmmod nvme_tcp 00:25:54.979 rmmod nvme_fabrics 00:25:54.979 rmmod nvme_keyring 00:25:54.979 10:04:04 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:54.979 10:04:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:25:54.979 10:04:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:25:54.979 10:04:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 206560 ']' 00:25:54.979 10:04:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 206560 00:25:54.979 10:04:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 206560 ']' 00:25:54.979 10:04:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 206560 00:25:54.979 10:04:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:25:54.979 10:04:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:54.979 10:04:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 206560 00:25:54.979 10:04:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:54.979 10:04:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:54.979 10:04:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 206560' 00:25:54.979 killing process with pid 206560 00:25:54.979 10:04:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 206560 00:25:54.979 10:04:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 206560 00:25:55.238 10:04:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:55.238 10:04:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:55.238 10:04:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:55.238 10:04:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:25:55.238 10:04:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:25:55.238 10:04:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:55.238 10:04:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:25:55.238 10:04:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:55.238 10:04:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:55.238 10:04:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:55.238 10:04:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:55.238 10:04:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:57.142 10:04:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:57.142 00:25:57.142 real 0m21.574s 00:25:57.142 user 0m25.369s 00:25:57.142 sys 0m6.373s 00:25:57.143 10:04:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:25:57.143 10:04:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:57.143 ************************************ 00:25:57.143 END TEST nvmf_discovery_remove_ifc 00:25:57.143 ************************************ 00:25:57.402 10:04:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:57.402 10:04:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:57.402 10:04:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:57.402 10:04:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.402 ************************************ 00:25:57.402 START TEST nvmf_identify_kernel_target 00:25:57.402 ************************************ 00:25:57.402 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:57.402 * Looking for test storage... 00:25:57.402 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:57.402 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:57.402 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:25:57.402 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:57.402 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:57.402 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:57.402 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:57.402 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:57.402 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:25:57.402 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:25:57.402 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:25:57.402 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:25:57.402 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:25:57.402 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:25:57.402 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:25:57.402 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:57.402 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:25:57.402 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:25:57.402 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:57.402 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:57.402 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:25:57.402 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:25:57.402 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:57.402 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:25:57.402 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:25:57.402 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:25:57.402 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:25:57.402 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:57.402 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:25:57.402 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:25:57.402 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:57.402 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:57.402 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:25:57.402 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:57.402 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:57.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.402 --rc genhtml_branch_coverage=1 00:25:57.402 --rc genhtml_function_coverage=1 00:25:57.402 --rc genhtml_legend=1 00:25:57.402 --rc geninfo_all_blocks=1 00:25:57.402 --rc geninfo_unexecuted_blocks=1 00:25:57.402 00:25:57.402 ' 00:25:57.402 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:57.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.402 --rc genhtml_branch_coverage=1 00:25:57.403 --rc genhtml_function_coverage=1 00:25:57.403 --rc genhtml_legend=1 00:25:57.403 --rc geninfo_all_blocks=1 00:25:57.403 --rc geninfo_unexecuted_blocks=1 00:25:57.403 00:25:57.403 ' 00:25:57.403 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:57.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.403 --rc genhtml_branch_coverage=1 00:25:57.403 --rc genhtml_function_coverage=1 00:25:57.403 --rc genhtml_legend=1 00:25:57.403 --rc geninfo_all_blocks=1 00:25:57.403 --rc geninfo_unexecuted_blocks=1 00:25:57.403 00:25:57.403 ' 00:25:57.403 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:57.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.403 --rc genhtml_branch_coverage=1 00:25:57.403 --rc genhtml_function_coverage=1 00:25:57.403 --rc genhtml_legend=1 00:25:57.403 --rc geninfo_all_blocks=1 00:25:57.403 --rc geninfo_unexecuted_blocks=1 00:25:57.403 00:25:57.403 ' 00:25:57.403 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:57.403 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:25:57.403 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:57.403 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:57.403 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:57.403 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:57.403 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:57.403 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:57.403 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:57.403 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:57.403 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:57.403 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:57.403 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:25:57.403 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:25:57.403 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:57.403 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:57.403 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:57.403 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:57.403 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:57.403 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:25:57.403 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:57.403 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:57.403 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:57.403 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.403 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.403 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.663 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:25:57.663 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.663 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:25:57.663 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:57.663 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:57.663 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:57.663 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:57.663 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:57.663 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:25:57.663 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:57.663 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:57.663 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:57.663 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:57.663 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:25:57.663 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:57.663 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:57.663 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:57.663 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:57.663 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:57.663 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:57.663 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:57.663 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:57.663 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:57.663 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:57.663 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:25:57.663 10:04:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:04.342 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:26:04.343 10:04:13 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:04.343 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:04.343 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:04.343 Found net devices under 0000:af:00.0: cvl_0_0 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:04.343 Found net devices under 0000:af:00.1: cvl_0_1 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:04.343 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:04.343 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:26:04.343 00:26:04.343 --- 10.0.0.2 ping statistics --- 00:26:04.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:04.343 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:04.343 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:04.343 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:26:04.343 00:26:04.343 --- 10.0.0.1 ping statistics --- 00:26:04.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:04.343 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:26:04.343 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:04.344 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:26:04.344 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:04.344 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:04.344 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:04.344 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:04.344 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:04.344 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:04.344 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:04.344 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:04.344 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:04.344 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:26:04.344 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:04.344 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:04.344 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.344 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.344 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:04.344 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.344 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:04.344 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:04.344 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:04.344 10:04:13 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:04.344 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:04.344 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:04.344 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:26:04.344 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:04.344 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:04.344 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:04.344 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:26:04.344 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:26:04.344 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:26:04.344 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:04.344 10:04:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:07.634 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:26:07.634 Waiting for block devices as requested 00:26:07.634 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:26:07.634 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:07.634 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:07.893 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:07.893 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:07.893 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:08.152 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:08.152 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:08.152 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:08.411 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:08.411 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:08.411 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:08.411 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:08.670 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:08.670 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:08.670 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:08.930 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:08.930 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:08.930 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:08.930 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:26:08.930 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:26:08.930 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:08.930 10:04:18 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:26:08.930 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:26:08.930 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:08.930 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:08.930 No valid GPT data, bailing 00:26:08.930 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:08.930 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:26:08.930 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:26:08.930 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:26:08.930 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:08.930 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:26:08.930 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:26:08.930 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:26:08.930 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:26:08.930 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:26:08.930 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:26:08.930 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:26:08.930 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:26:08.930 No valid GPT data, bailing 00:26:08.930 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:26:08.930 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:26:08.930 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:26:08.930 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:26:08.930 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:08.930 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n2 ]] 00:26:08.930 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n2 00:26:08.930 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:26:08.930 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:26:08.930 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ host-managed != none ]] 00:26:08.930 10:04:18 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # continue 00:26:08.930 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:26:08.930 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:08.930 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:08.930 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:08.930 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:08.930 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:26:08.930 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:26:08.930 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:26:08.930 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:26:08.930 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:26:08.930 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:26:08.930 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:26:08.930 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:09.190 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:26:09.190 00:26:09.190 Discovery Log Number of Records 2, Generation counter 2 00:26:09.190 =====Discovery Log Entry 0====== 00:26:09.190 trtype: tcp 00:26:09.190 adrfam: ipv4 00:26:09.190 subtype: current discovery subsystem 00:26:09.190 treq: not specified, sq flow control disable supported 00:26:09.190 portid: 1 00:26:09.190 trsvcid: 4420 00:26:09.190 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:09.190 traddr: 10.0.0.1 00:26:09.190 eflags: none 00:26:09.190 sectype: none 00:26:09.190 =====Discovery Log Entry 1====== 00:26:09.190 trtype: tcp 00:26:09.190 adrfam: ipv4 00:26:09.190 subtype: nvme subsystem 00:26:09.190 treq: not specified, sq flow control disable supported 00:26:09.190 portid: 1 00:26:09.190 trsvcid: 4420 00:26:09.190 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:09.190 traddr: 10.0.0.1 00:26:09.190 eflags: none 00:26:09.190 sectype: none 00:26:09.190 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:09.190 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:09.190 ===================================================== 00:26:09.190 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:09.190 ===================================================== 00:26:09.190 Controller Capabilities/Features 00:26:09.190 ================================ 00:26:09.190 Vendor 
ID: 0000 00:26:09.190 Subsystem Vendor ID: 0000 00:26:09.190 Serial Number: 5f34d9ea87297bde40b0 00:26:09.190 Model Number: Linux 00:26:09.190 Firmware Version: 6.8.9-20 00:26:09.190 Recommended Arb Burst: 0 00:26:09.190 IEEE OUI Identifier: 00 00 00 00:26:09.190 Multi-path I/O 00:26:09.190 May have multiple subsystem ports: No 00:26:09.190 May have multiple controllers: No 00:26:09.190 Associated with SR-IOV VF: No 00:26:09.190 Max Data Transfer Size: Unlimited 00:26:09.190 Max Number of Namespaces: 0 00:26:09.190 Max Number of I/O Queues: 1024 00:26:09.190 NVMe Specification Version (VS): 1.3 00:26:09.190 NVMe Specification Version (Identify): 1.3 00:26:09.190 Maximum Queue Entries: 1024 00:26:09.190 Contiguous Queues Required: No 00:26:09.190 Arbitration Mechanisms Supported 00:26:09.190 Weighted Round Robin: Not Supported 00:26:09.190 Vendor Specific: Not Supported 00:26:09.190 Reset Timeout: 7500 ms 00:26:09.190 Doorbell Stride: 4 bytes 00:26:09.190 NVM Subsystem Reset: Not Supported 00:26:09.190 Command Sets Supported 00:26:09.190 NVM Command Set: Supported 00:26:09.190 Boot Partition: Not Supported 00:26:09.190 Memory Page Size Minimum: 4096 bytes 00:26:09.190 Memory Page Size Maximum: 4096 bytes 00:26:09.190 Persistent Memory Region: Not Supported 00:26:09.190 Optional Asynchronous Events Supported 00:26:09.190 Namespace Attribute Notices: Not Supported 00:26:09.190 Firmware Activation Notices: Not Supported 00:26:09.190 ANA Change Notices: Not Supported 00:26:09.190 PLE Aggregate Log Change Notices: Not Supported 00:26:09.190 LBA Status Info Alert Notices: Not Supported 00:26:09.190 EGE Aggregate Log Change Notices: Not Supported 00:26:09.190 Normal NVM Subsystem Shutdown event: Not Supported 00:26:09.190 Zone Descriptor Change Notices: Not Supported 00:26:09.190 Discovery Log Change Notices: Supported 00:26:09.190 Controller Attributes 00:26:09.190 128-bit Host Identifier: Not Supported 00:26:09.190 Non-Operational Permissive Mode: Not Supported 00:26:09.190 NVM Sets: Not Supported 00:26:09.190 Read Recovery Levels: Not Supported 00:26:09.190 Endurance Groups: Not Supported 00:26:09.190 Predictable Latency Mode: Not Supported 00:26:09.190 Traffic Based Keep ALive: Not Supported 00:26:09.190 Namespace Granularity: Not Supported 00:26:09.190 SQ Associations: Not Supported 00:26:09.190 UUID List: Not Supported 00:26:09.190 Multi-Domain Subsystem: Not Supported 00:26:09.190 Fixed Capacity Management: Not Supported 00:26:09.190 Variable Capacity Management: Not Supported 00:26:09.190 Delete Endurance Group: Not Supported 00:26:09.190 Delete NVM Set: Not Supported 00:26:09.190 Extended LBA Formats Supported: Not Supported 00:26:09.190 Flexible Data Placement Supported: Not Supported 00:26:09.190 00:26:09.190 Controller Memory Buffer Support 00:26:09.190 ================================ 00:26:09.190 Supported: No 00:26:09.190 00:26:09.190 Persistent Memory Region Support 00:26:09.190 ================================ 00:26:09.190 Supported: No 00:26:09.190 00:26:09.190 Admin Command Set Attributes 00:26:09.190 ============================ 00:26:09.190 Security Send/Receive: Not Supported 00:26:09.190 Format NVM: Not Supported 00:26:09.190 Firmware Activate/Download: Not Supported 00:26:09.190 Namespace Management: Not Supported 00:26:09.190 Device Self-Test: Not Supported 00:26:09.190 Directives: Not Supported 00:26:09.190 NVMe-MI: Not Supported 00:26:09.190 Virtualization Management: Not Supported 00:26:09.190 Doorbell Buffer Config: Not Supported 00:26:09.190 Get LBA Status Capability: 
Not Supported 00:26:09.190 Command & Feature Lockdown Capability: Not Supported 00:26:09.190 Abort Command Limit: 1 00:26:09.190 Async Event Request Limit: 1 00:26:09.190 Number of Firmware Slots: N/A 00:26:09.190 Firmware Slot 1 Read-Only: N/A 00:26:09.190 Firmware Activation Without Reset: N/A 00:26:09.190 Multiple Update Detection Support: N/A 00:26:09.190 Firmware Update Granularity: No Information Provided 00:26:09.190 Per-Namespace SMART Log: No 00:26:09.190 Asymmetric Namespace Access Log Page: Not Supported 00:26:09.190 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:09.190 Command Effects Log Page: Not Supported 00:26:09.190 Get Log Page Extended Data: Supported 00:26:09.190 Telemetry Log Pages: Not Supported 00:26:09.190 Persistent Event Log Pages: Not Supported 00:26:09.190 Supported Log Pages Log Page: May Support 00:26:09.190 Commands Supported & Effects Log Page: Not Supported 00:26:09.190 Feature Identifiers & Effects Log Page:May Support 00:26:09.190 NVMe-MI Commands & Effects Log Page: May Support 00:26:09.191 Data Area 4 for Telemetry Log: Not Supported 00:26:09.191 Error Log Page Entries Supported: 1 00:26:09.191 Keep Alive: Not Supported 00:26:09.191 00:26:09.191 NVM Command Set Attributes 00:26:09.191 ========================== 00:26:09.191 Submission Queue Entry Size 00:26:09.191 Max: 1 00:26:09.191 Min: 1 00:26:09.191 Completion Queue Entry Size 00:26:09.191 Max: 1 00:26:09.191 Min: 1 00:26:09.191 Number of Namespaces: 0 00:26:09.191 Compare Command: Not Supported 00:26:09.191 Write Uncorrectable Command: Not Supported 00:26:09.191 Dataset Management Command: Not Supported 00:26:09.191 Write Zeroes Command: Not Supported 00:26:09.191 Set Features Save Field: Not Supported 00:26:09.191 Reservations: Not Supported 00:26:09.191 Timestamp: Not Supported 00:26:09.191 Copy: Not Supported 00:26:09.191 Volatile Write Cache: Not Present 00:26:09.191 Atomic Write Unit (Normal): 1 00:26:09.191 Atomic Write Unit (PFail): 1 00:26:09.191 Atomic Compare & Write Unit: 1 00:26:09.191 Fused Compare & Write: Not Supported 00:26:09.191 Scatter-Gather List 00:26:09.191 SGL Command Set: Supported 00:26:09.191 SGL Keyed: Not Supported 00:26:09.191 SGL Bit Bucket Descriptor: Not Supported 00:26:09.191 SGL Metadata Pointer: Not Supported 00:26:09.191 Oversized SGL: Not Supported 00:26:09.191 SGL Metadata Address: Not Supported 00:26:09.191 SGL Offset: Supported 00:26:09.191 Transport SGL Data Block: Not Supported 00:26:09.191 Replay Protected Memory Block: Not Supported 00:26:09.191 00:26:09.191 Firmware Slot Information 00:26:09.191 ========================= 00:26:09.191 Active slot: 0 00:26:09.191 00:26:09.191 00:26:09.191 Error Log 00:26:09.191 ========= 00:26:09.191 00:26:09.191 Active Namespaces 00:26:09.191 ================= 00:26:09.191 Discovery Log Page 00:26:09.191 ================== 00:26:09.191 Generation Counter: 2 00:26:09.191 Number of Records: 2 00:26:09.191 Record Format: 0 00:26:09.191 00:26:09.191 Discovery Log Entry 0 00:26:09.191 ---------------------- 00:26:09.191 Transport Type: 3 (TCP) 00:26:09.191 Address Family: 1 (IPv4) 00:26:09.191 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:09.191 Entry Flags: 00:26:09.191 Duplicate Returned Information: 0 00:26:09.191 Explicit Persistent Connection Support for Discovery: 0 00:26:09.191 Transport Requirements: 00:26:09.191 Secure Channel: Not Specified 00:26:09.191 Port ID: 1 (0x0001) 00:26:09.191 Controller ID: 65535 (0xffff) 00:26:09.191 Admin Max SQ Size: 32 00:26:09.191 Transport Service Identifier: 4420 
00:26:09.191 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:09.191 Transport Address: 10.0.0.1 00:26:09.191 Discovery Log Entry 1 00:26:09.191 ---------------------- 00:26:09.191 Transport Type: 3 (TCP) 00:26:09.191 Address Family: 1 (IPv4) 00:26:09.191 Subsystem Type: 2 (NVM Subsystem) 00:26:09.191 Entry Flags: 00:26:09.191 Duplicate Returned Information: 0 00:26:09.191 Explicit Persistent Connection Support for Discovery: 0 00:26:09.191 Transport Requirements: 00:26:09.191 Secure Channel: Not Specified 00:26:09.191 Port ID: 1 (0x0001) 00:26:09.191 Controller ID: 65535 (0xffff) 00:26:09.191 Admin Max SQ Size: 32 00:26:09.191 Transport Service Identifier: 4420 00:26:09.191 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:09.191 Transport Address: 10.0.0.1 00:26:09.191 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:09.451 get_feature(0x01) failed 00:26:09.451 get_feature(0x02) failed 00:26:09.451 get_feature(0x04) failed 00:26:09.451 ===================================================== 00:26:09.451 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:09.451 ===================================================== 00:26:09.451 Controller Capabilities/Features 00:26:09.451 ================================ 00:26:09.451 Vendor ID: 0000 00:26:09.451 Subsystem Vendor ID: 0000 00:26:09.451 Serial Number: d113d71e586decac6961 00:26:09.451 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:09.451 Firmware Version: 6.8.9-20 00:26:09.451 Recommended Arb Burst: 6 00:26:09.451 IEEE OUI Identifier: 00 00 00 00:26:09.451 Multi-path I/O 00:26:09.451 May have multiple subsystem ports: Yes 00:26:09.451 May have multiple controllers: Yes 00:26:09.451 Associated with SR-IOV VF: No 00:26:09.451 Max Data Transfer Size: Unlimited 00:26:09.451 Max Number of Namespaces: 1024 00:26:09.451 Max Number of I/O Queues: 128 00:26:09.451 NVMe Specification Version (VS): 1.3 00:26:09.451 NVMe Specification Version (Identify): 1.3 00:26:09.451 Maximum Queue Entries: 1024 00:26:09.451 Contiguous Queues Required: No 00:26:09.451 Arbitration Mechanisms Supported 00:26:09.451 Weighted Round Robin: Not Supported 00:26:09.451 Vendor Specific: Not Supported 00:26:09.451 Reset Timeout: 7500 ms 00:26:09.451 Doorbell Stride: 4 bytes 00:26:09.451 NVM Subsystem Reset: Not Supported 00:26:09.451 Command Sets Supported 00:26:09.451 NVM Command Set: Supported 00:26:09.451 Boot Partition: Not Supported 00:26:09.451 Memory Page Size Minimum: 4096 bytes 00:26:09.451 Memory Page Size Maximum: 4096 bytes 00:26:09.451 Persistent Memory Region: Not Supported 00:26:09.451 Optional Asynchronous Events Supported 00:26:09.451 Namespace Attribute Notices: Supported 00:26:09.451 Firmware Activation Notices: Not Supported 00:26:09.451 ANA Change Notices: Supported 00:26:09.451 PLE Aggregate Log Change Notices: Not Supported 00:26:09.451 LBA Status Info Alert Notices: Not Supported 00:26:09.451 EGE Aggregate Log Change Notices: Not Supported 00:26:09.451 Normal NVM Subsystem Shutdown event: Not Supported 00:26:09.451 Zone Descriptor Change Notices: Not Supported 00:26:09.451 Discovery Log Change Notices: Not Supported 00:26:09.451 Controller Attributes 00:26:09.451 128-bit Host Identifier: Supported 00:26:09.451 Non-Operational Permissive Mode: Not Supported 
00:26:09.451 NVM Sets: Not Supported 00:26:09.451 Read Recovery Levels: Not Supported 00:26:09.451 Endurance Groups: Not Supported 00:26:09.451 Predictable Latency Mode: Not Supported 00:26:09.451 Traffic Based Keep ALive: Supported 00:26:09.451 Namespace Granularity: Not Supported 00:26:09.451 SQ Associations: Not Supported 00:26:09.451 UUID List: Not Supported 00:26:09.451 Multi-Domain Subsystem: Not Supported 00:26:09.451 Fixed Capacity Management: Not Supported 00:26:09.451 Variable Capacity Management: Not Supported 00:26:09.451 Delete Endurance Group: Not Supported 00:26:09.451 Delete NVM Set: Not Supported 00:26:09.451 Extended LBA Formats Supported: Not Supported 00:26:09.451 Flexible Data Placement Supported: Not Supported 00:26:09.451 00:26:09.451 Controller Memory Buffer Support 00:26:09.451 ================================ 00:26:09.451 Supported: No 00:26:09.451 00:26:09.451 Persistent Memory Region Support 00:26:09.451 ================================ 00:26:09.451 Supported: No 00:26:09.451 00:26:09.451 Admin Command Set Attributes 00:26:09.451 ============================ 00:26:09.451 Security Send/Receive: Not Supported 00:26:09.451 Format NVM: Not Supported 00:26:09.451 Firmware Activate/Download: Not Supported 00:26:09.451 Namespace Management: Not Supported 00:26:09.451 Device Self-Test: Not Supported 00:26:09.451 Directives: Not Supported 00:26:09.451 NVMe-MI: Not Supported 00:26:09.451 Virtualization Management: Not Supported 00:26:09.451 Doorbell Buffer Config: Not Supported 00:26:09.451 Get LBA Status Capability: Not Supported 00:26:09.451 Command & Feature Lockdown Capability: Not Supported 00:26:09.451 Abort Command Limit: 4 00:26:09.451 Async Event Request Limit: 4 00:26:09.451 Number of Firmware Slots: N/A 00:26:09.451 Firmware Slot 1 Read-Only: N/A 00:26:09.451 Firmware Activation Without Reset: N/A 00:26:09.451 Multiple Update Detection Support: N/A 00:26:09.451 Firmware Update Granularity: No Information Provided 00:26:09.451 Per-Namespace SMART Log: Yes 00:26:09.451 Asymmetric Namespace Access Log Page: Supported 00:26:09.451 ANA Transition Time : 10 sec 00:26:09.451 00:26:09.451 Asymmetric Namespace Access Capabilities 00:26:09.451 ANA Optimized State : Supported 00:26:09.451 ANA Non-Optimized State : Supported 00:26:09.451 ANA Inaccessible State : Supported 00:26:09.451 ANA Persistent Loss State : Supported 00:26:09.451 ANA Change State : Supported 00:26:09.451 ANAGRPID is not changed : No 00:26:09.452 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:09.452 00:26:09.452 ANA Group Identifier Maximum : 128 00:26:09.452 Number of ANA Group Identifiers : 128 00:26:09.452 Max Number of Allowed Namespaces : 1024 00:26:09.452 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:09.452 Command Effects Log Page: Supported 00:26:09.452 Get Log Page Extended Data: Supported 00:26:09.452 Telemetry Log Pages: Not Supported 00:26:09.452 Persistent Event Log Pages: Not Supported 00:26:09.452 Supported Log Pages Log Page: May Support 00:26:09.452 Commands Supported & Effects Log Page: Not Supported 00:26:09.452 Feature Identifiers & Effects Log Page:May Support 00:26:09.452 NVMe-MI Commands & Effects Log Page: May Support 00:26:09.452 Data Area 4 for Telemetry Log: Not Supported 00:26:09.452 Error Log Page Entries Supported: 128 00:26:09.452 Keep Alive: Supported 00:26:09.452 Keep Alive Granularity: 1000 ms 00:26:09.452 00:26:09.452 NVM Command Set Attributes 00:26:09.452 ========================== 00:26:09.452 Submission Queue Entry Size 00:26:09.452 Max: 64 
00:26:09.452 Min: 64 00:26:09.452 Completion Queue Entry Size 00:26:09.452 Max: 16 00:26:09.452 Min: 16 00:26:09.452 Number of Namespaces: 1024 00:26:09.452 Compare Command: Not Supported 00:26:09.452 Write Uncorrectable Command: Not Supported 00:26:09.452 Dataset Management Command: Supported 00:26:09.452 Write Zeroes Command: Supported 00:26:09.452 Set Features Save Field: Not Supported 00:26:09.452 Reservations: Not Supported 00:26:09.452 Timestamp: Not Supported 00:26:09.452 Copy: Not Supported 00:26:09.452 Volatile Write Cache: Present 00:26:09.452 Atomic Write Unit (Normal): 1 00:26:09.452 Atomic Write Unit (PFail): 1 00:26:09.452 Atomic Compare & Write Unit: 1 00:26:09.452 Fused Compare & Write: Not Supported 00:26:09.452 Scatter-Gather List 00:26:09.452 SGL Command Set: Supported 00:26:09.452 SGL Keyed: Not Supported 00:26:09.452 SGL Bit Bucket Descriptor: Not Supported 00:26:09.452 SGL Metadata Pointer: Not Supported 00:26:09.452 Oversized SGL: Not Supported 00:26:09.452 SGL Metadata Address: Not Supported 00:26:09.452 SGL Offset: Supported 00:26:09.452 Transport SGL Data Block: Not Supported 00:26:09.452 Replay Protected Memory Block: Not Supported 00:26:09.452 00:26:09.452 Firmware Slot Information 00:26:09.452 ========================= 00:26:09.452 Active slot: 0 00:26:09.452 00:26:09.452 Asymmetric Namespace Access 00:26:09.452 =========================== 00:26:09.452 Change Count : 0 00:26:09.452 Number of ANA Group Descriptors : 1 00:26:09.452 ANA Group Descriptor : 0 00:26:09.452 ANA Group ID : 1 00:26:09.452 Number of NSID Values : 1 00:26:09.452 Change Count : 0 00:26:09.452 ANA State : 1 00:26:09.452 Namespace Identifier : 1 00:26:09.452 00:26:09.452 Commands Supported and Effects 00:26:09.452 ============================== 00:26:09.452 Admin Commands 00:26:09.452 -------------- 00:26:09.452 Get Log Page (02h): Supported 00:26:09.452 Identify (06h): Supported 00:26:09.452 Abort (08h): Supported 00:26:09.452 Set Features (09h): Supported 00:26:09.452 Get Features (0Ah): Supported 00:26:09.452 Asynchronous Event Request (0Ch): Supported 00:26:09.452 Keep Alive (18h): Supported 00:26:09.452 I/O Commands 00:26:09.452 ------------ 00:26:09.452 Flush (00h): Supported 00:26:09.452 Write (01h): Supported LBA-Change 00:26:09.452 Read (02h): Supported 00:26:09.452 Write Zeroes (08h): Supported LBA-Change 00:26:09.452 Dataset Management (09h): Supported 00:26:09.452 00:26:09.452 Error Log 00:26:09.452 ========= 00:26:09.452 Entry: 0 00:26:09.452 Error Count: 0x3 00:26:09.452 Submission Queue Id: 0x0 00:26:09.452 Command Id: 0x5 00:26:09.452 Phase Bit: 0 00:26:09.452 Status Code: 0x2 00:26:09.452 Status Code Type: 0x0 00:26:09.452 Do Not Retry: 1 00:26:09.452 Error Location: 0x28 00:26:09.452 LBA: 0x0 00:26:09.452 Namespace: 0x0 00:26:09.452 Vendor Log Page: 0x0 00:26:09.452 ----------- 00:26:09.452 Entry: 1 00:26:09.452 Error Count: 0x2 00:26:09.452 Submission Queue Id: 0x0 00:26:09.452 Command Id: 0x5 00:26:09.452 Phase Bit: 0 00:26:09.452 Status Code: 0x2 00:26:09.452 Status Code Type: 0x0 00:26:09.452 Do Not Retry: 1 00:26:09.452 Error Location: 0x28 00:26:09.452 LBA: 0x0 00:26:09.452 Namespace: 0x0 00:26:09.452 Vendor Log Page: 0x0 00:26:09.452 ----------- 00:26:09.452 Entry: 2 00:26:09.452 Error Count: 0x1 00:26:09.452 Submission Queue Id: 0x0 00:26:09.452 Command Id: 0x4 00:26:09.452 Phase Bit: 0 00:26:09.452 Status Code: 0x2 00:26:09.452 Status Code Type: 0x0 00:26:09.452 Do Not Retry: 1 00:26:09.452 Error Location: 0x28 00:26:09.452 LBA: 0x0 00:26:09.452 Namespace: 0x0 
00:26:09.452 Vendor Log Page: 0x0 00:26:09.452 00:26:09.452 Number of Queues 00:26:09.452 ================ 00:26:09.452 Number of I/O Submission Queues: 128 00:26:09.452 Number of I/O Completion Queues: 128 00:26:09.452 00:26:09.452 ZNS Specific Controller Data 00:26:09.452 ============================ 00:26:09.452 Zone Append Size Limit: 0 00:26:09.452 00:26:09.452 00:26:09.452 Active Namespaces 00:26:09.452 ================= 00:26:09.452 get_feature(0x05) failed 00:26:09.452 Namespace ID:1 00:26:09.452 Command Set Identifier: NVM (00h) 00:26:09.452 Deallocate: Supported 00:26:09.452 Deallocated/Unwritten Error: Not Supported 00:26:09.452 Deallocated Read Value: Unknown 00:26:09.452 Deallocate in Write Zeroes: Not Supported 00:26:09.452 Deallocated Guard Field: 0xFFFF 00:26:09.452 Flush: Supported 00:26:09.452 Reservation: Not Supported 00:26:09.452 Namespace Sharing Capabilities: Multiple Controllers 00:26:09.452 Size (in LBAs): 4194304 (2GiB) 00:26:09.452 Capacity (in LBAs): 4194304 (2GiB) 00:26:09.452 Utilization (in LBAs): 4194304 (2GiB) 00:26:09.452 UUID: 1c97c068-1509-4ae2-a430-454de3b0a279 00:26:09.452 Thin Provisioning: Not Supported 00:26:09.452 Per-NS Atomic Units: Yes 00:26:09.452 Atomic Boundary Size (Normal): 0 00:26:09.452 Atomic Boundary Size (PFail): 0 00:26:09.452 Atomic Boundary Offset: 0 00:26:09.452 NGUID/EUI64 Never Reused: No 00:26:09.452 ANA group ID: 1 00:26:09.452 Namespace Write Protected: No 00:26:09.452 Number of LBA Formats: 1 00:26:09.452 Current LBA Format: LBA Format #00 00:26:09.452 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:09.452 00:26:09.452 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:09.452 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:09.452 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:26:09.452 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:09.452 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:26:09.452 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:09.452 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:09.452 rmmod nvme_tcp 00:26:09.452 rmmod nvme_fabrics 00:26:09.452 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:09.452 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:26:09.452 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:26:09.452 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:26:09.452 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:09.452 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:09.452 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:09.452 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:26:09.452 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:26:09.452 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
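[editor's note] The identify run above is driven entirely by the -r transport-ID string: a space-separated list of key:value pairs (trtype, adrfam, traddr, trsvcid, subnqn). The get_feature(0x01/0x02/0x04/0x05) failed lines are the tool probing optional features the Linux kernel target does not implement. A minimal sketch of the equivalent lookup with stock nvme-cli, assuming the kernel target from this test were still listening on 10.0.0.1:4420:

  # -t/-a/-s map to trtype/traddr/trsvcid in SPDK's -r string
  nvme discover -t tcp -a 10.0.0.1 -s 4420
  # connect to the NVM subsystem entry returned by the discovery log
  nvme connect -t tcp -a 10.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:testnqn
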
00:26:09.452 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:26:09.453 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:09.453 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:09.453 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:09.453 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:09.453 10:04:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:11.359 10:04:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:11.618 10:04:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:26:11.618 10:04:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:11.618 10:04:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:26:11.618 10:04:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:11.618 10:04:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:11.618 10:04:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:11.618 10:04:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:11.618 10:04:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:26:11.618 10:04:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:26:11.618 10:04:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:14.910 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:26:14.910 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:14.910 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:14.910 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:14.910 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:14.910 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:14.910 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:14.910 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:14.910 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:14.910 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:14.910 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:14.910 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:14.910 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:14.910 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:14.910 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:14.910 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:14.910 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:15.846 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:26:16.105 00:26:16.105 real 0m18.662s 00:26:16.105 user 0m5.109s 00:26:16.105 sys 0m9.881s 
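[editor's note] clean_kernel_target in the trace above tears the configfs tree down in strict reverse order of construction: the port-to-subsystem link, the namespace, the port, the subsystem, then the nvmet modules. A minimal sketch of the forward sequence it is undoing, using the standard kernel nvmet configfs attribute names; /dev/nvme0n1 is a placeholder backing device, not taken from this log:

  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  modprobe nvmet nvmet-tcp
  mkdir "$subsys"
  echo 1 > "$subsys/attr_allow_any_host"
  mkdir "$subsys/namespaces/1"
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1 > "$subsys/namespaces/1/enable"
  mkdir "$port"
  echo tcp      > "$port/addr_trtype"
  echo ipv4     > "$port/addr_adrfam"
  echo 10.0.0.1 > "$port/addr_traddr"
  echo 4420     > "$port/addr_trsvcid"
  # exposing the subsystem on the port is the final, order-sensitive step
  ln -s "$subsys" "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
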
00:26:16.105 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:16.105 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:16.105 ************************************ 00:26:16.105 END TEST nvmf_identify_kernel_target 00:26:16.105 ************************************ 00:26:16.105 10:04:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:16.105 10:04:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:16.105 10:04:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:16.105 10:04:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.105 ************************************ 00:26:16.105 START TEST nvmf_auth_host 00:26:16.105 ************************************ 00:26:16.105 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:16.105 * Looking for test storage... 00:26:16.105 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:16.105 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:16.105 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:26:16.105 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:16.105 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:16.105 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:16.106 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:16.106 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:16.106 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:26:16.106 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:26:16.106 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:26:16.106 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:26:16.106 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:26:16.106 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:26:16.106 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:26:16.106 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:16.106 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:26:16.106 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:26:16.106 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:16.106 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:16.106 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:26:16.106 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:26:16.106 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:16.106 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:26:16.106 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:26:16.106 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:26:16.106 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:26:16.106 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:16.106 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:26:16.106 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:26:16.106 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:16.106 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:16.106 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:26:16.106 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:16.106 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:16.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:16.106 --rc genhtml_branch_coverage=1 00:26:16.106 --rc genhtml_function_coverage=1 00:26:16.106 --rc genhtml_legend=1 00:26:16.106 --rc geninfo_all_blocks=1 00:26:16.106 --rc geninfo_unexecuted_blocks=1 00:26:16.106 00:26:16.106 ' 00:26:16.106 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:16.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:16.106 --rc genhtml_branch_coverage=1 00:26:16.106 --rc genhtml_function_coverage=1 00:26:16.106 --rc genhtml_legend=1 00:26:16.106 --rc geninfo_all_blocks=1 00:26:16.106 --rc geninfo_unexecuted_blocks=1 00:26:16.106 00:26:16.106 ' 00:26:16.106 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:16.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:16.106 --rc genhtml_branch_coverage=1 00:26:16.106 --rc genhtml_function_coverage=1 00:26:16.106 --rc genhtml_legend=1 00:26:16.106 --rc geninfo_all_blocks=1 00:26:16.106 --rc geninfo_unexecuted_blocks=1 00:26:16.106 00:26:16.106 ' 00:26:16.106 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:16.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:16.106 --rc genhtml_branch_coverage=1 00:26:16.106 --rc genhtml_function_coverage=1 00:26:16.106 --rc genhtml_legend=1 00:26:16.106 --rc geninfo_all_blocks=1 00:26:16.106 --rc geninfo_unexecuted_blocks=1 00:26:16.106 00:26:16.106 ' 00:26:16.106 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:16.106 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:26:16.366 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:16.366 10:04:25 
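[editor's note] The cmp_versions trace above is what gates the LCOV_OPTS flavor: each version string is split on '.', '-' and ':' and compared element-wise, so lt 1.15 2 succeeds (1 < 2) and the legacy --rc lcov_* option names are exported. A minimal stand-alone sketch of that comparison; ver_lt is a hypothetical helper name, SPDK's real implementation lives in scripts/common.sh:

  ver_lt() {
      local IFS=.-: i
      # unquoted expansion splits "1.15" -> (1 15) under the custom IFS
      local -a a=($1) b=($2)
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1   # equal, so not strictly less-than
  }
  ver_lt 1.15 2 && echo "lcov < 2: use legacy --rc lcov_* option names"
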
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:16.366 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:16.366 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:16.366 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:16.366 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:16.366 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:16.366 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:16.366 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:16.366 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:16.366 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:26:16.366 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:26:16.366 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:16.366 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:16.366 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:16.366 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:16.366 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:16.366 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:16.366 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:16.366 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:16.366 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:16.366 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.366 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.366 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.366 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:26:16.366 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.366 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:26:16.366 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:16.366 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:16.366 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:16.366 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:16.366 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:16.366 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:16.366 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:16.366 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:16.366 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:16.366 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:16.366 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:26:16.366 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:26:16.366 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:26:16.366 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:26:16.366 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:16.366 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:16.366 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:26:16.366 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:26:16.366 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:26:16.366 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:16.366 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:16.366 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:16.366 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:16.366 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:16.366 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:16.366 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:16.366 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:16.366 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:16.366 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:16.366 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:26:16.366 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:26:22.939 10:04:32 
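[editor's note] The "[: : integer expression expected" complaint earlier in the trace (nvmf/common.sh line 33, where '[' '' -eq 1 ']' is evaluated) is the classic empty-variable numeric test: -eq requires integers on both sides, and the run only continues because the surrounding if treats the failed test as false. A minimal sketch of the fragile pattern and the usual hardening, with FOO as a stand-in name:

  # fragile: errors out as "[: : integer expression expected" when FOO is empty
  [ "$FOO" -eq 1 ] && echo enabled
  # robust: default an empty value to 0 before the numeric test
  [ "${FOO:-0}" -eq 1 ] && echo enabled
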
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:22.939 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:22.939 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:22.939 
10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:22.939 Found net devices under 0000:af:00.0: cvl_0_0 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:22.939 Found net devices under 0000:af:00.1: cvl_0_1 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:22.939 10:04:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:22.939 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:22.939 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:22.939 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:26:22.940 00:26:22.940 --- 10.0.0.2 ping statistics --- 00:26:22.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:22.940 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:26:22.940 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:22.940 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:22.940 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:26:22.940 00:26:22.940 --- 10.0.0.1 ping statistics --- 00:26:22.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:22.940 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:26:22.940 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:22.940 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:26:22.940 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:22.940 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:22.940 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:22.940 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:22.940 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:22.940 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:22.940 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:22.940 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:26:22.940 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:22.940 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:22.940 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.940 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=219523 00:26:22.940 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 219523 00:26:22.940 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:26:22.940 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 219523 ']' 00:26:22.940 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:22.940 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:22.940 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
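[editor's note] nvmf_tcp_init above wires the two ports of one physical NIC into a loopback fabric: cvl_0_0 moves into a private namespace as the target side (10.0.0.2) while cvl_0_1 stays in the default namespace as the initiator side (10.0.0.1), and the bidirectional pings prove the path before nvmf_tgt starts inside the namespace. A condensed sketch of the same topology; the interface names are the ones from this log:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP listener port on the initiator-facing interface
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # sanity-check both directions before starting the target
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
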
00:26:22.940 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:22.940 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.199 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:23.199 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:26:23.199 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:23.199 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:23.199 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.199 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:23.199 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:26:23.199 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:26:23.199 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:23.199 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:23.199 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:23.199 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:23.199 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:23.199 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:23.199 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=203a7dab4de0ea8ebeee95839229ff0b 00:26:23.199 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:23.199 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.NaK 00:26:23.199 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 203a7dab4de0ea8ebeee95839229ff0b 0 00:26:23.200 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 203a7dab4de0ea8ebeee95839229ff0b 0 00:26:23.200 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:23.200 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:23.200 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=203a7dab4de0ea8ebeee95839229ff0b 00:26:23.200 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:23.200 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:23.200 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.NaK 00:26:23.200 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.NaK 00:26:23.200 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.NaK 00:26:23.200 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:26:23.200 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:23.200 10:04:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:23.200 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:23.200 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:26:23.200 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:26:23.200 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:23.459 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=27711c4921c9f09198ff74e07d79e7cac03d48b0cf1ad80c9cd5e48b1a1461ea 00:26:23.459 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:26:23.459 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.hCd 00:26:23.459 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 27711c4921c9f09198ff74e07d79e7cac03d48b0cf1ad80c9cd5e48b1a1461ea 3 00:26:23.459 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 27711c4921c9f09198ff74e07d79e7cac03d48b0cf1ad80c9cd5e48b1a1461ea 3 00:26:23.459 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:23.459 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:23.459 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=27711c4921c9f09198ff74e07d79e7cac03d48b0cf1ad80c9cd5e48b1a1461ea 00:26:23.459 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:26:23.459 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:23.459 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.hCd 00:26:23.459 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.hCd 00:26:23.459 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.hCd 00:26:23.459 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:26:23.459 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:23.459 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:23.459 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:23.459 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:23.459 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:23.459 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:23.459 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3af7d0e0d7789bbe62e62d7363eeb69c289d8e09d1f4d1f3 00:26:23.459 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:23.459 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.hxO 00:26:23.459 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3af7d0e0d7789bbe62e62d7363eeb69c289d8e09d1f4d1f3 0 00:26:23.459 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3af7d0e0d7789bbe62e62d7363eeb69c289d8e09d1f4d1f3 0 
00:26:23.459 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:23.459 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:23.459 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3af7d0e0d7789bbe62e62d7363eeb69c289d8e09d1f4d1f3 00:26:23.460 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:23.460 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:23.460 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.hxO 00:26:23.460 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.hxO 00:26:23.460 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.hxO 00:26:23.460 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:26:23.460 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:23.460 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:23.460 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:23.460 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:26:23.460 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:23.460 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:23.460 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a814e41fc2a4b8656a1d36b4921c75938dce942acf7706fe 00:26:23.460 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:26:23.460 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.4IL 00:26:23.460 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a814e41fc2a4b8656a1d36b4921c75938dce942acf7706fe 2 00:26:23.460 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a814e41fc2a4b8656a1d36b4921c75938dce942acf7706fe 2 00:26:23.460 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:23.460 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:23.460 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a814e41fc2a4b8656a1d36b4921c75938dce942acf7706fe 00:26:23.460 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:26:23.460 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:23.460 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.4IL 00:26:23.460 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.4IL 00:26:23.460 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.4IL 00:26:23.460 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:23.460 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:23.460 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:23.460 10:04:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:23.460 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:26:23.460 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:23.460 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:23.460 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a84a5b810e133c86aca4fd6690179291 00:26:23.460 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:26:23.460 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.ROr 00:26:23.460 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a84a5b810e133c86aca4fd6690179291 1 00:26:23.460 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a84a5b810e133c86aca4fd6690179291 1 00:26:23.460 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:23.460 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:23.460 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a84a5b810e133c86aca4fd6690179291 00:26:23.460 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:26:23.460 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:23.460 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.ROr 00:26:23.460 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.ROr 00:26:23.460 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.ROr 00:26:23.460 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:23.460 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:23.460 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:23.460 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:23.460 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:26:23.460 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:23.460 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:23.460 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c90ab672c871ae785fbd5a0296e37c61 00:26:23.460 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:26:23.460 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.iqm 00:26:23.460 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c90ab672c871ae785fbd5a0296e37c61 1 00:26:23.460 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c90ab672c871ae785fbd5a0296e37c61 1 00:26:23.460 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:23.460 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:23.460 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=c90ab672c871ae785fbd5a0296e37c61 00:26:23.460 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:26:23.460 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:23.719 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.iqm 00:26:23.719 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.iqm 00:26:23.719 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.iqm 00:26:23.719 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:26:23.719 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:23.719 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:23.719 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b98f9af565b0dc7e39def7d8ffabcfc74fdcf00378c5ebe9 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.ztP 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b98f9af565b0dc7e39def7d8ffabcfc74fdcf00378c5ebe9 2 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b98f9af565b0dc7e39def7d8ffabcfc74fdcf00378c5ebe9 2 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b98f9af565b0dc7e39def7d8ffabcfc74fdcf00378c5ebe9 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.ztP 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.ztP 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.ztP 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:23.720 10:04:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1e95c563b8334286e6d9040f752de792 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.KJq 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1e95c563b8334286e6d9040f752de792 0 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1e95c563b8334286e6d9040f752de792 0 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1e95c563b8334286e6d9040f752de792 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.KJq 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.KJq 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.KJq 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a5fb499c32c486f06d2ac2767c6fa724e022da57100ed048cb2c25f1721330d6 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Ohu 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a5fb499c32c486f06d2ac2767c6fa724e022da57100ed048cb2c25f1721330d6 3 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a5fb499c32c486f06d2ac2767c6fa724e022da57100ed048cb2c25f1721330d6 3 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a5fb499c32c486f06d2ac2767c6fa724e022da57100ed048cb2c25f1721330d6 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Ohu 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Ohu 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Ohu 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 219523 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 219523 ']' 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:23.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:23.720 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.NaK 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.hCd ]] 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.hCd 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.hxO 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.4IL ]] 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.4IL 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.ROr 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.iqm ]] 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.iqm 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.ztP 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.KJq ]] 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.KJq 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Ohu 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:23.980 10:04:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:26:23.980 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:26:24.239 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:24.239 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:27.529 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:26:27.529 Waiting for block devices as requested 00:26:27.529 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:26:27.529 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:27.529 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:27.529 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:27.529 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:27.788 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:27.788 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:27.788 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:27.788 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:28.047 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:28.047 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:28.047 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:28.306 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:28.306 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:28.306 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:28.306 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:28.565 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:29.133 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:29.133 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:29.133 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:26:29.133 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:26:29.133 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:29.133 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:26:29.133 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:26:29.133 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:29.133 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:29.133 No valid GPT data, bailing 00:26:29.134 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:29.134 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:26:29.134 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:26:29.134 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:26:29.134 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:29.134 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:26:29.134 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:26:29.134 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:26:29.134 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:26:29.134 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:26:29.134 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:26:29.134 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:26:29.134 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:26:29.134 No valid GPT data, bailing 00:26:29.134 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:26:29.134 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:26:29.134 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:26:29.134 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:26:29.134 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:29.134 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n2 ]] 00:26:29.134 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n2 00:26:29.134 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:26:29.134 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:26:29.134 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ host-managed != none ]] 00:26:29.134 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # continue 00:26:29.134 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:26:29.134 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:29.134 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:29.134 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:29.134 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:26:29.134 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:26:29.134 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:26:29.134 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:26:29.134 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:26:29.134 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:26:29.134 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:26:29.134 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:26:29.134 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:29.134 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 
-a 10.0.0.1 -t tcp -s 4420 00:26:29.393 00:26:29.393 Discovery Log Number of Records 2, Generation counter 2 00:26:29.393 =====Discovery Log Entry 0====== 00:26:29.393 trtype: tcp 00:26:29.393 adrfam: ipv4 00:26:29.393 subtype: current discovery subsystem 00:26:29.393 treq: not specified, sq flow control disable supported 00:26:29.393 portid: 1 00:26:29.393 trsvcid: 4420 00:26:29.393 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:29.393 traddr: 10.0.0.1 00:26:29.393 eflags: none 00:26:29.393 sectype: none 00:26:29.393 =====Discovery Log Entry 1====== 00:26:29.393 trtype: tcp 00:26:29.393 adrfam: ipv4 00:26:29.393 subtype: nvme subsystem 00:26:29.393 treq: not specified, sq flow control disable supported 00:26:29.393 portid: 1 00:26:29.393 trsvcid: 4420 00:26:29.394 subnqn: nqn.2024-02.io.spdk:cnode0 00:26:29.394 traddr: 10.0.0.1 00:26:29.394 eflags: none 00:26:29.394 sectype: none 00:26:29.394 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:29.394 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:26:29.394 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:29.394 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:29.394 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.394 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:29.394 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:29.394 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:29.394 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2FmN2QwZTBkNzc4OWJiZTYyZTYyZDczNjNlZWI2OWMyODlkOGUwOWQxZjRkMWYzUNJlJQ==: 00:26:29.394 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTgxNGU0MWZjMmE0Yjg2NTZhMWQzNmI0OTIxYzc1OTM4ZGNlOTQyYWNmNzcwNmZlYlR1uQ==: 00:26:29.394 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:29.394 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:29.394 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2FmN2QwZTBkNzc4OWJiZTYyZTYyZDczNjNlZWI2OWMyODlkOGUwOWQxZjRkMWYzUNJlJQ==: 00:26:29.394 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTgxNGU0MWZjMmE0Yjg2NTZhMWQzNmI0OTIxYzc1OTM4ZGNlOTQyYWNmNzcwNmZlYlR1uQ==: ]] 00:26:29.394 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTgxNGU0MWZjMmE0Yjg2NTZhMWQzNmI0OTIxYzc1OTM4ZGNlOTQyYWNmNzcwNmZlYlR1uQ==: 00:26:29.394 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:29.394 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:26:29.394 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:29.394 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:29.394 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:26:29.394 10:04:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.394 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:26:29.394 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:29.394 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:29.394 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.394 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:29.394 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.394 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.394 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.394 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.394 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:29.394 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:29.394 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:29.394 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.394 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.394 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:29.394 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.394 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:29.394 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:29.394 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:29.394 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:29.394 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.394 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.394 nvme0n1 00:26:29.394 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.394 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.394 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.394 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.394 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.653 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.653 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.653 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # 
rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.653 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.653 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.653 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.653 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:29.653 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:29.653 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.653 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:26:29.653 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.653 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:29.654 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:29.654 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:29.654 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjAzYTdkYWI0ZGUwZWE4ZWJlZWU5NTgzOTIyOWZmMGLAw+z/: 00:26:29.654 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc3MTFjNDkyMWM5ZjA5MTk4ZmY3NGUwN2Q3OWU3Y2FjMDNkNDhiMGNmMWFkODBjOWNkNWU0OGIxYTE0NjFlYYDvmzQ=: 00:26:29.654 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:29.654 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:29.654 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjAzYTdkYWI0ZGUwZWE4ZWJlZWU5NTgzOTIyOWZmMGLAw+z/: 00:26:29.654 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc3MTFjNDkyMWM5ZjA5MTk4ZmY3NGUwN2Q3OWU3Y2FjMDNkNDhiMGNmMWFkODBjOWNkNWU0OGIxYTE0NjFlYYDvmzQ=: ]] 00:26:29.654 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc3MTFjNDkyMWM5ZjA5MTk4ZmY3NGUwN2Q3OWU3Y2FjMDNkNDhiMGNmMWFkODBjOWNkNWU0OGIxYTE0NjFlYYDvmzQ=: 00:26:29.654 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:26:29.654 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.654 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:29.654 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:29.654 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:29.654 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.654 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:29.654 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.654 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.654 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.654 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.654 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # local ip 00:26:29.654 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:29.654 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:29.654 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.654 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.654 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:29.654 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.654 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:29.654 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:29.654 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:29.654 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:29.654 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.654 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.654 nvme0n1 00:26:29.654 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.654 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.654 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.654 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.654 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.654 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.654 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.654 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.654 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.654 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.913 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.913 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.913 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:29.913 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.913 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:29.913 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:29.913 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:29.913 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2FmN2QwZTBkNzc4OWJiZTYyZTYyZDczNjNlZWI2OWMyODlkOGUwOWQxZjRkMWYzUNJlJQ==: 00:26:29.913 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:YTgxNGU0MWZjMmE0Yjg2NTZhMWQzNmI0OTIxYzc1OTM4ZGNlOTQyYWNmNzcwNmZlYlR1uQ==: 00:26:29.913 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:29.913 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:29.913 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2FmN2QwZTBkNzc4OWJiZTYyZTYyZDczNjNlZWI2OWMyODlkOGUwOWQxZjRkMWYzUNJlJQ==: 00:26:29.913 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTgxNGU0MWZjMmE0Yjg2NTZhMWQzNmI0OTIxYzc1OTM4ZGNlOTQyYWNmNzcwNmZlYlR1uQ==: ]] 00:26:29.913 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTgxNGU0MWZjMmE0Yjg2NTZhMWQzNmI0OTIxYzc1OTM4ZGNlOTQyYWNmNzcwNmZlYlR1uQ==: 00:26:29.913 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:26:29.913 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.913 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:29.913 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:29.913 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:29.913 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.913 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:29.913 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.913 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.913 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.913 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.913 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:29.913 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:29.913 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:29.913 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.913 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.913 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:29.913 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.913 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:29.913 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:29.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:29.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:29.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
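For readers following the trace: each gen_dhchap_key call above draws len/2 random bytes with xxd, keeps them as an ASCII hex string, and format_dhchap_key wraps that string into the DH-HMAC-CHAP secret representation DHHC-1:<digest id>:<base64 payload>:, where the digest id is 00/01/02/03 for null/sha256/sha384/sha512 (visible above: the null key carries 00, the sha384 key 02). Decoding the base64 payloads in this log shows the ASCII hex string itself is the secret, with 4 extra bytes appended; a minimal standalone re-creation of the python step, assuming the appended bytes are a little-endian CRC32 as in the nvme-cli convention:

  # re-creation of "gen_dhchap_key sha384 48" as traced above (sketch, not the SPDK helper)
  key=$(xxd -p -c0 -l 24 /dev/urandom)   # 24 random bytes -> 48 hex chars (len=48)
  python3 - "$key" <<'PY'
  import base64, sys, zlib
  secret = sys.argv[1].encode()                    # the ASCII hex string is the secret
  crc = zlib.crc32(secret).to_bytes(4, "little")   # assumed: 4-byte CRC32, little-endian
  print("DHHC-1:02:" + base64.b64encode(secret + crc).decode() + ":")  # 02 = sha384
  PY

Feeding the sha384 key a814e41f... from the trace through this produces the DHHC-1:02:YTgxNGU0... string seen above, which is what keyring_file_add_key later registers under ckey1.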
00:26:29.914 nvme0n1 00:26:29.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:29.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:29.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:29.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:29.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTg0YTViODEwZTEzM2M4NmFjYTRmZDY2OTAxNzkyOTHMnJ+z: 00:26:29.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzkwYWI2NzJjODcxYWU3ODVmYmQ1YTAyOTZlMzdjNjFiCnEl: 00:26:29.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:29.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:29.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTg0YTViODEwZTEzM2M4NmFjYTRmZDY2OTAxNzkyOTHMnJ+z: 00:26:29.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzkwYWI2NzJjODcxYWU3ODVmYmQ1YTAyOTZlMzdjNjFiCnEl: ]] 00:26:29.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzkwYWI2NzJjODcxYWU3ODVmYmQ1YTAyOTZlMzdjNjFiCnEl: 00:26:29.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:26:29.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:29.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:29.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:29.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups 
ffdhe2048 00:26:29.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:30.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:30.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:30.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:30.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:30.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:30.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:30.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:30.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:30.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:30.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.173 nvme0n1 00:26:30.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:30.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:30.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:30.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:26:30.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha256 00:26:30.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:30.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:30.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Yjk4ZjlhZjU2NWIwZGM3ZTM5ZGVmN2Q4ZmZhYmNmYzc0ZmRjZjAwMzc4YzVlYmU56/7FFw==: 00:26:30.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWU5NWM1NjNiODMzNDI4NmU2ZDkwNDBmNzUyZGU3OTJgbYkK: 00:26:30.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:30.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:30.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Yjk4ZjlhZjU2NWIwZGM3ZTM5ZGVmN2Q4ZmZhYmNmYzc0ZmRjZjAwMzc4YzVlYmU56/7FFw==: 00:26:30.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWU5NWM1NjNiODMzNDI4NmU2ZDkwNDBmNzUyZGU3OTJgbYkK: ]] 00:26:30.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWU5NWM1NjNiODMzNDI4NmU2ZDkwNDBmNzUyZGU3OTJgbYkK: 00:26:30.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:26:30.174 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:30.174 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:30.174 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:30.174 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:30.174 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:30.174 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:30.174 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.174 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.174 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.174 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:30.174 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:30.174 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:30.174 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:30.174 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.174 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.174 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:30.174 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:30.174 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:30.174 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:30.174 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:30.174 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:30.174 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.174 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.433 nvme0n1 00:26:30.433 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.433 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.433 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:30.433 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.433 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.433 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.433 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.433 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:30.433 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.433 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.433 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.433 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:30.433 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:26:30.433 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.433 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:30.433 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:30.433 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:30.433 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTVmYjQ5OWMzMmM0ODZmMDZkMmFjMjc2N2M2ZmE3MjRlMDIyZGE1NzEwMGVkMDQ4Y2IyYzI1ZjE3MjEzMzBkNq9p58M=: 00:26:30.433 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:30.433 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:30.433 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:30.433 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTVmYjQ5OWMzMmM0ODZmMDZkMmFjMjc2N2M2ZmE3MjRlMDIyZGE1NzEwMGVkMDQ4Y2IyYzI1ZjE3MjEzMzBkNq9p58M=: 00:26:30.433 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:30.433 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:26:30.433 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:30.433 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:30.433 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:30.433 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:30.434 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- 
# ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:30.434 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:30.434 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.434 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.434 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.434 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:30.434 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:30.434 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:30.434 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:30.434 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.434 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.434 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:30.434 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:30.434 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:30.434 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:30.434 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:30.434 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:30.434 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.434 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.693 nvme0n1 00:26:30.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:30.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:30.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:30.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
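The nvmet_auth_set_key calls that follow are the kernel-target half of each loop iteration: the traced echo 'hmac(shaX)', echo ffdheNNNN, and echo DHHC-1:... triples are written into the configfs host entry created earlier for nqn.2024-02.io.spdk:host0. xtrace does not show the redirection targets, so the attribute paths below are an assumption based on the upstream nvmet configfs layout, with the key values left as placeholders:

  # sketch of one nvmet_auth_set_key pass (attribute names assumed, values placeholders)
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)' > "$host/dhchap_hash"      # digest for DH-HMAC-CHAP
  echo 'ffdhe3072'    > "$host/dhchap_dhgroup"   # FFDHE group for this pass
  echo "$key"         > "$host/dhchap_key"       # host secret (DHHC-1:..:..: string)
  echo "$ckey"        > "$host/dhchap_ctrl_key"  # controller secret, only when a ckey exists

The [[ -z $ckey ]] guard at auth.sh@51 matches this: keyid 4 has an empty ckeys[4], so only unidirectional authentication is configured for that key.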
00:26:30.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:26:30.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:30.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:30.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:30.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjAzYTdkYWI0ZGUwZWE4ZWJlZWU5NTgzOTIyOWZmMGLAw+z/: 00:26:30.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc3MTFjNDkyMWM5ZjA5MTk4ZmY3NGUwN2Q3OWU3Y2FjMDNkNDhiMGNmMWFkODBjOWNkNWU0OGIxYTE0NjFlYYDvmzQ=: 00:26:30.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:30.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:30.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjAzYTdkYWI0ZGUwZWE4ZWJlZWU5NTgzOTIyOWZmMGLAw+z/: 00:26:30.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc3MTFjNDkyMWM5ZjA5MTk4ZmY3NGUwN2Q3OWU3Y2FjMDNkNDhiMGNmMWFkODBjOWNkNWU0OGIxYTE0NjFlYYDvmzQ=: ]] 00:26:30.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc3MTFjNDkyMWM5ZjA5MTk4ZmY3NGUwN2Q3OWU3Y2FjMDNkNDhiMGNmMWFkODBjOWNkNWU0OGIxYTE0NjFlYYDvmzQ=: 00:26:30.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:26:30.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:30.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:30.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:30.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:30.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:30.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:30.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:30.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:30.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:30.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:30.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.694 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.694 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:30.694 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 
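Note the auth.sh@58 line in every iteration, ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}): bash's ${var:+word} expansion yields the option pair only when a controller key exists for that index, which is why the keyid-4 connects in this log carry no --dhchap-ctrlr-key at all (their ckey is empty). A standalone demonstration of the same expansion, with invented array contents:

    #!/usr/bin/env bash
    # ${ckeys[i]:+...} expands to nothing when ckeys[i] is empty or unset,
    # so the option pair is simply dropped for keys without a controller key.
    ckeys=("some-ctrlr-secret" "")
    for keyid in "${!ckeys[@]}"; do
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid=$keyid -> ${ckey[*]:-(no controller key)}"
    done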
00:26:30.694 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:30.694 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:30.694 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:30.694 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:30.694 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.694 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.953 nvme0n1 00:26:30.953 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.953 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.953 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:30.953 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.953 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.953 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.953 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.953 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:30.953 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.953 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.953 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.953 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:30.953 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:26:30.953 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.953 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:30.953 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:30.953 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:30.953 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2FmN2QwZTBkNzc4OWJiZTYyZTYyZDczNjNlZWI2OWMyODlkOGUwOWQxZjRkMWYzUNJlJQ==: 00:26:30.953 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTgxNGU0MWZjMmE0Yjg2NTZhMWQzNmI0OTIxYzc1OTM4ZGNlOTQyYWNmNzcwNmZlYlR1uQ==: 00:26:30.953 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:30.953 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:30.953 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2FmN2QwZTBkNzc4OWJiZTYyZTYyZDczNjNlZWI2OWMyODlkOGUwOWQxZjRkMWYzUNJlJQ==: 00:26:30.953 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTgxNGU0MWZjMmE0Yjg2NTZhMWQzNmI0OTIxYzc1OTM4ZGNlOTQyYWNmNzcwNmZlYlR1uQ==: ]] 00:26:30.953 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:YTgxNGU0MWZjMmE0Yjg2NTZhMWQzNmI0OTIxYzc1OTM4ZGNlOTQyYWNmNzcwNmZlYlR1uQ==: 00:26:30.953 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:26:30.953 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:30.953 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:30.953 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:30.953 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:30.953 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:30.953 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:30.953 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.953 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.953 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.953 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:30.953 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:30.953 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:30.953 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:30.953 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.953 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.953 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:30.953 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:30.953 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:30.953 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:30.953 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:30.953 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:30.953 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.953 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.212 nvme0n1 00:26:31.212 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.212 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.212 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:31.212 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.212 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.212 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.212 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.212 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.212 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.212 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.212 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.212 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:31.212 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:26:31.212 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.212 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:31.212 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:31.212 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:31.212 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTg0YTViODEwZTEzM2M4NmFjYTRmZDY2OTAxNzkyOTHMnJ+z: 00:26:31.212 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzkwYWI2NzJjODcxYWU3ODVmYmQ1YTAyOTZlMzdjNjFiCnEl: 00:26:31.212 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:31.212 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:31.212 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTg0YTViODEwZTEzM2M4NmFjYTRmZDY2OTAxNzkyOTHMnJ+z: 00:26:31.212 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzkwYWI2NzJjODcxYWU3ODVmYmQ1YTAyOTZlMzdjNjFiCnEl: ]] 00:26:31.212 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzkwYWI2NzJjODcxYWU3ODVmYmQ1YTAyOTZlMzdjNjFiCnEl: 00:26:31.212 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:26:31.212 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:31.212 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:31.212 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:31.212 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:31.212 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:31.212 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:31.212 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.212 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.212 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.212 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:31.212 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:31.212 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:31.212 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 
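Each attach is preceded by the get_main_ns_ip helper traced at nvmf/common.sh@769-783: it maps the transport name to the environment variable that holds the right address, then indirectly expands that variable. Reconstructed from the xtrace, roughly as follows (simplified; the real helper may handle more cases):

    #!/usr/bin/env bash
    # Pick the namespace IP for the transport under test, as the
    # common.sh@769-783 trace suggests. Values match this log.
    TEST_TRANSPORT=tcp
    NVMF_INITIATOR_IP=10.0.0.1

    get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}   # name of the variable...
      ip=${!ip}                              # ...then its value: 10.0.0.1
      [[ -z $ip ]] && return 1
      echo "$ip"
    }

    get_main_ns_ip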
00:26:31.212 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.212 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.212 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:31.212 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:31.212 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:31.212 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:31.212 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:31.212 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:31.213 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.213 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.471 nvme0n1 00:26:31.471 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.471 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.471 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:31.471 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.472 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.472 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.472 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.472 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.472 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.472 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.472 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.472 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:31.472 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:26:31.472 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.472 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:31.472 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:31.472 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:31.472 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Yjk4ZjlhZjU2NWIwZGM3ZTM5ZGVmN2Q4ZmZhYmNmYzc0ZmRjZjAwMzc4YzVlYmU56/7FFw==: 00:26:31.472 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWU5NWM1NjNiODMzNDI4NmU2ZDkwNDBmNzUyZGU3OTJgbYkK: 00:26:31.472 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:31.472 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo 
ffdhe3072 00:26:31.472 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Yjk4ZjlhZjU2NWIwZGM3ZTM5ZGVmN2Q4ZmZhYmNmYzc0ZmRjZjAwMzc4YzVlYmU56/7FFw==: 00:26:31.472 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWU5NWM1NjNiODMzNDI4NmU2ZDkwNDBmNzUyZGU3OTJgbYkK: ]] 00:26:31.472 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWU5NWM1NjNiODMzNDI4NmU2ZDkwNDBmNzUyZGU3OTJgbYkK: 00:26:31.472 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:26:31.472 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:31.472 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:31.472 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:31.472 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:31.472 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:31.472 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:31.472 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.472 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.472 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.472 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:31.472 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:31.472 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:31.472 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:31.472 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.472 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.472 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:31.472 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:31.472 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:31.472 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:31.472 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:31.472 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:31.472 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.472 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.731 nvme0n1 00:26:31.731 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.731 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.731 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:26:31.731 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.731 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.731 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.731 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.731 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.731 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.731 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.731 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.731 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:31.731 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:26:31.731 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.731 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:31.731 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:31.731 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:31.731 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTVmYjQ5OWMzMmM0ODZmMDZkMmFjMjc2N2M2ZmE3MjRlMDIyZGE1NzEwMGVkMDQ4Y2IyYzI1ZjE3MjEzMzBkNq9p58M=: 00:26:31.731 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:31.731 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:31.731 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:31.731 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTVmYjQ5OWMzMmM0ODZmMDZkMmFjMjc2N2M2ZmE3MjRlMDIyZGE1NzEwMGVkMDQ4Y2IyYzI1ZjE3MjEzMzBkNq9p58M=: 00:26:31.732 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:31.732 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:26:31.732 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:31.732 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:31.732 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:31.732 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:31.732 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:31.732 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:31.732 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.732 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.732 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.732 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:31.732 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # 
local ip 00:26:31.732 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:31.732 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:31.732 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.732 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.732 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:31.732 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:31.732 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:31.732 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:31.732 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:31.732 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:31.732 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.732 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.991 nvme0n1 00:26:31.991 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.991 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.991 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:31.991 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.991 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.991 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.991 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.991 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.991 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.991 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.991 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.991 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:31.991 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:31.991 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:26:31.991 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.991 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:31.991 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:31.991 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:31.991 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjAzYTdkYWI0ZGUwZWE4ZWJlZWU5NTgzOTIyOWZmMGLAw+z/: 00:26:31.991 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc3MTFjNDkyMWM5ZjA5MTk4ZmY3NGUwN2Q3OWU3Y2FjMDNkNDhiMGNmMWFkODBjOWNkNWU0OGIxYTE0NjFlYYDvmzQ=: 00:26:31.991 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:31.991 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:31.991 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjAzYTdkYWI0ZGUwZWE4ZWJlZWU5NTgzOTIyOWZmMGLAw+z/: 00:26:31.991 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc3MTFjNDkyMWM5ZjA5MTk4ZmY3NGUwN2Q3OWU3Y2FjMDNkNDhiMGNmMWFkODBjOWNkNWU0OGIxYTE0NjFlYYDvmzQ=: ]] 00:26:31.991 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc3MTFjNDkyMWM5ZjA5MTk4ZmY3NGUwN2Q3OWU3Y2FjMDNkNDhiMGNmMWFkODBjOWNkNWU0OGIxYTE0NjFlYYDvmzQ=: 00:26:31.991 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:26:31.991 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:31.991 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:31.991 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:31.991 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:31.991 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:31.991 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:31.991 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.991 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.991 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.991 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:31.991 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:31.991 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:31.991 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:31.991 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.991 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.991 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:31.991 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:31.991 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:31.991 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:31.991 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:31.991 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:31.991 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.991 10:04:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.250 nvme0n1 00:26:32.250 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.250 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.250 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:32.250 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.250 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.250 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.250 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.250 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.250 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.250 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.250 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.250 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:32.250 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:26:32.250 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.250 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:32.250 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:32.250 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:32.250 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2FmN2QwZTBkNzc4OWJiZTYyZTYyZDczNjNlZWI2OWMyODlkOGUwOWQxZjRkMWYzUNJlJQ==: 00:26:32.251 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTgxNGU0MWZjMmE0Yjg2NTZhMWQzNmI0OTIxYzc1OTM4ZGNlOTQyYWNmNzcwNmZlYlR1uQ==: 00:26:32.251 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:32.251 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:32.251 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2FmN2QwZTBkNzc4OWJiZTYyZTYyZDczNjNlZWI2OWMyODlkOGUwOWQxZjRkMWYzUNJlJQ==: 00:26:32.251 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTgxNGU0MWZjMmE0Yjg2NTZhMWQzNmI0OTIxYzc1OTM4ZGNlOTQyYWNmNzcwNmZlYlR1uQ==: ]] 00:26:32.251 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTgxNGU0MWZjMmE0Yjg2NTZhMWQzNmI0OTIxYzc1OTM4ZGNlOTQyYWNmNzcwNmZlYlR1uQ==: 00:26:32.251 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:26:32.251 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.251 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:32.251 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:32.251 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:32.251 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.251 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:32.251 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.251 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.251 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.251 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.251 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:32.251 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:32.251 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:32.251 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.251 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.251 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:32.251 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:32.251 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:32.251 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:32.251 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:32.251 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:32.251 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.251 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.510 nvme0n1 00:26:32.510 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.510 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:32.510 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.510 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.510 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.510 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.510 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.510 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.510 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.510 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.510 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.510 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:32.510 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 2 00:26:32.510 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.510 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:32.510 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:32.510 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:32.510 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTg0YTViODEwZTEzM2M4NmFjYTRmZDY2OTAxNzkyOTHMnJ+z: 00:26:32.510 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzkwYWI2NzJjODcxYWU3ODVmYmQ1YTAyOTZlMzdjNjFiCnEl: 00:26:32.510 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:32.510 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:32.510 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTg0YTViODEwZTEzM2M4NmFjYTRmZDY2OTAxNzkyOTHMnJ+z: 00:26:32.510 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzkwYWI2NzJjODcxYWU3ODVmYmQ1YTAyOTZlMzdjNjFiCnEl: ]] 00:26:32.510 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzkwYWI2NzJjODcxYWU3ODVmYmQ1YTAyOTZlMzdjNjFiCnEl: 00:26:32.510 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:26:32.510 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.510 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:32.510 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:32.510 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:32.510 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.510 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:32.510 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.510 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.769 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.769 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.769 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:32.769 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:32.769 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:32.769 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.769 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.769 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:32.769 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:32.769 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:32.769 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:32.769 
10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:32.769 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:32.769 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.769 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.769 nvme0n1 00:26:32.769 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.769 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.769 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:32.769 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.769 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.028 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.028 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.028 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.028 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.028 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.028 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.028 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.028 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:26:33.028 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.028 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:33.028 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:33.028 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:33.028 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Yjk4ZjlhZjU2NWIwZGM3ZTM5ZGVmN2Q4ZmZhYmNmYzc0ZmRjZjAwMzc4YzVlYmU56/7FFw==: 00:26:33.028 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWU5NWM1NjNiODMzNDI4NmU2ZDkwNDBmNzUyZGU3OTJgbYkK: 00:26:33.028 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:33.028 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:33.028 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Yjk4ZjlhZjU2NWIwZGM3ZTM5ZGVmN2Q4ZmZhYmNmYzc0ZmRjZjAwMzc4YzVlYmU56/7FFw==: 00:26:33.028 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWU5NWM1NjNiODMzNDI4NmU2ZDkwNDBmNzUyZGU3OTJgbYkK: ]] 00:26:33.028 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWU5NWM1NjNiODMzNDI4NmU2ZDkwNDBmNzUyZGU3OTJgbYkK: 00:26:33.028 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:26:33.028 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local 
digest dhgroup keyid ckey 00:26:33.028 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:33.028 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:33.028 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:33.028 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.028 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:33.028 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.028 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.028 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.028 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.028 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:33.028 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:33.028 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:33.029 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.029 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.029 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:33.029 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.029 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:33.029 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:33.029 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:33.029 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:33.029 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.029 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.288 nvme0n1 00:26:33.288 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.288 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.288 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.288 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.288 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.288 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.288 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.288 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.288 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.288 
10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.288 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.288 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.288 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:26:33.288 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.288 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:33.288 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:33.288 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:33.288 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTVmYjQ5OWMzMmM0ODZmMDZkMmFjMjc2N2M2ZmE3MjRlMDIyZGE1NzEwMGVkMDQ4Y2IyYzI1ZjE3MjEzMzBkNq9p58M=: 00:26:33.288 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:33.288 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:33.288 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:33.288 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTVmYjQ5OWMzMmM0ODZmMDZkMmFjMjc2N2M2ZmE3MjRlMDIyZGE1NzEwMGVkMDQ4Y2IyYzI1ZjE3MjEzMzBkNq9p58M=: 00:26:33.288 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:33.288 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:26:33.288 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.288 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:33.288 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:33.288 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:33.288 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.288 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:33.288 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.288 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.288 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.288 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.288 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:33.288 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:33.288 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:33.288 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.288 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.288 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:33.288 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 
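The DHHC-1 secrets throughout this trace are self-describing: DHHC-1:<id>:<base64 blob>:, where the id selects the secret transformation (00 = unhashed; 01/02/03 = SHA-256/384/512), and, as I read the format nvme-cli's gen-dhchap-key emits, the base64 blob is the raw secret with a 4-byte CRC32 appended. Under that assumption, a quick length check on one of the keys from this log:

    #!/usr/bin/env bash
    # Decode a DHHC-1 secret from the trace and check its length.
    # Assumption: blob = secret || 4-byte CRC32, so a 32-byte secret
    # decodes to 36 bytes (and the :03: keys above to 68).
    key='DHHC-1:00:MjAzYTdkYWI0ZGUwZWE4ZWJlZWU5NTgzOTIyOWZmMGLAw+z/:'
    b64=${key#DHHC-1:*:}   # strip the 'DHHC-1:<id>:' prefix
    b64=${b64%:}           # strip the trailing ':'
    printf '%s' "$b64" | base64 -d | wc -c   # prints 36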
00:26:33.288 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:33.288 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:33.288 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:33.288 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:33.288 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.288 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.547 nvme0n1 00:26:33.547 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.547 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.547 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.547 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.547 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.547 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.547 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.547 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.547 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.547 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.547 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.547 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:33.547 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.547 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:26:33.547 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.547 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:33.547 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:33.547 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:33.547 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjAzYTdkYWI0ZGUwZWE4ZWJlZWU5NTgzOTIyOWZmMGLAw+z/: 00:26:33.548 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc3MTFjNDkyMWM5ZjA5MTk4ZmY3NGUwN2Q3OWU3Y2FjMDNkNDhiMGNmMWFkODBjOWNkNWU0OGIxYTE0NjFlYYDvmzQ=: 00:26:33.548 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:33.548 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:33.548 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjAzYTdkYWI0ZGUwZWE4ZWJlZWU5NTgzOTIyOWZmMGLAw+z/: 00:26:33.548 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc3MTFjNDkyMWM5ZjA5MTk4ZmY3NGUwN2Q3OWU3Y2FjMDNkNDhiMGNmMWFkODBjOWNkNWU0OGIxYTE0NjFlYYDvmzQ=: ]] 
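The host side of each iteration above (connect_authenticate, auth.sh@55-65) boils down to four RPCs: restrict the allowed digest and DH group, attach with the named DH-HMAC-CHAP keys, confirm the controller came up, detach. As a reproducible command sequence against a running SPDK target (the rpc.py path and the pre-loaded keyring names key0/ckey0 are assumptions, the flags are verbatim from the trace):

    #!/usr/bin/env bash
    set -e
    rpc=scripts/rpc.py   # assumed location of SPDK's RPC client

    # Constrain negotiation to the (digest, dhgroup) pair under test.
    "$rpc" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144

    # Attach with host key + controller key, exactly as the trace does.
    "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Authentication succeeded iff the controller shows up; then clean up.
    [[ $("$rpc" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    "$rpc" bdev_nvme_detach_controller nvme0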
00:26:33.548 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc3MTFjNDkyMWM5ZjA5MTk4ZmY3NGUwN2Q3OWU3Y2FjMDNkNDhiMGNmMWFkODBjOWNkNWU0OGIxYTE0NjFlYYDvmzQ=: 00:26:33.548 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:26:33.548 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.548 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:33.548 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:33.548 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:33.548 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.548 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:33.548 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.548 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.548 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.548 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.548 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:33.548 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:33.548 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:33.548 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.548 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.548 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:33.548 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.548 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:33.548 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:33.548 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:33.548 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:33.548 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.548 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.115 nvme0n1 00:26:34.116 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.116 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.116 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.116 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.116 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.116 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.116 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.116 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.116 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.116 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.116 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.116 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:34.116 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:26:34.116 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.116 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:34.116 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:34.116 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:34.116 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2FmN2QwZTBkNzc4OWJiZTYyZTYyZDczNjNlZWI2OWMyODlkOGUwOWQxZjRkMWYzUNJlJQ==: 00:26:34.116 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTgxNGU0MWZjMmE0Yjg2NTZhMWQzNmI0OTIxYzc1OTM4ZGNlOTQyYWNmNzcwNmZlYlR1uQ==: 00:26:34.116 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:34.116 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:34.116 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2FmN2QwZTBkNzc4OWJiZTYyZTYyZDczNjNlZWI2OWMyODlkOGUwOWQxZjRkMWYzUNJlJQ==: 00:26:34.116 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTgxNGU0MWZjMmE0Yjg2NTZhMWQzNmI0OTIxYzc1OTM4ZGNlOTQyYWNmNzcwNmZlYlR1uQ==: ]] 00:26:34.116 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTgxNGU0MWZjMmE0Yjg2NTZhMWQzNmI0OTIxYzc1OTM4ZGNlOTQyYWNmNzcwNmZlYlR1uQ==: 00:26:34.116 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:26:34.116 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.116 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:34.116 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:34.116 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:34.116 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.116 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:34.116 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.116 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.116 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.116 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.116 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # local ip 00:26:34.116 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:34.116 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:34.116 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.116 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.116 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:34.116 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.116 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:34.116 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:34.116 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:34.116 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:34.116 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.116 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.375 nvme0n1 00:26:34.375 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.375 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.375 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.375 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.375 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.375 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.375 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.375 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.375 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.375 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.375 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.375 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:34.375 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:26:34.375 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.375 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:34.375 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:34.375 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:34.375 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTg0YTViODEwZTEzM2M4NmFjYTRmZDY2OTAxNzkyOTHMnJ+z: 00:26:34.375 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:YzkwYWI2NzJjODcxYWU3ODVmYmQ1YTAyOTZlMzdjNjFiCnEl: 00:26:34.375 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:34.375 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:34.375 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTg0YTViODEwZTEzM2M4NmFjYTRmZDY2OTAxNzkyOTHMnJ+z: 00:26:34.375 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzkwYWI2NzJjODcxYWU3ODVmYmQ1YTAyOTZlMzdjNjFiCnEl: ]] 00:26:34.375 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzkwYWI2NzJjODcxYWU3ODVmYmQ1YTAyOTZlMzdjNjFiCnEl: 00:26:34.375 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:26:34.375 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.375 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:34.375 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:34.375 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:34.375 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.375 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:34.376 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.376 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.376 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.376 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.376 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:34.376 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:34.376 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:34.376 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.376 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.376 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:34.376 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.376 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:34.376 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:34.376 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:34.376 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:34.376 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.376 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.944 nvme0n1 00:26:34.944 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.944 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.944 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.944 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.944 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.944 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.944 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.944 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.944 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.944 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.944 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.944 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:34.944 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:26:34.944 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.944 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:34.944 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:34.944 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:34.944 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Yjk4ZjlhZjU2NWIwZGM3ZTM5ZGVmN2Q4ZmZhYmNmYzc0ZmRjZjAwMzc4YzVlYmU56/7FFw==: 00:26:34.944 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWU5NWM1NjNiODMzNDI4NmU2ZDkwNDBmNzUyZGU3OTJgbYkK: 00:26:34.944 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:34.944 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:34.944 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Yjk4ZjlhZjU2NWIwZGM3ZTM5ZGVmN2Q4ZmZhYmNmYzc0ZmRjZjAwMzc4YzVlYmU56/7FFw==: 00:26:34.944 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWU5NWM1NjNiODMzNDI4NmU2ZDkwNDBmNzUyZGU3OTJgbYkK: ]] 00:26:34.944 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWU5NWM1NjNiODMzNDI4NmU2ZDkwNDBmNzUyZGU3OTJgbYkK: 00:26:34.944 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:26:34.944 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.944 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:34.944 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:34.944 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:34.944 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.944 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:34.944 10:04:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.944 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.944 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.944 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.944 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:34.944 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:34.944 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:34.944 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.944 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.944 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:34.944 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.944 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:34.944 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:34.944 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:34.944 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:34.944 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.944 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.203 nvme0n1 00:26:35.203 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.203 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.203 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:35.203 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.203 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.203 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.203 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.203 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:35.203 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.203 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.462 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.462 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:35.462 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:26:35.462 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.462 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 
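
Each connect_authenticate iteration then exercises the host side over the SPDK RPC surface: bdev_nvme_set_options pins the initiator to the single digest/DH group under test, bdev_nvme_attach_controller connects to 10.0.0.1:4420 with the matching --dhchap-key (plus --dhchap-ctrlr-key when the round is bidirectional), and the round passes only if bdev_nvme_get_controllers reports nvme0 before the controller is detached again. A minimal sketch of one such iteration, assuming SPDK's scripts/rpc.py as the rpc_cmd transport and key names (key0/ckey0) already registered with the keyring earlier in the run:

  digest=sha256 dhgroup=ffdhe6144 keyid=0
  rpc=scripts/rpc.py   # assumed stand-in for the rpc_cmd wrapper in the trace

  # Restrict the initiator to the digest/DH group under test.
  "$rpc" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # Attach over TCP with the matching DH-HMAC-CHAP key material.
  "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

  # Authentication succeeded only if the controller actually appeared; clean up.
  [[ $("$rpc" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  "$rpc" bdev_nvme_detach_controller nvme0
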
00:26:35.462 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:35.462 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:35.462 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTVmYjQ5OWMzMmM0ODZmMDZkMmFjMjc2N2M2ZmE3MjRlMDIyZGE1NzEwMGVkMDQ4Y2IyYzI1ZjE3MjEzMzBkNq9p58M=: 00:26:35.462 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:35.462 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:35.462 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:35.462 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTVmYjQ5OWMzMmM0ODZmMDZkMmFjMjc2N2M2ZmE3MjRlMDIyZGE1NzEwMGVkMDQ4Y2IyYzI1ZjE3MjEzMzBkNq9p58M=: 00:26:35.462 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:35.462 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:26:35.462 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:35.462 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:35.462 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:35.462 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:35.462 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:35.462 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:35.462 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.462 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.462 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.462 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:35.462 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:35.462 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:35.462 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:35.462 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.462 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.462 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:35.462 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:35.462 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:35.462 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:35.462 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:35.462 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:35.462 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:35.462 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.721 nvme0n1 00:26:35.721 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.721 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.721 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:35.721 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.721 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.721 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.721 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.721 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:35.721 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.721 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.721 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.721 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:35.721 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:35.721 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:26:35.721 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.721 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:35.721 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:35.721 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:35.721 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjAzYTdkYWI0ZGUwZWE4ZWJlZWU5NTgzOTIyOWZmMGLAw+z/: 00:26:35.721 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc3MTFjNDkyMWM5ZjA5MTk4ZmY3NGUwN2Q3OWU3Y2FjMDNkNDhiMGNmMWFkODBjOWNkNWU0OGIxYTE0NjFlYYDvmzQ=: 00:26:35.721 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:35.721 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:35.721 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjAzYTdkYWI0ZGUwZWE4ZWJlZWU5NTgzOTIyOWZmMGLAw+z/: 00:26:35.721 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc3MTFjNDkyMWM5ZjA5MTk4ZmY3NGUwN2Q3OWU3Y2FjMDNkNDhiMGNmMWFkODBjOWNkNWU0OGIxYTE0NjFlYYDvmzQ=: ]] 00:26:35.721 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc3MTFjNDkyMWM5ZjA5MTk4ZmY3NGUwN2Q3OWU3Y2FjMDNkNDhiMGNmMWFkODBjOWNkNWU0OGIxYTE0NjFlYYDvmzQ=: 00:26:35.721 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:26:35.721 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:35.721 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:35.721 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:35.721 10:04:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:35.721 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:35.721 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:35.721 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.721 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.721 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.721 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:35.721 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:35.721 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:35.721 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:35.721 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.721 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.721 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:35.721 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:35.721 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:35.721 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:35.721 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:35.721 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:35.721 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.721 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.288 nvme0n1 00:26:36.288 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.288 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.288 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:36.288 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.288 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.288 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.547 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.547 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:36.547 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.547 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.547 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.547 10:04:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:36.547 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:26:36.547 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:36.547 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:36.547 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:36.547 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:36.547 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2FmN2QwZTBkNzc4OWJiZTYyZTYyZDczNjNlZWI2OWMyODlkOGUwOWQxZjRkMWYzUNJlJQ==: 00:26:36.547 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTgxNGU0MWZjMmE0Yjg2NTZhMWQzNmI0OTIxYzc1OTM4ZGNlOTQyYWNmNzcwNmZlYlR1uQ==: 00:26:36.547 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:36.547 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:36.547 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2FmN2QwZTBkNzc4OWJiZTYyZTYyZDczNjNlZWI2OWMyODlkOGUwOWQxZjRkMWYzUNJlJQ==: 00:26:36.547 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTgxNGU0MWZjMmE0Yjg2NTZhMWQzNmI0OTIxYzc1OTM4ZGNlOTQyYWNmNzcwNmZlYlR1uQ==: ]] 00:26:36.547 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTgxNGU0MWZjMmE0Yjg2NTZhMWQzNmI0OTIxYzc1OTM4ZGNlOTQyYWNmNzcwNmZlYlR1uQ==: 00:26:36.547 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:26:36.547 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:36.547 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:36.547 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:36.547 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:36.547 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:36.547 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:36.547 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.547 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.547 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.547 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:36.547 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:36.547 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:36.547 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:36.547 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:36.547 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:36.547 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:36.547 10:04:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:36.547 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:36.547 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:36.547 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:36.547 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:36.547 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.547 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.114 nvme0n1 00:26:37.114 10:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.114 10:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:37.114 10:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:37.114 10:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.114 10:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.114 10:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.114 10:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:37.115 10:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:37.115 10:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.115 10:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.115 10:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.115 10:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:37.115 10:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:26:37.115 10:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:37.115 10:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:37.115 10:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:37.115 10:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:37.115 10:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTg0YTViODEwZTEzM2M4NmFjYTRmZDY2OTAxNzkyOTHMnJ+z: 00:26:37.115 10:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzkwYWI2NzJjODcxYWU3ODVmYmQ1YTAyOTZlMzdjNjFiCnEl: 00:26:37.115 10:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:37.115 10:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:37.115 10:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTg0YTViODEwZTEzM2M4NmFjYTRmZDY2OTAxNzkyOTHMnJ+z: 00:26:37.115 10:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzkwYWI2NzJjODcxYWU3ODVmYmQ1YTAyOTZlMzdjNjFiCnEl: ]] 00:26:37.115 10:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:YzkwYWI2NzJjODcxYWU3ODVmYmQ1YTAyOTZlMzdjNjFiCnEl: 00:26:37.115 10:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:26:37.115 10:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:37.115 10:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:37.115 10:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:37.115 10:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:37.115 10:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:37.115 10:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:37.115 10:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.115 10:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.115 10:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.115 10:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:37.115 10:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:37.115 10:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:37.115 10:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:37.115 10:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.115 10:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.115 10:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:37.115 10:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:37.115 10:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:37.115 10:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:37.115 10:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:37.115 10:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:37.115 10:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.115 10:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.683 nvme0n1 00:26:37.683 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.683 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:37.683 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:37.683 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.683 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.683 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.683 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:26:37.683 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:37.683 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.683 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.683 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.683 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:37.683 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:26:37.683 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:37.683 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:37.683 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:37.683 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:37.683 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Yjk4ZjlhZjU2NWIwZGM3ZTM5ZGVmN2Q4ZmZhYmNmYzc0ZmRjZjAwMzc4YzVlYmU56/7FFw==: 00:26:37.683 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWU5NWM1NjNiODMzNDI4NmU2ZDkwNDBmNzUyZGU3OTJgbYkK: 00:26:37.683 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:37.683 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:37.683 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Yjk4ZjlhZjU2NWIwZGM3ZTM5ZGVmN2Q4ZmZhYmNmYzc0ZmRjZjAwMzc4YzVlYmU56/7FFw==: 00:26:37.683 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWU5NWM1NjNiODMzNDI4NmU2ZDkwNDBmNzUyZGU3OTJgbYkK: ]] 00:26:37.683 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWU5NWM1NjNiODMzNDI4NmU2ZDkwNDBmNzUyZGU3OTJgbYkK: 00:26:37.683 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:26:37.683 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:37.683 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:37.683 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:37.683 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:37.683 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:37.683 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:37.683 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.683 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.683 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.683 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:37.683 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:37.683 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:37.683 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:26:37.683 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.683 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.683 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:37.683 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:37.683 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:37.683 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:37.683 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:37.683 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:37.683 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.683 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.251 nvme0n1 00:26:38.251 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.251 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:38.251 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:38.251 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.251 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.251 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.251 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:38.251 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:38.251 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.251 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.251 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.251 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:38.251 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:26:38.251 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:38.251 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:38.251 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:38.251 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:38.251 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTVmYjQ5OWMzMmM0ODZmMDZkMmFjMjc2N2M2ZmE3MjRlMDIyZGE1NzEwMGVkMDQ4Y2IyYzI1ZjE3MjEzMzBkNq9p58M=: 00:26:38.251 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:38.251 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:38.251 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:38.251 
10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTVmYjQ5OWMzMmM0ODZmMDZkMmFjMjc2N2M2ZmE3MjRlMDIyZGE1NzEwMGVkMDQ4Y2IyYzI1ZjE3MjEzMzBkNq9p58M=: 00:26:38.251 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:38.251 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:26:38.251 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:38.251 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:38.251 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:38.251 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:38.251 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:38.251 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:38.251 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.251 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.251 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.251 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:38.251 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:38.251 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:38.251 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:38.251 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:38.251 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:38.251 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:38.251 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:38.251 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:38.251 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:38.251 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:38.251 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:38.251 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.251 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.819 nvme0n1 00:26:38.819 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.819 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:38.819 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:38.819 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.819 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.078 
10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.078 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.078 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:39.078 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.078 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.078 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.078 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:39.078 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:39.078 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:39.078 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:26:39.078 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:39.078 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:39.078 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:39.078 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:39.078 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjAzYTdkYWI0ZGUwZWE4ZWJlZWU5NTgzOTIyOWZmMGLAw+z/: 00:26:39.078 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc3MTFjNDkyMWM5ZjA5MTk4ZmY3NGUwN2Q3OWU3Y2FjMDNkNDhiMGNmMWFkODBjOWNkNWU0OGIxYTE0NjFlYYDvmzQ=: 00:26:39.078 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:39.078 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:39.078 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjAzYTdkYWI0ZGUwZWE4ZWJlZWU5NTgzOTIyOWZmMGLAw+z/: 00:26:39.078 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc3MTFjNDkyMWM5ZjA5MTk4ZmY3NGUwN2Q3OWU3Y2FjMDNkNDhiMGNmMWFkODBjOWNkNWU0OGIxYTE0NjFlYYDvmzQ=: ]] 00:26:39.078 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc3MTFjNDkyMWM5ZjA5MTk4ZmY3NGUwN2Q3OWU3Y2FjMDNkNDhiMGNmMWFkODBjOWNkNWU0OGIxYTE0NjFlYYDvmzQ=: 00:26:39.078 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:26:39.078 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:39.078 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:39.078 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:39.078 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:39.078 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:39.078 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:39.078 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.078 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:26:39.078 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.078 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:39.078 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:39.078 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:39.078 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:39.078 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:39.078 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:39.078 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:39.078 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:39.078 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:39.078 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:39.078 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:39.078 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:39.078 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.078 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.078 nvme0n1 00:26:39.078 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.078 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:39.078 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:39.078 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.078 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.078 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.337 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.337 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:39.337 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.337 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.337 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.337 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:39.337 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:26:39.337 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:39.337 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:39.337 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:39.337 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
keyid=1 00:26:39.337 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2FmN2QwZTBkNzc4OWJiZTYyZTYyZDczNjNlZWI2OWMyODlkOGUwOWQxZjRkMWYzUNJlJQ==: 00:26:39.337 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTgxNGU0MWZjMmE0Yjg2NTZhMWQzNmI0OTIxYzc1OTM4ZGNlOTQyYWNmNzcwNmZlYlR1uQ==: 00:26:39.337 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:39.337 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:39.337 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2FmN2QwZTBkNzc4OWJiZTYyZTYyZDczNjNlZWI2OWMyODlkOGUwOWQxZjRkMWYzUNJlJQ==: 00:26:39.337 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTgxNGU0MWZjMmE0Yjg2NTZhMWQzNmI0OTIxYzc1OTM4ZGNlOTQyYWNmNzcwNmZlYlR1uQ==: ]] 00:26:39.337 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTgxNGU0MWZjMmE0Yjg2NTZhMWQzNmI0OTIxYzc1OTM4ZGNlOTQyYWNmNzcwNmZlYlR1uQ==: 00:26:39.337 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:26:39.337 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:39.337 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:39.337 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:39.338 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:39.338 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:39.338 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:39.338 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.338 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.338 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.338 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:39.338 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:39.338 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:39.338 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:39.338 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:39.338 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:39.338 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:39.338 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:39.338 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:39.338 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:39.338 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:39.338 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:39.338 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.338 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.338 nvme0n1 00:26:39.338 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.338 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:39.338 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:39.338 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.338 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.338 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.338 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.338 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:39.338 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.338 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.338 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.338 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:39.338 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:26:39.338 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:39.338 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:39.338 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:39.338 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:39.338 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTg0YTViODEwZTEzM2M4NmFjYTRmZDY2OTAxNzkyOTHMnJ+z: 00:26:39.338 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzkwYWI2NzJjODcxYWU3ODVmYmQ1YTAyOTZlMzdjNjFiCnEl: 00:26:39.338 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:39.600 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:39.600 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTg0YTViODEwZTEzM2M4NmFjYTRmZDY2OTAxNzkyOTHMnJ+z: 00:26:39.600 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzkwYWI2NzJjODcxYWU3ODVmYmQ1YTAyOTZlMzdjNjFiCnEl: ]] 00:26:39.600 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzkwYWI2NzJjODcxYWU3ODVmYmQ1YTAyOTZlMzdjNjFiCnEl: 00:26:39.600 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:26:39.600 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:39.600 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:39.600 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:39.600 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:39.600 10:04:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:39.600 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:39.600 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.600 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.600 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.600 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:39.600 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:39.600 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:39.600 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:39.600 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:39.600 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:39.600 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:39.600 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:39.600 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:39.600 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:39.600 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:39.600 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:39.600 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.600 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.600 nvme0n1 00:26:39.600 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.600 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:39.600 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:39.600 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.600 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.600 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.600 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.600 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:39.600 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.600 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.600 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.600 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:39.600 10:04:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:26:39.600 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:39.600 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:39.600 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:39.600 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:39.600 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Yjk4ZjlhZjU2NWIwZGM3ZTM5ZGVmN2Q4ZmZhYmNmYzc0ZmRjZjAwMzc4YzVlYmU56/7FFw==: 00:26:39.600 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWU5NWM1NjNiODMzNDI4NmU2ZDkwNDBmNzUyZGU3OTJgbYkK: 00:26:39.600 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:39.600 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:39.600 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Yjk4ZjlhZjU2NWIwZGM3ZTM5ZGVmN2Q4ZmZhYmNmYzc0ZmRjZjAwMzc4YzVlYmU56/7FFw==: 00:26:39.600 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWU5NWM1NjNiODMzNDI4NmU2ZDkwNDBmNzUyZGU3OTJgbYkK: ]] 00:26:39.600 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWU5NWM1NjNiODMzNDI4NmU2ZDkwNDBmNzUyZGU3OTJgbYkK: 00:26:39.600 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:26:39.600 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:39.600 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:39.600 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:39.601 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:39.601 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:39.601 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:39.601 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.601 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.601 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.601 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:39.601 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:39.601 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:39.601 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:39.601 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:39.601 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:39.601 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:39.601 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:39.601 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 
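The trace above and below is one pass of the suite's nested loop: for each DH group (ffdhe2048, ffdhe3072, ffdhe4096 in this excerpt) and each keyid 0-4, host/auth.sh programs the kernel nvmet target with the DHHC-1 secret, restricts the host to the matching digest/dhgroup pair, re-attaches with the paired key/ckey, checks the controller name, and detaches. Below is a minimal sketch of that loop reconstructed from the xtrace output; nvmet_auth_set_key and rpc_cmd are the suite's own helpers as traced, the key0..key4/ckey0..ckey3 names are taken from the command lines above, and the prior keyring setup and the configfs redirect targets are assumptions, since they fall outside this excerpt.

    #!/usr/bin/env bash
    # Sketch of the per-key DHCHAP loop traced here (host/auth.sh@101-104).
    # Assumptions: rpc_cmd and nvmet_auth_set_key come from the SPDK test
    # helpers sourced by this suite; the DHHC-1 secrets were installed as
    # key0..key4 / ckey0..ckey3 earlier in the run (not shown in this excerpt).
    set -e

    digest=sha384                      # this excerpt only exercises sha384
    keys=(key0 key1 key2 key3 key4)
    ckeys=(ckey0 ckey1 ckey2 ckey3 "") # keyid 4 has no controller key here

    for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096; do
      for keyid in "${!keys[@]}"; do
        # Target side: program hmac(sha384), the DH group and the DHHC-1
        # secret for this keyid (the echo destinations are nvmet configfs
        # attributes, truncated out of the xtrace output).
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

        # Host side: restrict negotiation to this digest/dhgroup pair ...
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
            --dhchap-dhgroups "$dhgroup"

        # ... attach with the matching key pair; the ckey argument is
        # omitted when empty, mirroring the ${ckeys[keyid]:+...} expansion
        # visible at host/auth.sh@58.
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
            -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "${keys[keyid]}" "${ckey[@]}"

        # ... verify the authenticated controller came up, then tear down.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
      done
    done
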
00:26:39.601 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:39.601 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:39.601 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:39.601 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.601 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.954 nvme0n1 00:26:39.954 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.954 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:39.954 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:39.954 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.954 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.954 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.954 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.954 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:39.954 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.954 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.954 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.954 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:39.954 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:26:39.954 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:39.954 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:39.954 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:39.954 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:39.954 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTVmYjQ5OWMzMmM0ODZmMDZkMmFjMjc2N2M2ZmE3MjRlMDIyZGE1NzEwMGVkMDQ4Y2IyYzI1ZjE3MjEzMzBkNq9p58M=: 00:26:39.954 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:39.954 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:39.954 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:39.954 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTVmYjQ5OWMzMmM0ODZmMDZkMmFjMjc2N2M2ZmE3MjRlMDIyZGE1NzEwMGVkMDQ4Y2IyYzI1ZjE3MjEzMzBkNq9p58M=: 00:26:39.954 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:39.954 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:26:39.954 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:39.954 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
digest=sha384 00:26:39.954 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:39.954 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:39.954 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:39.954 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:39.954 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.954 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.954 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.954 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:39.954 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:39.954 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:39.954 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:39.954 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:39.954 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:39.954 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:39.954 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:39.955 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:39.955 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:39.955 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:39.955 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:39.955 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.955 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.234 nvme0n1 00:26:40.234 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.234 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:40.234 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:40.234 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.234 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.234 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.234 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:40.234 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:40.234 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.234 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.234 10:04:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.234 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:40.234 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:40.234 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:26:40.234 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:40.234 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:40.234 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:40.234 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:40.234 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjAzYTdkYWI0ZGUwZWE4ZWJlZWU5NTgzOTIyOWZmMGLAw+z/: 00:26:40.234 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc3MTFjNDkyMWM5ZjA5MTk4ZmY3NGUwN2Q3OWU3Y2FjMDNkNDhiMGNmMWFkODBjOWNkNWU0OGIxYTE0NjFlYYDvmzQ=: 00:26:40.234 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:40.234 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:40.234 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjAzYTdkYWI0ZGUwZWE4ZWJlZWU5NTgzOTIyOWZmMGLAw+z/: 00:26:40.234 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc3MTFjNDkyMWM5ZjA5MTk4ZmY3NGUwN2Q3OWU3Y2FjMDNkNDhiMGNmMWFkODBjOWNkNWU0OGIxYTE0NjFlYYDvmzQ=: ]] 00:26:40.234 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc3MTFjNDkyMWM5ZjA5MTk4ZmY3NGUwN2Q3OWU3Y2FjMDNkNDhiMGNmMWFkODBjOWNkNWU0OGIxYTE0NjFlYYDvmzQ=: 00:26:40.234 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:26:40.234 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:40.234 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:40.234 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:40.234 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:40.234 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:40.234 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:40.234 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.234 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.234 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.234 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:40.234 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:40.234 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:40.234 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:40.234 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:40.234 10:04:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:40.234 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:40.234 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:40.234 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:40.234 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:40.234 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:40.234 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:40.234 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.234 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.234 nvme0n1 00:26:40.234 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.234 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:40.234 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:40.234 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.234 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.234 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.493 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:40.493 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:40.493 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.493 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.493 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.493 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:40.493 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:26:40.493 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:40.493 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:40.493 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:40.493 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:40.493 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2FmN2QwZTBkNzc4OWJiZTYyZTYyZDczNjNlZWI2OWMyODlkOGUwOWQxZjRkMWYzUNJlJQ==: 00:26:40.493 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTgxNGU0MWZjMmE0Yjg2NTZhMWQzNmI0OTIxYzc1OTM4ZGNlOTQyYWNmNzcwNmZlYlR1uQ==: 00:26:40.493 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:40.493 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:40.493 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:M2FmN2QwZTBkNzc4OWJiZTYyZTYyZDczNjNlZWI2OWMyODlkOGUwOWQxZjRkMWYzUNJlJQ==: 00:26:40.493 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTgxNGU0MWZjMmE0Yjg2NTZhMWQzNmI0OTIxYzc1OTM4ZGNlOTQyYWNmNzcwNmZlYlR1uQ==: ]] 00:26:40.493 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTgxNGU0MWZjMmE0Yjg2NTZhMWQzNmI0OTIxYzc1OTM4ZGNlOTQyYWNmNzcwNmZlYlR1uQ==: 00:26:40.493 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:26:40.493 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:40.493 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:40.493 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:40.493 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:40.493 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:40.493 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:40.493 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.493 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.493 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.493 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:40.493 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:40.493 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:40.493 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:40.493 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:40.493 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:40.493 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:40.493 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:40.493 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:40.493 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:40.493 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:40.493 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:40.493 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.493 10:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.493 nvme0n1 00:26:40.493 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.493 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:40.493 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:40.493 10:04:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.493 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.494 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.494 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:40.750 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:40.750 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.750 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.751 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.751 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:40.751 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:26:40.751 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:40.751 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:40.751 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:40.751 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:40.751 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTg0YTViODEwZTEzM2M4NmFjYTRmZDY2OTAxNzkyOTHMnJ+z: 00:26:40.751 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzkwYWI2NzJjODcxYWU3ODVmYmQ1YTAyOTZlMzdjNjFiCnEl: 00:26:40.751 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:40.751 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:40.751 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTg0YTViODEwZTEzM2M4NmFjYTRmZDY2OTAxNzkyOTHMnJ+z: 00:26:40.751 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzkwYWI2NzJjODcxYWU3ODVmYmQ1YTAyOTZlMzdjNjFiCnEl: ]] 00:26:40.751 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzkwYWI2NzJjODcxYWU3ODVmYmQ1YTAyOTZlMzdjNjFiCnEl: 00:26:40.751 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:26:40.751 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:40.751 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:40.751 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:40.751 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:40.751 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:40.751 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:40.751 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.751 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.751 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.751 10:04:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:40.751 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:40.751 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:40.751 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:40.751 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:40.751 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:40.751 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:40.751 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:40.751 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:40.751 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:40.751 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:40.751 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:40.751 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.751 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.751 nvme0n1 00:26:40.751 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.751 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:40.751 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:40.751 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.751 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.751 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.009 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:41.009 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:41.009 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.009 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.009 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.009 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:41.009 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:26:41.009 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:41.009 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:41.009 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:41.009 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:41.009 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:Yjk4ZjlhZjU2NWIwZGM3ZTM5ZGVmN2Q4ZmZhYmNmYzc0ZmRjZjAwMzc4YzVlYmU56/7FFw==: 00:26:41.009 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWU5NWM1NjNiODMzNDI4NmU2ZDkwNDBmNzUyZGU3OTJgbYkK: 00:26:41.009 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:41.009 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:41.009 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Yjk4ZjlhZjU2NWIwZGM3ZTM5ZGVmN2Q4ZmZhYmNmYzc0ZmRjZjAwMzc4YzVlYmU56/7FFw==: 00:26:41.009 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWU5NWM1NjNiODMzNDI4NmU2ZDkwNDBmNzUyZGU3OTJgbYkK: ]] 00:26:41.009 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWU5NWM1NjNiODMzNDI4NmU2ZDkwNDBmNzUyZGU3OTJgbYkK: 00:26:41.009 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:26:41.010 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:41.010 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:41.010 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:41.010 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:41.010 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:41.010 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:41.010 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.010 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.010 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.010 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:41.010 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:41.010 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:41.010 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:41.010 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:41.010 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:41.010 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:41.010 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:41.010 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:41.010 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:41.010 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:41.010 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:41.010 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.010 
10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.010 nvme0n1 00:26:41.010 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.010 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:41.010 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:41.010 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.010 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.010 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.268 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:41.268 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:41.268 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.268 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.268 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.268 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:41.268 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:26:41.268 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:41.268 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:41.268 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:41.268 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:41.268 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTVmYjQ5OWMzMmM0ODZmMDZkMmFjMjc2N2M2ZmE3MjRlMDIyZGE1NzEwMGVkMDQ4Y2IyYzI1ZjE3MjEzMzBkNq9p58M=: 00:26:41.268 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:41.268 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:41.268 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:41.268 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTVmYjQ5OWMzMmM0ODZmMDZkMmFjMjc2N2M2ZmE3MjRlMDIyZGE1NzEwMGVkMDQ4Y2IyYzI1ZjE3MjEzMzBkNq9p58M=: 00:26:41.268 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:41.268 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:26:41.268 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:41.268 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:41.268 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:41.268 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:41.268 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:41.268 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:41.268 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.268 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.268 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.268 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:41.268 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:41.268 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:41.268 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:41.268 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:41.268 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:41.268 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:41.268 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:41.268 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:41.268 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:41.268 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:41.268 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:41.268 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.268 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.268 nvme0n1 00:26:41.268 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.268 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:41.268 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:41.268 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.268 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.268 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.527 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:41.527 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:41.527 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.527 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.527 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.527 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:41.527 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:41.527 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:26:41.527 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:41.527 10:04:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:41.527 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:41.527 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:41.527 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjAzYTdkYWI0ZGUwZWE4ZWJlZWU5NTgzOTIyOWZmMGLAw+z/: 00:26:41.527 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc3MTFjNDkyMWM5ZjA5MTk4ZmY3NGUwN2Q3OWU3Y2FjMDNkNDhiMGNmMWFkODBjOWNkNWU0OGIxYTE0NjFlYYDvmzQ=: 00:26:41.527 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:41.527 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:41.527 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjAzYTdkYWI0ZGUwZWE4ZWJlZWU5NTgzOTIyOWZmMGLAw+z/: 00:26:41.527 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc3MTFjNDkyMWM5ZjA5MTk4ZmY3NGUwN2Q3OWU3Y2FjMDNkNDhiMGNmMWFkODBjOWNkNWU0OGIxYTE0NjFlYYDvmzQ=: ]] 00:26:41.527 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc3MTFjNDkyMWM5ZjA5MTk4ZmY3NGUwN2Q3OWU3Y2FjMDNkNDhiMGNmMWFkODBjOWNkNWU0OGIxYTE0NjFlYYDvmzQ=: 00:26:41.527 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:26:41.527 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:41.527 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:41.527 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:41.527 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:41.527 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:41.527 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:41.528 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.528 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.528 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.528 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:41.528 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:41.528 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:41.528 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:41.528 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:41.528 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:41.528 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:41.528 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:41.528 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:41.528 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:41.528 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:41.528 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:41.528 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.528 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.785 nvme0n1 00:26:41.785 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.785 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:41.785 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:41.785 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.785 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.785 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.785 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:41.785 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:41.785 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.785 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.785 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.785 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:41.785 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:26:41.785 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:41.785 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:41.785 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:41.785 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:41.785 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2FmN2QwZTBkNzc4OWJiZTYyZTYyZDczNjNlZWI2OWMyODlkOGUwOWQxZjRkMWYzUNJlJQ==: 00:26:41.785 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTgxNGU0MWZjMmE0Yjg2NTZhMWQzNmI0OTIxYzc1OTM4ZGNlOTQyYWNmNzcwNmZlYlR1uQ==: 00:26:41.785 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:41.785 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:41.785 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2FmN2QwZTBkNzc4OWJiZTYyZTYyZDczNjNlZWI2OWMyODlkOGUwOWQxZjRkMWYzUNJlJQ==: 00:26:41.785 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTgxNGU0MWZjMmE0Yjg2NTZhMWQzNmI0OTIxYzc1OTM4ZGNlOTQyYWNmNzcwNmZlYlR1uQ==: ]] 00:26:41.785 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTgxNGU0MWZjMmE0Yjg2NTZhMWQzNmI0OTIxYzc1OTM4ZGNlOTQyYWNmNzcwNmZlYlR1uQ==: 00:26:41.785 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:26:41.785 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:41.785 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:41.785 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:41.785 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:41.785 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:41.785 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:41.785 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.785 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.785 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.785 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:41.785 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:41.785 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:41.785 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:41.785 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:41.785 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:41.785 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:41.785 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:41.785 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:41.785 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:41.785 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:41.785 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:41.785 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.785 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.043 nvme0n1 00:26:42.043 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.043 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.043 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:42.043 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.043 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.043 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.043 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.043 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.043 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:42.043 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.043 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.043 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:42.043 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:26:42.043 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:42.043 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:42.043 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:42.043 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:42.043 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTg0YTViODEwZTEzM2M4NmFjYTRmZDY2OTAxNzkyOTHMnJ+z: 00:26:42.043 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzkwYWI2NzJjODcxYWU3ODVmYmQ1YTAyOTZlMzdjNjFiCnEl: 00:26:42.043 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:42.043 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:42.043 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTg0YTViODEwZTEzM2M4NmFjYTRmZDY2OTAxNzkyOTHMnJ+z: 00:26:42.043 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzkwYWI2NzJjODcxYWU3ODVmYmQ1YTAyOTZlMzdjNjFiCnEl: ]] 00:26:42.043 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzkwYWI2NzJjODcxYWU3ODVmYmQ1YTAyOTZlMzdjNjFiCnEl: 00:26:42.043 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:26:42.043 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:42.043 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:42.043 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:42.043 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:42.043 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:42.043 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:42.043 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.043 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.043 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.043 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:42.043 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:42.043 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:42.043 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:42.043 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.043 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.043 
10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:42.043 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.043 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:42.043 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:42.043 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:42.043 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:42.043 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.043 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.302 nvme0n1 00:26:42.302 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.302 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.302 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:42.302 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.302 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.302 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.302 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.302 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.302 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.302 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.302 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.302 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:42.302 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:26:42.302 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:42.302 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:42.302 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:42.302 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:42.302 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Yjk4ZjlhZjU2NWIwZGM3ZTM5ZGVmN2Q4ZmZhYmNmYzc0ZmRjZjAwMzc4YzVlYmU56/7FFw==: 00:26:42.302 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWU5NWM1NjNiODMzNDI4NmU2ZDkwNDBmNzUyZGU3OTJgbYkK: 00:26:42.302 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:42.302 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:42.302 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Yjk4ZjlhZjU2NWIwZGM3ZTM5ZGVmN2Q4ZmZhYmNmYzc0ZmRjZjAwMzc4YzVlYmU56/7FFw==: 00:26:42.302 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MWU5NWM1NjNiODMzNDI4NmU2ZDkwNDBmNzUyZGU3OTJgbYkK: ]] 00:26:42.302 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWU5NWM1NjNiODMzNDI4NmU2ZDkwNDBmNzUyZGU3OTJgbYkK: 00:26:42.302 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:26:42.302 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:42.302 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:42.302 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:42.302 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:42.302 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:42.302 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:42.302 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.302 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.302 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.302 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:42.302 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:42.302 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:42.302 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:42.302 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.302 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.302 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:42.302 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.302 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:42.302 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:42.302 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:42.302 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:42.302 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.302 10:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.560 nvme0n1 00:26:42.560 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.560 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:42.560 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.560 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.560 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.560 10:04:52 
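NOTE: The block repeating throughout this stretch is one pass of connect_authenticate (host/auth.sh@55-65): restrict the host's DH-HMAC-CHAP options to the digest/dhgroup under test (@60), resolve the target address (@61), attach a controller with the key pair for the current keyid, verify the controller actually came up, and detach it again (@64-@65). A minimal sketch of that round-trip, reconstructed from the trace; rpc_cmd is SPDK's autotest wrapper and is assumed here to forward to scripts/rpc.py. The optional controller key is simplified to always-present (see the keyid=4 note further down):

    # one authenticated attach/verify/detach cycle (sketch reconstructed from the trace)
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # only allow the digest/dhgroup pair under test on the host side (@60)
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # attach with the host key and (here) the controller key for this keyid (@61)
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"
        # authentication succeeded iff the controller is actually registered (@64)
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0   # @65: tear down for the next keyid
    }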
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.560 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.560 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.560 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.560 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.560 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.819 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:42.819 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:26:42.819 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:42.819 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:42.819 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:42.819 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:42.819 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTVmYjQ5OWMzMmM0ODZmMDZkMmFjMjc2N2M2ZmE3MjRlMDIyZGE1NzEwMGVkMDQ4Y2IyYzI1ZjE3MjEzMzBkNq9p58M=: 00:26:42.819 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:42.819 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:42.819 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:42.819 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTVmYjQ5OWMzMmM0ODZmMDZkMmFjMjc2N2M2ZmE3MjRlMDIyZGE1NzEwMGVkMDQ4Y2IyYzI1ZjE3MjEzMzBkNq9p58M=: 00:26:42.819 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:42.819 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:26:42.819 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:42.819 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:42.819 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:42.819 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:42.819 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:42.819 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:42.819 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.819 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.819 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.819 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:42.819 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:42.819 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:42.819 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:42.819 10:04:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.819 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.819 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:42.819 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.819 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:42.819 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:42.819 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:42.819 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:42.819 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.819 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.078 nvme0n1 00:26:43.078 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.078 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:43.078 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:43.079 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.079 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.079 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.079 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:43.079 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:43.079 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.079 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.079 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.079 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:43.079 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:43.079 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:26:43.079 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:43.079 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:43.079 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:43.079 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:43.079 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjAzYTdkYWI0ZGUwZWE4ZWJlZWU5NTgzOTIyOWZmMGLAw+z/: 00:26:43.079 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc3MTFjNDkyMWM5ZjA5MTk4ZmY3NGUwN2Q3OWU3Y2FjMDNkNDhiMGNmMWFkODBjOWNkNWU0OGIxYTE0NjFlYYDvmzQ=: 00:26:43.079 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 
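NOTE: nvmet_auth_set_key (host/auth.sh@42-51) is the target-side half: before each connect it installs the matching credentials for the host NQN in the kernel nvmet target. The trace only shows the four echoes (@48-@51: HMAC name, DH group, host key, and, when present, controller key), not where they are redirected; a sketch assuming they land in the usual nvmet configfs host attributes, using the ffdhe6144 keyid=0 values visible just above:

    # hypothetical destinations -- the redirection targets are not visible in this trace
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha384)' > "$host/dhchap_hash"      # @48
    echo ffdhe6144      > "$host/dhchap_dhgroup"   # @49
    echo "DHHC-1:00:MjAzYTdkYWI0ZGUwZWE4ZWJlZWU5NTgzOTIyOWZmMGLAw+z/:" > "$host/dhchap_key"  # @50
    echo "DHHC-1:03:Mjc3MTFjNDkyMWM5ZjA5MTk4ZmY3NGUwN2Q3OWU3Y2FjMDNkNDhiMGNmMWFkODBjOWNkNWU0OGIxYTE0NjFlYYDvmzQ=:" > "$host/dhchap_ctrl_key"  # @51, enables bidirectional auth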
00:26:43.079 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:43.079 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjAzYTdkYWI0ZGUwZWE4ZWJlZWU5NTgzOTIyOWZmMGLAw+z/: 00:26:43.079 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc3MTFjNDkyMWM5ZjA5MTk4ZmY3NGUwN2Q3OWU3Y2FjMDNkNDhiMGNmMWFkODBjOWNkNWU0OGIxYTE0NjFlYYDvmzQ=: ]] 00:26:43.079 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc3MTFjNDkyMWM5ZjA5MTk4ZmY3NGUwN2Q3OWU3Y2FjMDNkNDhiMGNmMWFkODBjOWNkNWU0OGIxYTE0NjFlYYDvmzQ=: 00:26:43.079 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:26:43.079 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:43.079 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:43.079 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:43.079 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:43.079 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:43.079 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:43.079 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.079 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.079 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.079 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:43.079 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:43.079 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:43.079 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:43.079 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:43.079 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:43.079 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:43.079 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:43.079 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:43.079 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:43.079 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:43.079 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:43.079 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.079 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.338 nvme0n1 00:26:43.338 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.338 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:43.338 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:43.338 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.338 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.338 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.338 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:43.338 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:43.338 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.338 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.597 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.597 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:43.597 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:26:43.597 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:43.597 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:43.597 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:43.597 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:43.597 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2FmN2QwZTBkNzc4OWJiZTYyZTYyZDczNjNlZWI2OWMyODlkOGUwOWQxZjRkMWYzUNJlJQ==: 00:26:43.597 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTgxNGU0MWZjMmE0Yjg2NTZhMWQzNmI0OTIxYzc1OTM4ZGNlOTQyYWNmNzcwNmZlYlR1uQ==: 00:26:43.597 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:43.597 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:43.597 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2FmN2QwZTBkNzc4OWJiZTYyZTYyZDczNjNlZWI2OWMyODlkOGUwOWQxZjRkMWYzUNJlJQ==: 00:26:43.597 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTgxNGU0MWZjMmE0Yjg2NTZhMWQzNmI0OTIxYzc1OTM4ZGNlOTQyYWNmNzcwNmZlYlR1uQ==: ]] 00:26:43.597 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTgxNGU0MWZjMmE0Yjg2NTZhMWQzNmI0OTIxYzc1OTM4ZGNlOTQyYWNmNzcwNmZlYlR1uQ==: 00:26:43.597 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:26:43.597 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:43.597 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:43.597 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:43.597 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:43.597 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:43.597 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:43.597 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.597 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.597 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.597 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:43.597 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:43.597 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:43.597 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:43.597 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:43.597 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:43.597 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:43.597 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:43.597 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:43.597 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:43.597 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:43.597 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:43.597 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.597 10:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.856 nvme0n1 00:26:43.856 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.856 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:43.856 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:43.856 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.856 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.856 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.856 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:43.856 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:43.856 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.856 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.856 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.856 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:43.856 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:26:43.856 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:43.856 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:43.856 10:04:53 
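NOTE: get_main_ns_ip (nvmf/common.sh@769-783), called before every attach, resolves which address the initiator should dial: an associative array maps each transport to the environment variable holding its address (rdma -> NVMF_FIRST_TARGET_IP, tcp -> NVMF_INITIATOR_IP), and for this tcp run it prints 10.0.0.1. A condensed sketch of the logic the trace walks through; the transport variable name is an assumption, the trace only shows the literal tcp:

    get_main_ns_ip() {                       # condensed sketch of nvmf/common.sh@769-783
        local ip
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP      # @772
            [tcp]=NVMF_INITIATOR_IP          # @773
        )
        [[ -z $TEST_TRANSPORT ]] && return 1                 # @775 guard, transport set here
        ip=${ip_candidates[$TEST_TRANSPORT]}                 # @776: ip=NVMF_INITIATOR_IP
        ip=${!ip}                            # indirect expansion: variable name -> address
        [[ -z $ip ]] && return 1             # @778 guards against an unset address
        echo "$ip"                           # @783 -> 10.0.0.1
    }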
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:43.856 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:43.856 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTg0YTViODEwZTEzM2M4NmFjYTRmZDY2OTAxNzkyOTHMnJ+z: 00:26:43.856 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzkwYWI2NzJjODcxYWU3ODVmYmQ1YTAyOTZlMzdjNjFiCnEl: 00:26:43.856 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:43.856 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:43.856 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTg0YTViODEwZTEzM2M4NmFjYTRmZDY2OTAxNzkyOTHMnJ+z: 00:26:43.856 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzkwYWI2NzJjODcxYWU3ODVmYmQ1YTAyOTZlMzdjNjFiCnEl: ]] 00:26:43.856 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzkwYWI2NzJjODcxYWU3ODVmYmQ1YTAyOTZlMzdjNjFiCnEl: 00:26:43.856 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:26:43.856 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:43.856 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:43.856 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:43.856 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:43.856 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:43.856 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:43.856 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.856 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.856 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.856 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:43.856 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:43.856 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:43.856 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:43.856 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:43.856 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:43.856 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:43.856 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:43.856 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:43.856 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:43.856 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:43.856 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 
-n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:43.856 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.856 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.423 nvme0n1 00:26:44.423 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.423 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.423 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:44.423 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.423 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.423 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.423 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.424 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.424 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.424 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.424 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.424 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:44.424 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:26:44.424 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:44.424 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:44.424 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:44.424 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:44.424 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Yjk4ZjlhZjU2NWIwZGM3ZTM5ZGVmN2Q4ZmZhYmNmYzc0ZmRjZjAwMzc4YzVlYmU56/7FFw==: 00:26:44.424 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWU5NWM1NjNiODMzNDI4NmU2ZDkwNDBmNzUyZGU3OTJgbYkK: 00:26:44.424 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:44.424 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:44.424 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Yjk4ZjlhZjU2NWIwZGM3ZTM5ZGVmN2Q4ZmZhYmNmYzc0ZmRjZjAwMzc4YzVlYmU56/7FFw==: 00:26:44.424 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWU5NWM1NjNiODMzNDI4NmU2ZDkwNDBmNzUyZGU3OTJgbYkK: ]] 00:26:44.424 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWU5NWM1NjNiODMzNDI4NmU2ZDkwNDBmNzUyZGU3OTJgbYkK: 00:26:44.424 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:26:44.424 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:44.424 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:44.424 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:44.424 10:04:53 
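NOTE: Every rpc_cmd above is bracketed by common/autotest_common.sh@563 xtrace_disable plus @10 set +x on the way in, and an @591 [[ 0 == 0 ]] check on the way out. This is SPDK's trace-quieting pattern: shell tracing is switched off while the RPC helper runs so its internals do not flood the log, and the @591 guard presumably compares bookkeeping recorded at disable time against the current state before re-enabling tracing. A simplified guess at the helpers; the real bodies are not shown in this log:

    xtrace_disable() {                  # autotest_common.sh@563 (sketch, not SPDK's exact code)
        XTRACE_DISABLED=$BASH_SUBSHELL  # remember which subshell turned tracing off
        set +x                          # @10
    }
    xtrace_restore() {
        # @591-style guard: only restore in the shell that disabled tracing
        [[ $XTRACE_DISABLED == "$BASH_SUBSHELL" ]] && set -x
    }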
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:44.424 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:44.424 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:44.424 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.424 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.424 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.424 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:44.424 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:44.424 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:44.424 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:44.424 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.424 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.424 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:44.424 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.424 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:44.424 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:44.424 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:44.424 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:44.424 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.424 10:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.683 nvme0n1 00:26:44.683 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.683 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.683 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:44.683 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.683 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.683 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.683 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.683 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.683 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.683 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.683 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.683 10:04:54 
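NOTE: The keyid=4 pass that starts below has no controller key: ckeys[4] is empty, so @46 logs ckey= and the @51 guard sees [[ -z '' ]]. The @58 expansion then yields an empty array, and bdev_nvme_attach_controller is invoked with --dhchap-key key4 only, i.e. one-way authentication (the host is challenged, the controller is not). The ${var:+word} parameter expansion is what makes the flag pair optional:

    # expands to the two extra words only when ckeys[keyid] is non-empty (@58)
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"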
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:44.683 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:26:44.683 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:44.683 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:44.683 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:44.683 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:44.683 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTVmYjQ5OWMzMmM0ODZmMDZkMmFjMjc2N2M2ZmE3MjRlMDIyZGE1NzEwMGVkMDQ4Y2IyYzI1ZjE3MjEzMzBkNq9p58M=: 00:26:44.683 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:44.683 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:44.683 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:44.683 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTVmYjQ5OWMzMmM0ODZmMDZkMmFjMjc2N2M2ZmE3MjRlMDIyZGE1NzEwMGVkMDQ4Y2IyYzI1ZjE3MjEzMzBkNq9p58M=: 00:26:44.683 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:44.683 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:26:44.683 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:44.683 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:44.683 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:44.683 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:44.683 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:44.683 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:44.683 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.683 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.683 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.942 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:44.942 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:44.942 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:44.942 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:44.942 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.942 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.942 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:44.942 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.942 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:44.942 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:44.942 10:04:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:44.942 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:44.942 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.942 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.201 nvme0n1 00:26:45.201 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.201 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.201 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:45.201 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.201 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.201 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.201 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.201 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.201 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.201 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.201 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.201 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:45.201 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:45.201 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:26:45.201 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.201 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:45.201 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:45.201 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:45.201 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjAzYTdkYWI0ZGUwZWE4ZWJlZWU5NTgzOTIyOWZmMGLAw+z/: 00:26:45.201 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc3MTFjNDkyMWM5ZjA5MTk4ZmY3NGUwN2Q3OWU3Y2FjMDNkNDhiMGNmMWFkODBjOWNkNWU0OGIxYTE0NjFlYYDvmzQ=: 00:26:45.201 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:45.201 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:45.201 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjAzYTdkYWI0ZGUwZWE4ZWJlZWU5NTgzOTIyOWZmMGLAw+z/: 00:26:45.201 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc3MTFjNDkyMWM5ZjA5MTk4ZmY3NGUwN2Q3OWU3Y2FjMDNkNDhiMGNmMWFkODBjOWNkNWU0OGIxYTE0NjFlYYDvmzQ=: ]] 00:26:45.201 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc3MTFjNDkyMWM5ZjA5MTk4ZmY3NGUwN2Q3OWU3Y2FjMDNkNDhiMGNmMWFkODBjOWNkNWU0OGIxYTE0NjFlYYDvmzQ=: 00:26:45.202 10:04:54 
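NOTE: The secrets echoed above use the NVMe DH-HMAC-CHAP shared-secret representation DHHC-1:<hh>:<base64>:, where, as produced by nvme-cli's gen-dhchap-key, the two-digit field identifies the hash used to transform the secret in transit (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload is the raw secret followed by its CRC-32. The differing key lengths in this run line up with 32-, 48- and 64-byte secrets. A quick length check on the :03: controller key echoed just above:

    key='DHHC-1:03:Mjc3MTFjNDkyMWM5ZjA5MTk4ZmY3NGUwN2Q3OWU3Y2FjMDNkNDhiMGNmMWFkODBjOWNkNWU0OGIxYTE0NjFlYYDvmzQ=:'
    b64=${key#DHHC-1:03:}; b64=${b64%:}
    echo -n "$b64" | base64 -d | wc -c   # 68 bytes = 64-byte secret + 4-byte CRC-32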
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:26:45.202 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:45.202 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:45.202 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:45.202 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:45.202 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:45.202 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:45.202 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.202 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.202 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.202 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:45.202 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:45.202 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:45.202 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:45.202 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.202 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.202 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:45.202 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.202 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:45.202 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:45.202 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:45.202 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:45.202 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.202 10:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.770 nvme0n1 00:26:45.770 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.770 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.770 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:45.770 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.770 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.770 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.770 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.770 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.770 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.770 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.770 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.770 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:45.770 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:26:45.770 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.770 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:45.770 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:45.770 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:46.030 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2FmN2QwZTBkNzc4OWJiZTYyZTYyZDczNjNlZWI2OWMyODlkOGUwOWQxZjRkMWYzUNJlJQ==: 00:26:46.030 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTgxNGU0MWZjMmE0Yjg2NTZhMWQzNmI0OTIxYzc1OTM4ZGNlOTQyYWNmNzcwNmZlYlR1uQ==: 00:26:46.030 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:46.030 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:46.030 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2FmN2QwZTBkNzc4OWJiZTYyZTYyZDczNjNlZWI2OWMyODlkOGUwOWQxZjRkMWYzUNJlJQ==: 00:26:46.030 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTgxNGU0MWZjMmE0Yjg2NTZhMWQzNmI0OTIxYzc1OTM4ZGNlOTQyYWNmNzcwNmZlYlR1uQ==: ]] 00:26:46.030 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTgxNGU0MWZjMmE0Yjg2NTZhMWQzNmI0OTIxYzc1OTM4ZGNlOTQyYWNmNzcwNmZlYlR1uQ==: 00:26:46.030 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:26:46.030 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:46.030 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:46.030 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:46.030 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:46.030 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:46.030 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:46.030 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.030 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.030 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.030 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:46.030 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:46.030 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:46.030 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A 
ip_candidates 00:26:46.030 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.030 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.030 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:46.030 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.030 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:46.030 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:46.030 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:46.030 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:46.030 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.030 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.598 nvme0n1 00:26:46.598 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.598 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.598 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:46.598 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.598 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.598 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.598 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:46.598 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:46.598 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.598 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.598 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.598 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:46.598 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:26:46.598 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:46.598 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:46.598 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:46.598 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:46.598 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTg0YTViODEwZTEzM2M4NmFjYTRmZDY2OTAxNzkyOTHMnJ+z: 00:26:46.598 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzkwYWI2NzJjODcxYWU3ODVmYmQ1YTAyOTZlMzdjNjFiCnEl: 00:26:46.598 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:46.598 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 
00:26:46.598 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTg0YTViODEwZTEzM2M4NmFjYTRmZDY2OTAxNzkyOTHMnJ+z: 00:26:46.598 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzkwYWI2NzJjODcxYWU3ODVmYmQ1YTAyOTZlMzdjNjFiCnEl: ]] 00:26:46.598 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzkwYWI2NzJjODcxYWU3ODVmYmQ1YTAyOTZlMzdjNjFiCnEl: 00:26:46.598 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:26:46.598 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:46.598 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:46.598 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:46.598 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:46.598 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:46.598 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:46.598 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.598 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.598 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.598 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:46.598 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:46.598 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:46.598 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:46.598 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.598 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.598 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:46.598 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.598 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:46.598 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:46.598 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:46.598 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:46.598 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.598 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.167 nvme0n1 00:26:47.167 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.167 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:47.167 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:47.167 10:04:56 
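NOTE: By this point the run has cycled through ffdhe4096 and ffdhe6144 and is working through ffdhe8192, with the digest fixed to sha384 in this stretch of the log. The @101/@102 markers are the outer loops of host/auth.sh, and each iteration is the target-side key install (@103) followed by the host-side connect (@104). The skeleton those markers trace out:

    for dhgroup in "${dhgroups[@]}"; do        # @101: ffdhe4096, ffdhe6144, ffdhe8192 here
        for keyid in "${!keys[@]}"; do         # @102: keyids 0 through 4
            nvmet_auth_set_key   sha384 "$dhgroup" "$keyid"   # @103: program the target
            connect_authenticate sha384 "$dhgroup" "$keyid"   # @104: attach/verify/detach
        done
    done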
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.167 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.167 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.167 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.167 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:47.167 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.167 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.167 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.167 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:47.167 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:26:47.167 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:47.167 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:47.167 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:47.167 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:47.167 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Yjk4ZjlhZjU2NWIwZGM3ZTM5ZGVmN2Q4ZmZhYmNmYzc0ZmRjZjAwMzc4YzVlYmU56/7FFw==: 00:26:47.167 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWU5NWM1NjNiODMzNDI4NmU2ZDkwNDBmNzUyZGU3OTJgbYkK: 00:26:47.167 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:47.167 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:47.167 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Yjk4ZjlhZjU2NWIwZGM3ZTM5ZGVmN2Q4ZmZhYmNmYzc0ZmRjZjAwMzc4YzVlYmU56/7FFw==: 00:26:47.167 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWU5NWM1NjNiODMzNDI4NmU2ZDkwNDBmNzUyZGU3OTJgbYkK: ]] 00:26:47.167 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWU5NWM1NjNiODMzNDI4NmU2ZDkwNDBmNzUyZGU3OTJgbYkK: 00:26:47.167 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:26:47.167 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:47.167 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:47.168 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:47.168 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:47.168 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:47.168 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:47.168 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.168 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.168 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:26:47.168 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:47.168 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:47.168 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:47.168 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:47.168 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.168 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.168 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:47.168 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.168 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:47.168 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:47.168 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:47.168 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:47.168 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.168 10:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.736 nvme0n1 00:26:47.736 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.736 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:47.736 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:47.736 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.736 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.736 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.736 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.736 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:47.736 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.736 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.736 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.736 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:47.736 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:26:47.736 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:47.736 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:47.736 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:47.736 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:47.736 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YTVmYjQ5OWMzMmM0ODZmMDZkMmFjMjc2N2M2ZmE3MjRlMDIyZGE1NzEwMGVkMDQ4Y2IyYzI1ZjE3MjEzMzBkNq9p58M=: 00:26:47.736 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:47.736 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:47.736 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:47.736 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTVmYjQ5OWMzMmM0ODZmMDZkMmFjMjc2N2M2ZmE3MjRlMDIyZGE1NzEwMGVkMDQ4Y2IyYzI1ZjE3MjEzMzBkNq9p58M=: 00:26:47.736 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:47.736 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:26:47.736 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:47.736 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:47.736 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:47.736 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:47.736 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:47.736 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:47.736 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.736 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.736 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.995 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:47.995 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:47.995 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:47.995 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:47.995 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.995 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.995 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:47.995 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.995 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:47.995 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:47.995 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:47.995 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:47.995 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.995 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.564 nvme0n1 00:26:48.564 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.564 10:04:57 
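
[Annotation] The passes above all repeat one connect/verify/detach shape. As a minimal sketch (not the verbatim script), the per-key RPC sequence being exercised looks like the following; the rpc_cmd definition is an assumption that it forwards to SPDK's scripts/rpc.py as the surrounding test environment does, and keys key0..key4 plus controller keys ckey0..ckey4 are presumed registered earlier in the run:

    # Sketch of one connect_authenticate pass, per the log above.
    rpc_cmd() { scripts/rpc.py "$@"; }   # assumption: wrapper around SPDK's rpc.py

    digest=sha384 dhgroup=ffdhe8192 keyid=2

    # Pin the initiator to a single digest/DH-group pair so the DH-HMAC-CHAP
    # handshake must negotiate exactly this combination.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Attach over TCP with the host key and, when one exists, the controller key.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

    # Confirm the controller enumerated as nvme0, then detach for the next pass.
    rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    rpc_cmd bdev_nvme_detach_controller nvme0

Each digest/dhgroup sweep in this log is that sequence run once per keyid, 0 through 4.
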
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:48.564 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:48.564 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.564 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.564 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.564 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:48.564 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:48.564 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.564 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.564 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.564 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:48.564 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:48.564 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:48.564 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:26:48.564 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:48.564 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:48.564 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:48.564 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:48.564 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjAzYTdkYWI0ZGUwZWE4ZWJlZWU5NTgzOTIyOWZmMGLAw+z/: 00:26:48.564 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc3MTFjNDkyMWM5ZjA5MTk4ZmY3NGUwN2Q3OWU3Y2FjMDNkNDhiMGNmMWFkODBjOWNkNWU0OGIxYTE0NjFlYYDvmzQ=: 00:26:48.564 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:48.564 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:48.564 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjAzYTdkYWI0ZGUwZWE4ZWJlZWU5NTgzOTIyOWZmMGLAw+z/: 00:26:48.564 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc3MTFjNDkyMWM5ZjA5MTk4ZmY3NGUwN2Q3OWU3Y2FjMDNkNDhiMGNmMWFkODBjOWNkNWU0OGIxYTE0NjFlYYDvmzQ=: ]] 00:26:48.564 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc3MTFjNDkyMWM5ZjA5MTk4ZmY3NGUwN2Q3OWU3Y2FjMDNkNDhiMGNmMWFkODBjOWNkNWU0OGIxYTE0NjFlYYDvmzQ=: 00:26:48.564 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:26:48.564 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:48.564 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:48.564 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:48.564 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:48.564 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:48.564 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:48.564 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.564 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.564 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.564 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:48.564 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:48.564 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:48.564 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:48.564 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:48.564 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:48.564 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:48.564 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:48.564 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:48.564 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:48.564 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:48.564 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:48.564 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.564 10:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.564 nvme0n1 00:26:48.564 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.564 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:48.564 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:48.564 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.564 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.564 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.823 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:48.823 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:48.824 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.824 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.824 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.824 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:48.824 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe2048 1 00:26:48.824 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:48.824 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:48.824 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:48.824 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:48.824 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2FmN2QwZTBkNzc4OWJiZTYyZTYyZDczNjNlZWI2OWMyODlkOGUwOWQxZjRkMWYzUNJlJQ==: 00:26:48.824 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTgxNGU0MWZjMmE0Yjg2NTZhMWQzNmI0OTIxYzc1OTM4ZGNlOTQyYWNmNzcwNmZlYlR1uQ==: 00:26:48.824 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:48.824 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:48.824 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2FmN2QwZTBkNzc4OWJiZTYyZTYyZDczNjNlZWI2OWMyODlkOGUwOWQxZjRkMWYzUNJlJQ==: 00:26:48.824 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTgxNGU0MWZjMmE0Yjg2NTZhMWQzNmI0OTIxYzc1OTM4ZGNlOTQyYWNmNzcwNmZlYlR1uQ==: ]] 00:26:48.824 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTgxNGU0MWZjMmE0Yjg2NTZhMWQzNmI0OTIxYzc1OTM4ZGNlOTQyYWNmNzcwNmZlYlR1uQ==: 00:26:48.824 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:26:48.824 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:48.824 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:48.824 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:48.824 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:48.824 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:48.824 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:48.824 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.824 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.824 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.824 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:48.824 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:48.824 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:48.824 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:48.824 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:48.824 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:48.824 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:48.824 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:48.824 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:26:48.824 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:48.824 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:48.824 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:48.824 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.824 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.824 nvme0n1 00:26:48.824 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.824 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:48.824 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:48.824 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.824 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.824 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.824 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:48.824 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:48.824 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.824 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTg0YTViODEwZTEzM2M4NmFjYTRmZDY2OTAxNzkyOTHMnJ+z: 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzkwYWI2NzJjODcxYWU3ODVmYmQ1YTAyOTZlMzdjNjFiCnEl: 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTg0YTViODEwZTEzM2M4NmFjYTRmZDY2OTAxNzkyOTHMnJ+z: 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzkwYWI2NzJjODcxYWU3ODVmYmQ1YTAyOTZlMzdjNjFiCnEl: ]] 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzkwYWI2NzJjODcxYWU3ODVmYmQ1YTAyOTZlMzdjNjFiCnEl: 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 
2 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.084 nvme0n1 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:49.084 10:04:58 
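
[Annotation] Wherever keyid 4 appears in this log, ckey expands to the empty string because key 4 has no controller key, so the [[ -z '' ]] branch takes the no-ckey path. The ${ckeys[keyid]:+...} expansion seen at host/auth.sh@58 emits the --dhchap-ctrlr-key argument pair only when a controller key exists; a self-contained illustration with hypothetical placeholder values (index 2 below mirrors keyid 4):

    #!/usr/bin/env bash
    # ${arr[i]:+word} expands to word only if arr[i] is set and non-empty,
    # so the controller-key flag pair vanishes for keys without a ckey.
    ckeys=("DHHC-1:00:placeholderA:" "DHHC-1:00:placeholderB:" "")   # hypothetical values
    for keyid in 0 1 2; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${ckey[@]:-<no controller-key args>}"
    done
    # keyid=0 -> --dhchap-ctrlr-key ckey0
    # keyid=1 -> --dhchap-ctrlr-key ckey1
    # keyid=2 -> <no controller-key args>
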
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Yjk4ZjlhZjU2NWIwZGM3ZTM5ZGVmN2Q4ZmZhYmNmYzc0ZmRjZjAwMzc4YzVlYmU56/7FFw==: 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWU5NWM1NjNiODMzNDI4NmU2ZDkwNDBmNzUyZGU3OTJgbYkK: 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Yjk4ZjlhZjU2NWIwZGM3ZTM5ZGVmN2Q4ZmZhYmNmYzc0ZmRjZjAwMzc4YzVlYmU56/7FFw==: 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWU5NWM1NjNiODMzNDI4NmU2ZDkwNDBmNzUyZGU3OTJgbYkK: ]] 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWU5NWM1NjNiODMzNDI4NmU2ZDkwNDBmNzUyZGU3OTJgbYkK: 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.084 
10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.084 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.344 nvme0n1 00:26:49.344 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.344 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:49.344 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.344 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:49.344 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.344 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.344 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.344 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:49.344 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.344 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.344 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.344 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:49.344 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:26:49.344 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:49.344 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:49.344 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:49.344 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:49.344 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTVmYjQ5OWMzMmM0ODZmMDZkMmFjMjc2N2M2ZmE3MjRlMDIyZGE1NzEwMGVkMDQ4Y2IyYzI1ZjE3MjEzMzBkNq9p58M=: 00:26:49.344 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:49.344 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:49.344 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:49.344 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YTVmYjQ5OWMzMmM0ODZmMDZkMmFjMjc2N2M2ZmE3MjRlMDIyZGE1NzEwMGVkMDQ4Y2IyYzI1ZjE3MjEzMzBkNq9p58M=: 00:26:49.344 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:49.344 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:26:49.344 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:49.344 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:49.344 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:49.344 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:49.344 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:49.344 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:49.344 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.344 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.344 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.344 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:49.344 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:49.344 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:49.344 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:49.344 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.344 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.344 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:49.344 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:49.344 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:49.344 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:49.344 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:49.345 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:49.345 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.345 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.604 nvme0n1 00:26:49.604 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.604 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:49.604 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:49.604 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.604 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.604 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.604 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.604 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:49.604 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.604 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.604 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.604 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:49.604 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:49.604 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:26:49.604 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:49.604 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:49.604 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:49.604 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:49.604 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjAzYTdkYWI0ZGUwZWE4ZWJlZWU5NTgzOTIyOWZmMGLAw+z/: 00:26:49.604 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc3MTFjNDkyMWM5ZjA5MTk4ZmY3NGUwN2Q3OWU3Y2FjMDNkNDhiMGNmMWFkODBjOWNkNWU0OGIxYTE0NjFlYYDvmzQ=: 00:26:49.604 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:49.604 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:49.604 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjAzYTdkYWI0ZGUwZWE4ZWJlZWU5NTgzOTIyOWZmMGLAw+z/: 00:26:49.604 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc3MTFjNDkyMWM5ZjA5MTk4ZmY3NGUwN2Q3OWU3Y2FjMDNkNDhiMGNmMWFkODBjOWNkNWU0OGIxYTE0NjFlYYDvmzQ=: ]] 00:26:49.604 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc3MTFjNDkyMWM5ZjA5MTk4ZmY3NGUwN2Q3OWU3Y2FjMDNkNDhiMGNmMWFkODBjOWNkNWU0OGIxYTE0NjFlYYDvmzQ=: 00:26:49.604 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:26:49.604 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:49.604 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:49.604 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:49.604 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:49.604 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:49.604 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:49.604 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.604 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.604 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.604 10:04:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:49.604 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:49.604 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:49.604 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:49.604 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.604 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.604 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:49.604 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:49.604 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:49.604 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:49.604 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:49.604 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:49.604 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.604 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.863 nvme0n1 00:26:49.863 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.863 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:49.863 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:49.864 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.864 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.864 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.864 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.864 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:49.864 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.864 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.864 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.864 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:49.864 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:26:49.864 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:49.864 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:49.864 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:49.864 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:49.864 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:M2FmN2QwZTBkNzc4OWJiZTYyZTYyZDczNjNlZWI2OWMyODlkOGUwOWQxZjRkMWYzUNJlJQ==: 00:26:49.864 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTgxNGU0MWZjMmE0Yjg2NTZhMWQzNmI0OTIxYzc1OTM4ZGNlOTQyYWNmNzcwNmZlYlR1uQ==: 00:26:49.864 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:49.864 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:49.864 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2FmN2QwZTBkNzc4OWJiZTYyZTYyZDczNjNlZWI2OWMyODlkOGUwOWQxZjRkMWYzUNJlJQ==: 00:26:49.864 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTgxNGU0MWZjMmE0Yjg2NTZhMWQzNmI0OTIxYzc1OTM4ZGNlOTQyYWNmNzcwNmZlYlR1uQ==: ]] 00:26:49.864 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTgxNGU0MWZjMmE0Yjg2NTZhMWQzNmI0OTIxYzc1OTM4ZGNlOTQyYWNmNzcwNmZlYlR1uQ==: 00:26:49.864 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:26:49.864 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:49.864 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:49.864 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:49.864 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:49.864 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:49.864 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:49.864 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.864 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.864 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.864 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:49.864 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:49.864 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:49.864 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:49.864 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.864 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.864 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:49.864 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:49.864 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:49.864 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:49.864 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:49.864 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:49.864 10:04:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.864 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.123 nvme0n1 00:26:50.123 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.123 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:50.123 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:50.123 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.123 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.123 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.123 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:50.123 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:50.123 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.123 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.123 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.123 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:50.123 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:26:50.123 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:50.123 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:50.123 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:50.123 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:50.123 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTg0YTViODEwZTEzM2M4NmFjYTRmZDY2OTAxNzkyOTHMnJ+z: 00:26:50.123 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzkwYWI2NzJjODcxYWU3ODVmYmQ1YTAyOTZlMzdjNjFiCnEl: 00:26:50.123 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:50.123 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:50.123 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTg0YTViODEwZTEzM2M4NmFjYTRmZDY2OTAxNzkyOTHMnJ+z: 00:26:50.123 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzkwYWI2NzJjODcxYWU3ODVmYmQ1YTAyOTZlMzdjNjFiCnEl: ]] 00:26:50.123 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzkwYWI2NzJjODcxYWU3ODVmYmQ1YTAyOTZlMzdjNjFiCnEl: 00:26:50.123 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:26:50.123 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:50.123 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:50.123 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:50.123 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:50.123 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:50.123 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:50.123 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.123 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.123 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.123 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:50.123 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:50.123 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:50.123 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:50.123 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.123 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.123 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:50.123 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:50.123 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:50.123 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:50.123 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:50.123 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:50.123 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.123 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.383 nvme0n1 00:26:50.383 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.383 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:50.383 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:50.383 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.383 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.383 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.383 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:50.383 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:50.383 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.383 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.383 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.383 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:50.383 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe3072 3 00:26:50.383 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:50.383 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:50.383 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:50.383 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:50.383 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Yjk4ZjlhZjU2NWIwZGM3ZTM5ZGVmN2Q4ZmZhYmNmYzc0ZmRjZjAwMzc4YzVlYmU56/7FFw==: 00:26:50.383 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWU5NWM1NjNiODMzNDI4NmU2ZDkwNDBmNzUyZGU3OTJgbYkK: 00:26:50.383 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:50.383 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:50.383 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Yjk4ZjlhZjU2NWIwZGM3ZTM5ZGVmN2Q4ZmZhYmNmYzc0ZmRjZjAwMzc4YzVlYmU56/7FFw==: 00:26:50.383 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWU5NWM1NjNiODMzNDI4NmU2ZDkwNDBmNzUyZGU3OTJgbYkK: ]] 00:26:50.383 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWU5NWM1NjNiODMzNDI4NmU2ZDkwNDBmNzUyZGU3OTJgbYkK: 00:26:50.383 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:26:50.383 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:50.383 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:50.383 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:50.383 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:50.383 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:50.383 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:50.383 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.383 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.383 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.383 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:50.383 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:50.383 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:50.383 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:50.383 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.383 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.383 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:50.383 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:50.383 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:50.383 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:50.383 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:50.383 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:50.383 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.384 10:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.643 nvme0n1 00:26:50.643 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.643 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:50.643 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:50.643 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.643 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.643 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.643 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:50.643 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:50.643 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.643 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.643 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.643 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:50.643 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:26:50.643 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:50.643 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:50.643 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:50.643 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:50.643 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTVmYjQ5OWMzMmM0ODZmMDZkMmFjMjc2N2M2ZmE3MjRlMDIyZGE1NzEwMGVkMDQ4Y2IyYzI1ZjE3MjEzMzBkNq9p58M=: 00:26:50.643 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:50.643 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:50.643 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:50.643 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTVmYjQ5OWMzMmM0ODZmMDZkMmFjMjc2N2M2ZmE3MjRlMDIyZGE1NzEwMGVkMDQ4Y2IyYzI1ZjE3MjEzMzBkNq9p58M=: 00:26:50.643 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:50.643 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:26:50.643 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:50.643 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:50.643 10:05:00 
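
[Annotation] The get_main_ns_ip block repeated through this section picks the dial address by transport: NVMF_FIRST_TARGET_IP for rdma and NVMF_INITIATOR_IP for tcp, which resolves to 10.0.0.1 in this job. A condensed sketch of that selection; the TEST_TRANSPORT variable name and the indirect ${!ip} dereference are illustration-only assumptions, not lifted from the script:

    declare -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA jobs dial the first target IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP       # TCP jobs (this run) dial the initiator IP

    TEST_TRANSPORT=tcp                           # hypothetical stand-in for the job's transport
    NVMF_INITIATOR_IP=10.0.0.1                   # as used throughout this run
    ip=${ip_candidates[$TEST_TRANSPORT]}         # -> NVMF_INITIATOR_IP
    echo "${!ip}"                                # indirect expansion -> 10.0.0.1
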
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:50.643 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:50.643 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:50.643 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:50.643 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.643 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.643 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.643 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:50.643 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:50.643 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:50.643 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:50.643 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.643 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.643 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:50.643 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:50.643 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:50.643 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:50.643 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:50.643 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:50.643 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.643 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.902 nvme0n1 00:26:50.902 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.902 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:50.902 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:50.902 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.902 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.902 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.903 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:50.903 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:50.903 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.903 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.903 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.903 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:50.903 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:50.903 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:26:50.903 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:50.903 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:50.903 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:50.903 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:50.903 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjAzYTdkYWI0ZGUwZWE4ZWJlZWU5NTgzOTIyOWZmMGLAw+z/: 00:26:50.903 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc3MTFjNDkyMWM5ZjA5MTk4ZmY3NGUwN2Q3OWU3Y2FjMDNkNDhiMGNmMWFkODBjOWNkNWU0OGIxYTE0NjFlYYDvmzQ=: 00:26:50.903 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:50.903 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:50.903 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjAzYTdkYWI0ZGUwZWE4ZWJlZWU5NTgzOTIyOWZmMGLAw+z/: 00:26:50.903 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc3MTFjNDkyMWM5ZjA5MTk4ZmY3NGUwN2Q3OWU3Y2FjMDNkNDhiMGNmMWFkODBjOWNkNWU0OGIxYTE0NjFlYYDvmzQ=: ]] 00:26:50.903 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc3MTFjNDkyMWM5ZjA5MTk4ZmY3NGUwN2Q3OWU3Y2FjMDNkNDhiMGNmMWFkODBjOWNkNWU0OGIxYTE0NjFlYYDvmzQ=: 00:26:50.903 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:26:50.903 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:50.903 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:50.903 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:50.903 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:50.903 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:50.903 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:50.903 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.903 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.903 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.903 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:50.903 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:50.903 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:50.903 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:50.903 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.903 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
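Every rpc_cmd in this trace is bracketed by common/autotest_common.sh@563 (xtrace_disable, whose set +x at @10 is what keeps the RPC plumbing out of the log) and an @591 status check that reads [[ 0 == 0 ]] in a passing run. A rough reconstruction of that wrapper pattern, under the stated assumptions (the real helpers also save and restore the x-flag, and the rpc.py path is assumed):

xtrace_disable() { set +x; }   # assumption: the real helper also remembers prior xtrace state
rpc_cmd() {
    xtrace_disable
    local rc=0
    "$rootdir/scripts/rpc.py" "$@" || rc=$?   # assumed SPDK rpc.py location
    set -x
    [[ $rc == 0 ]]                            # traced at @591 as [[ 0 == 0 ]]
}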
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.903 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:50.903 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:50.903 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:50.903 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:50.903 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:50.903 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:50.903 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.903 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.162 nvme0n1 00:26:51.162 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.162 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.162 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:51.162 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.162 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.162 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.162 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.162 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:51.162 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.162 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.162 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.162 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:51.162 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:26:51.162 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:51.162 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:51.162 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:51.162 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:51.162 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2FmN2QwZTBkNzc4OWJiZTYyZTYyZDczNjNlZWI2OWMyODlkOGUwOWQxZjRkMWYzUNJlJQ==: 00:26:51.162 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTgxNGU0MWZjMmE0Yjg2NTZhMWQzNmI0OTIxYzc1OTM4ZGNlOTQyYWNmNzcwNmZlYlR1uQ==: 00:26:51.162 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:51.162 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:51.162 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:M2FmN2QwZTBkNzc4OWJiZTYyZTYyZDczNjNlZWI2OWMyODlkOGUwOWQxZjRkMWYzUNJlJQ==: 00:26:51.162 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTgxNGU0MWZjMmE0Yjg2NTZhMWQzNmI0OTIxYzc1OTM4ZGNlOTQyYWNmNzcwNmZlYlR1uQ==: ]] 00:26:51.162 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTgxNGU0MWZjMmE0Yjg2NTZhMWQzNmI0OTIxYzc1OTM4ZGNlOTQyYWNmNzcwNmZlYlR1uQ==: 00:26:51.162 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:26:51.162 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:51.162 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:51.162 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:51.162 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:51.162 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:51.162 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:51.162 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.162 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.162 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.162 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:51.162 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:51.162 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:51.162 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:51.162 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.162 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.162 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:51.162 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.162 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:51.162 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:51.162 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:51.162 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:51.162 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.162 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.422 nvme0n1 00:26:51.422 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.422 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.422 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:51.422 10:05:00 
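Host-side, each iteration runs the connect_authenticate flow visible at host/auth.sh@55-65: configure the allowed digest and DH group, attach an NVMe-oF TCP controller with the matching --dhchap-key (plus --dhchap-ctrlr-key whenever a controller key exists for that keyid), verify the controller came up as nvme0, then detach. A sketch reconstructed from the traced commands; the command substitution around get_main_ns_ip is an assumption, the rest is taken verbatim from the log:

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    # expands to nothing when ckeys[keyid] is empty (e.g. keyid 4 above)
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}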
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.422 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.422 10:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.681 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.681 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:51.681 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.681 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.681 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.681 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:51.681 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:26:51.681 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:51.681 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:51.681 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:51.681 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:51.681 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTg0YTViODEwZTEzM2M4NmFjYTRmZDY2OTAxNzkyOTHMnJ+z: 00:26:51.681 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzkwYWI2NzJjODcxYWU3ODVmYmQ1YTAyOTZlMzdjNjFiCnEl: 00:26:51.681 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:51.681 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:51.681 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTg0YTViODEwZTEzM2M4NmFjYTRmZDY2OTAxNzkyOTHMnJ+z: 00:26:51.681 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzkwYWI2NzJjODcxYWU3ODVmYmQ1YTAyOTZlMzdjNjFiCnEl: ]] 00:26:51.681 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzkwYWI2NzJjODcxYWU3ODVmYmQ1YTAyOTZlMzdjNjFiCnEl: 00:26:51.681 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:26:51.681 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:51.681 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:51.681 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:51.681 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:51.681 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:51.681 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:51.681 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.681 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.681 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.681 10:05:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:51.681 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:51.681 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:51.681 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:51.681 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.681 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.681 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:51.681 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.681 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:51.681 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:51.681 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:51.681 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:51.681 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.681 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.940 nvme0n1 00:26:51.940 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.940 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.940 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:51.940 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.940 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.940 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.940 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.940 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:51.940 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.940 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.940 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.940 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:51.940 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:26:51.940 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:51.940 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:51.940 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:51.940 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:51.940 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:Yjk4ZjlhZjU2NWIwZGM3ZTM5ZGVmN2Q4ZmZhYmNmYzc0ZmRjZjAwMzc4YzVlYmU56/7FFw==: 00:26:51.940 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWU5NWM1NjNiODMzNDI4NmU2ZDkwNDBmNzUyZGU3OTJgbYkK: 00:26:51.940 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:51.940 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:51.940 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Yjk4ZjlhZjU2NWIwZGM3ZTM5ZGVmN2Q4ZmZhYmNmYzc0ZmRjZjAwMzc4YzVlYmU56/7FFw==: 00:26:51.940 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWU5NWM1NjNiODMzNDI4NmU2ZDkwNDBmNzUyZGU3OTJgbYkK: ]] 00:26:51.940 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWU5NWM1NjNiODMzNDI4NmU2ZDkwNDBmNzUyZGU3OTJgbYkK: 00:26:51.941 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:26:51.941 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:51.941 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:51.941 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:51.941 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:51.941 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:51.941 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:51.941 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.941 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.941 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.941 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:51.941 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:51.941 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:51.941 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:51.941 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.941 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.941 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:51.941 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.941 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:51.941 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:51.941 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:51.941 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:51.941 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.941 
10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.200 nvme0n1 00:26:52.200 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.200 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.200 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:52.200 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.200 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.200 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.200 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.200 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:52.200 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.200 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.200 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.200 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:52.200 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:26:52.200 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:52.200 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:52.200 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:52.200 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:52.200 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTVmYjQ5OWMzMmM0ODZmMDZkMmFjMjc2N2M2ZmE3MjRlMDIyZGE1NzEwMGVkMDQ4Y2IyYzI1ZjE3MjEzMzBkNq9p58M=: 00:26:52.200 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:52.200 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:52.200 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:52.200 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTVmYjQ5OWMzMmM0ODZmMDZkMmFjMjc2N2M2ZmE3MjRlMDIyZGE1NzEwMGVkMDQ4Y2IyYzI1ZjE3MjEzMzBkNq9p58M=: 00:26:52.200 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:52.200 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:26:52.200 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:52.200 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:52.200 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:52.200 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:52.200 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:52.200 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:52.200 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
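Each secret in this trace uses the NVMe DH-HMAC-CHAP interchange format, DHHC-1:<t>:<base64>:, where <t> names the optional key transformation (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload is the raw secret followed by a 4-byte CRC32 of it. That layout can be checked against a key taken verbatim from this log; the lengths below are what the format implies, not something the test asserts:

key='DHHC-1:03:YTVmYjQ5OWMzMmM0ODZmMDZkMmFjMjc2N2M2ZmE3MjRlMDIyZGE1NzEwMGVkMDQ4Y2IyYzI1ZjE3MjEzMzBkNq9p58M=:'
payload=$(cut -d: -f3 <<< "$key")
echo -n "$payload" | base64 -d | wc -c   # 68 bytes = 64-byte secret + 4-byte CRC32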
common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.200 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.200 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.200 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:52.200 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:52.200 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:52.200 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:52.200 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.200 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.200 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:52.200 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.200 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:52.200 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:52.200 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:52.200 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:52.200 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.200 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.459 nvme0n1 00:26:52.459 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.459 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.459 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:52.459 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.459 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.459 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.459 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.459 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:52.459 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.459 10:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.459 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.459 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:52.459 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:52.459 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:26:52.459 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:52.459 10:05:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:52.459 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:52.459 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:52.459 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjAzYTdkYWI0ZGUwZWE4ZWJlZWU5NTgzOTIyOWZmMGLAw+z/: 00:26:52.459 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc3MTFjNDkyMWM5ZjA5MTk4ZmY3NGUwN2Q3OWU3Y2FjMDNkNDhiMGNmMWFkODBjOWNkNWU0OGIxYTE0NjFlYYDvmzQ=: 00:26:52.459 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:52.459 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:52.459 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjAzYTdkYWI0ZGUwZWE4ZWJlZWU5NTgzOTIyOWZmMGLAw+z/: 00:26:52.459 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc3MTFjNDkyMWM5ZjA5MTk4ZmY3NGUwN2Q3OWU3Y2FjMDNkNDhiMGNmMWFkODBjOWNkNWU0OGIxYTE0NjFlYYDvmzQ=: ]] 00:26:52.459 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc3MTFjNDkyMWM5ZjA5MTk4ZmY3NGUwN2Q3OWU3Y2FjMDNkNDhiMGNmMWFkODBjOWNkNWU0OGIxYTE0NjFlYYDvmzQ=: 00:26:52.459 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:26:52.459 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:52.459 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:52.459 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:52.459 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:52.459 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:52.459 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:52.459 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.459 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.459 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.459 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:52.459 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:52.459 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:52.459 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:52.459 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.459 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.459 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:52.459 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.459 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:52.718 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:52.718 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:52.718 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:52.718 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.718 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.977 nvme0n1 00:26:52.977 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.977 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.977 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:52.977 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.977 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.977 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.977 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.977 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:52.977 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.977 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.977 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.977 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:52.977 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:26:52.977 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:52.977 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:52.977 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:52.977 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:52.977 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2FmN2QwZTBkNzc4OWJiZTYyZTYyZDczNjNlZWI2OWMyODlkOGUwOWQxZjRkMWYzUNJlJQ==: 00:26:52.977 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTgxNGU0MWZjMmE0Yjg2NTZhMWQzNmI0OTIxYzc1OTM4ZGNlOTQyYWNmNzcwNmZlYlR1uQ==: 00:26:52.977 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:52.977 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:52.977 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2FmN2QwZTBkNzc4OWJiZTYyZTYyZDczNjNlZWI2OWMyODlkOGUwOWQxZjRkMWYzUNJlJQ==: 00:26:52.977 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTgxNGU0MWZjMmE0Yjg2NTZhMWQzNmI0OTIxYzc1OTM4ZGNlOTQyYWNmNzcwNmZlYlR1uQ==: ]] 00:26:52.977 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTgxNGU0MWZjMmE0Yjg2NTZhMWQzNmI0OTIxYzc1OTM4ZGNlOTQyYWNmNzcwNmZlYlR1uQ==: 00:26:52.977 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:26:52.977 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
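The nvmf/common.sh@769-783 lines interleaved above are get_main_ns_ip resolving which address the initiator should dial: an associative array maps the transport to the name of the variable holding the address, and indirect expansion then yields 10.0.0.1 for tcp. Reconstructed from the trace; the transport variable's name and the error handling on the unseen lines 777-782 are assumptions:

get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    [[ -z $TEST_TRANSPORT ]] && return 1                  # traced as: [[ -z tcp ]]
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}                  # traced as: ip=NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1                           # traced as: [[ -z 10.0.0.1 ]]
    echo "${!ip}"                                         # -> 10.0.0.1
}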
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:52.977 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:52.977 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:52.977 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:52.977 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:52.977 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:52.977 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.977 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.977 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.977 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:52.977 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:52.977 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:52.977 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:52.977 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.977 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.977 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:52.977 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.977 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:52.977 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:52.977 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:52.977 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:52.977 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.977 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.546 nvme0n1 00:26:53.546 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.546 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.546 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:53.546 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.546 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.546 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.546 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.546 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:53.546 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:53.546 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.546 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.546 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:53.546 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:26:53.546 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:53.546 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:53.546 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:53.546 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:53.546 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTg0YTViODEwZTEzM2M4NmFjYTRmZDY2OTAxNzkyOTHMnJ+z: 00:26:53.546 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzkwYWI2NzJjODcxYWU3ODVmYmQ1YTAyOTZlMzdjNjFiCnEl: 00:26:53.546 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:53.546 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:53.546 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTg0YTViODEwZTEzM2M4NmFjYTRmZDY2OTAxNzkyOTHMnJ+z: 00:26:53.546 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzkwYWI2NzJjODcxYWU3ODVmYmQ1YTAyOTZlMzdjNjFiCnEl: ]] 00:26:53.546 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzkwYWI2NzJjODcxYWU3ODVmYmQ1YTAyOTZlMzdjNjFiCnEl: 00:26:53.546 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:26:53.546 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:53.546 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:53.546 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:53.546 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:53.546 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:53.546 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:53.547 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.547 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.547 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.547 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:53.547 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:53.547 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:53.547 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:53.547 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.547 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.547 
10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:53.547 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.547 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:53.547 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:53.547 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:53.547 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:53.547 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.547 10:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.806 nvme0n1 00:26:53.806 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.806 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.806 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:53.806 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.806 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.806 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.806 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.806 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:53.806 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.806 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.065 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.065 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:54.065 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:26:54.065 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:54.065 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:54.065 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:54.065 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:54.065 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Yjk4ZjlhZjU2NWIwZGM3ZTM5ZGVmN2Q4ZmZhYmNmYzc0ZmRjZjAwMzc4YzVlYmU56/7FFw==: 00:26:54.065 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWU5NWM1NjNiODMzNDI4NmU2ZDkwNDBmNzUyZGU3OTJgbYkK: 00:26:54.065 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:54.065 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:54.065 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Yjk4ZjlhZjU2NWIwZGM3ZTM5ZGVmN2Q4ZmZhYmNmYzc0ZmRjZjAwMzc4YzVlYmU56/7FFw==: 00:26:54.065 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MWU5NWM1NjNiODMzNDI4NmU2ZDkwNDBmNzUyZGU3OTJgbYkK: ]] 00:26:54.065 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWU5NWM1NjNiODMzNDI4NmU2ZDkwNDBmNzUyZGU3OTJgbYkK: 00:26:54.066 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:26:54.066 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:54.066 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:54.066 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:54.066 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:54.066 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:54.066 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:54.066 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.066 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.066 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.066 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:54.066 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:54.066 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:54.066 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:54.066 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.066 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.066 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:54.066 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.066 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:54.066 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:54.066 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:54.066 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:54.066 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.066 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.325 nvme0n1 00:26:54.325 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.325 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.325 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:54.325 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.325 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.325 10:05:03 
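For reference, the same host-side authentication that this harness drives through SPDK's bdev layer can be exercised against a kernel initiator with nvme-cli. A hedged equivalent of one keyid-0 connect, using two keys copied verbatim from this log (assumes an nvme-cli recent enough to carry the DH-HMAC-CHAP flags; not a command this test runs):

nvme connect -t tcp -a 10.0.0.1 -s 4420 \
    -n nqn.2024-02.io.spdk:cnode0 -q nqn.2024-02.io.spdk:host0 \
    --dhchap-secret 'DHHC-1:00:MjAzYTdkYWI0ZGUwZWE4ZWJlZWU5NTgzOTIyOWZmMGLAw+z/:' \
    --dhchap-ctrl-secret 'DHHC-1:03:Mjc3MTFjNDkyMWM5ZjA5MTk4ZmY3NGUwN2Q3OWU3Y2FjMDNkNDhiMGNmMWFkODBjOWNkNWU0OGIxYTE0NjFlYYDvmzQ=:'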
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.325 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.325 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.325 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.325 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.325 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.325 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:54.325 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:26:54.325 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:54.325 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:54.325 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:54.325 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:54.325 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTVmYjQ5OWMzMmM0ODZmMDZkMmFjMjc2N2M2ZmE3MjRlMDIyZGE1NzEwMGVkMDQ4Y2IyYzI1ZjE3MjEzMzBkNq9p58M=: 00:26:54.325 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:54.325 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:54.325 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:54.325 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTVmYjQ5OWMzMmM0ODZmMDZkMmFjMjc2N2M2ZmE3MjRlMDIyZGE1NzEwMGVkMDQ4Y2IyYzI1ZjE3MjEzMzBkNq9p58M=: 00:26:54.325 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:54.325 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:26:54.325 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:54.325 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:54.325 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:54.325 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:54.325 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:54.325 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:54.325 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.325 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.325 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.325 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:54.325 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:54.325 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:54.325 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:54.325 10:05:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.325 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.326 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:54.326 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.326 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:54.326 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:54.326 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:54.326 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:54.326 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.326 10:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.893 nvme0n1 00:26:54.893 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.894 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.894 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:54.894 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.894 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.894 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.894 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.894 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.894 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.894 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.894 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.894 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:54.894 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:54.894 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:26:54.894 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:54.894 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:54.894 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:54.894 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:54.894 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjAzYTdkYWI0ZGUwZWE4ZWJlZWU5NTgzOTIyOWZmMGLAw+z/: 00:26:54.894 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc3MTFjNDkyMWM5ZjA5MTk4ZmY3NGUwN2Q3OWU3Y2FjMDNkNDhiMGNmMWFkODBjOWNkNWU0OGIxYTE0NjFlYYDvmzQ=: 00:26:54.894 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
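Tying the pieces together, host/auth.sh@101-104 (visible at each dhgroup boundary in this trace) is a nested sweep: for every DH group and every key index, program the target and then authenticate from the host. In sketch form; the outer digest assignment is implied by the sha512 arguments rather than traced here:

digest=sha512
for dhgroup in "${dhgroups[@]}"; do          # @101: ffdhe3072..ffdhe8192 in this part of the log
    for keyid in "${!keys[@]}"; do           # @102: key indices 0..4
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # @103: target side
        connect_authenticate "$digest" "$dhgroup" "$keyid"   # @104: host side
    done
done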
00:26:54.894 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:54.894 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjAzYTdkYWI0ZGUwZWE4ZWJlZWU5NTgzOTIyOWZmMGLAw+z/: 00:26:54.894 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc3MTFjNDkyMWM5ZjA5MTk4ZmY3NGUwN2Q3OWU3Y2FjMDNkNDhiMGNmMWFkODBjOWNkNWU0OGIxYTE0NjFlYYDvmzQ=: ]] 00:26:54.894 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc3MTFjNDkyMWM5ZjA5MTk4ZmY3NGUwN2Q3OWU3Y2FjMDNkNDhiMGNmMWFkODBjOWNkNWU0OGIxYTE0NjFlYYDvmzQ=: 00:26:54.894 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:26:54.894 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:54.894 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:54.894 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:54.894 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:54.894 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:54.894 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:54.894 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.894 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.894 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.894 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:54.894 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:54.894 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:54.894 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:54.894 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.894 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.894 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:54.894 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.894 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:54.894 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:54.894 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:54.894 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:54.894 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.894 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.462 nvme0n1 00:26:55.462 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.462 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.462 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:55.462 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.462 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.462 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.462 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.462 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.462 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.462 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.462 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.462 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:55.462 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:26:55.462 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:55.462 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:55.462 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:55.462 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:55.462 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2FmN2QwZTBkNzc4OWJiZTYyZTYyZDczNjNlZWI2OWMyODlkOGUwOWQxZjRkMWYzUNJlJQ==: 00:26:55.462 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTgxNGU0MWZjMmE0Yjg2NTZhMWQzNmI0OTIxYzc1OTM4ZGNlOTQyYWNmNzcwNmZlYlR1uQ==: 00:26:55.462 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:55.462 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:55.462 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2FmN2QwZTBkNzc4OWJiZTYyZTYyZDczNjNlZWI2OWMyODlkOGUwOWQxZjRkMWYzUNJlJQ==: 00:26:55.462 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTgxNGU0MWZjMmE0Yjg2NTZhMWQzNmI0OTIxYzc1OTM4ZGNlOTQyYWNmNzcwNmZlYlR1uQ==: ]] 00:26:55.462 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTgxNGU0MWZjMmE0Yjg2NTZhMWQzNmI0OTIxYzc1OTM4ZGNlOTQyYWNmNzcwNmZlYlR1uQ==: 00:26:55.462 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:26:55.462 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:55.462 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:55.462 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:55.462 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:55.462 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:55.462 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:55.462 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.462 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.462 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.462 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:55.462 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:55.462 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:55.462 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:55.462 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.462 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.462 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:55.463 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.463 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:55.463 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:55.463 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:55.463 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:55.463 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.463 10:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.031 nvme0n1 00:26:56.031 10:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.031 10:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.031 10:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.031 10:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.031 10:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.031 10:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.031 10:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.031 10:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.031 10:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.031 10:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.031 10:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.031 10:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:56.031 10:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:26:56.031 10:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.031 10:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:56.031 10:05:05 
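The DHHC-1:NN:...: strings in the trace follow the NVMe-oF DH-HMAC-CHAP shared-secret representation: a two-digit indicator of the secret size (01 is 32 bytes, 02 is 48, 03 is 64, 00 leaves it unconstrained), then base64 of the secret with, as I understand the format, a 4-byte CRC32 appended. The length part is easy to check against one of the secrets logged above using only coreutils:

    secret='DHHC-1:02:YTgxNGU0MWZjMmE0Yjg2NTZhMWQzNmI0OTIxYzc1OTM4ZGNlOTQyYWNmNzcwNmZlYlR1uQ==:'
    b64=${secret#DHHC-1:??:}            # drop the prefix and size indicator
    b64=${b64%:}                        # drop the trailing ':'
    echo -n "$b64" | base64 -d | wc -c  # 52 bytes = 48-byte secret + 4-byte checksum for an '02' secret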
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:56.031 10:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:56.031 10:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTg0YTViODEwZTEzM2M4NmFjYTRmZDY2OTAxNzkyOTHMnJ+z: 00:26:56.031 10:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzkwYWI2NzJjODcxYWU3ODVmYmQ1YTAyOTZlMzdjNjFiCnEl: 00:26:56.031 10:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:56.031 10:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:56.031 10:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTg0YTViODEwZTEzM2M4NmFjYTRmZDY2OTAxNzkyOTHMnJ+z: 00:26:56.031 10:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzkwYWI2NzJjODcxYWU3ODVmYmQ1YTAyOTZlMzdjNjFiCnEl: ]] 00:26:56.031 10:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzkwYWI2NzJjODcxYWU3ODVmYmQ1YTAyOTZlMzdjNjFiCnEl: 00:26:56.290 10:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:26:56.290 10:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.290 10:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:56.290 10:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:56.290 10:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:56.290 10:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.290 10:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:56.290 10:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.290 10:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.290 10:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.290 10:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.290 10:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:56.290 10:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:56.290 10:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:56.290 10:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.290 10:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.290 10:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:56.290 10:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.290 10:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:56.290 10:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:56.290 10:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:56.290 10:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 
-n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:56.290 10:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.290 10:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.859 nvme0n1 00:26:56.859 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.859 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.859 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.859 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.859 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.859 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.859 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.859 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.859 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.859 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.859 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.859 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:56.859 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:26:56.859 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.859 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:56.859 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:56.859 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:56.859 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Yjk4ZjlhZjU2NWIwZGM3ZTM5ZGVmN2Q4ZmZhYmNmYzc0ZmRjZjAwMzc4YzVlYmU56/7FFw==: 00:26:56.859 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWU5NWM1NjNiODMzNDI4NmU2ZDkwNDBmNzUyZGU3OTJgbYkK: 00:26:56.859 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:56.859 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:56.859 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Yjk4ZjlhZjU2NWIwZGM3ZTM5ZGVmN2Q4ZmZhYmNmYzc0ZmRjZjAwMzc4YzVlYmU56/7FFw==: 00:26:56.859 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWU5NWM1NjNiODMzNDI4NmU2ZDkwNDBmNzUyZGU3OTJgbYkK: ]] 00:26:56.859 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWU5NWM1NjNiODMzNDI4NmU2ZDkwNDBmNzUyZGU3OTJgbYkK: 00:26:56.859 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:26:56.859 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.859 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:56.859 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:56.859 10:05:06 
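get_main_ns_ip, traced repeatedly above, just resolves which environment variable holds the address to dial for the transport under test. A sketch consistent with the xtrace (TEST_TRANSPORT as the selector variable is an assumption about the unshown lines; in this run it is tcp and the answer is 10.0.0.1):

    get_main_ns_ip() {
      local ip
      local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
      ip=${ip_candidates[$TEST_TRANSPORT]}   # tcp -> NVMF_INITIATOR_IP
      [[ -z $ip || -z ${!ip} ]] && return 1  # bail out if transport or address is unset
      echo "${!ip}"                          # indirect expansion: 10.0.0.1 here
    }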
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:56.859 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.859 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:56.859 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.859 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.859 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.859 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.859 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:56.859 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:56.859 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:56.859 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.859 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.859 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:56.859 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.859 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:56.859 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:56.859 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:56.859 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:56.859 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.859 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.428 nvme0n1 00:26:57.428 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.428 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.428 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.428 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.428 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.428 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.428 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.428 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.428 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.428 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.428 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.428 10:05:06 
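Each connect_authenticate round above boils down to four RPCs on the initiator: restrict the allowed digest/dhgroup, attach with the key slot under test, confirm the controller came up, and detach. Reconstructed from the trace for the sha512/ffdhe8192/keyid=3 round; rpc.py stands in for the rpc_cmd wrapper, and key3/ckey3 are assumed to be keyring entries registered earlier in the script (e.g. via keyring_file_add_key):

    rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3
    [[ $(rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]  # authenticated attach succeeded
    rpc.py bdev_nvme_detach_controller nvme0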
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.428 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:26:57.428 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.428 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:57.428 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:57.428 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:57.428 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTVmYjQ5OWMzMmM0ODZmMDZkMmFjMjc2N2M2ZmE3MjRlMDIyZGE1NzEwMGVkMDQ4Y2IyYzI1ZjE3MjEzMzBkNq9p58M=: 00:26:57.428 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:57.428 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:57.428 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:57.428 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTVmYjQ5OWMzMmM0ODZmMDZkMmFjMjc2N2M2ZmE3MjRlMDIyZGE1NzEwMGVkMDQ4Y2IyYzI1ZjE3MjEzMzBkNq9p58M=: 00:26:57.428 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:57.428 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:26:57.428 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.428 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:57.428 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:57.428 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:57.428 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.428 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:57.428 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.429 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.429 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.429 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.429 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:57.429 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:57.429 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:57.429 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.429 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.429 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:57.429 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.429 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:57.429 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:57.429 10:05:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:57.429 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:57.429 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.429 10:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.998 nvme0n1 00:26:57.998 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.998 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.998 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.998 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.998 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.998 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.998 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.998 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.998 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.998 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.998 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.998 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:57.998 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.998 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:57.998 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:57.998 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:57.998 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2FmN2QwZTBkNzc4OWJiZTYyZTYyZDczNjNlZWI2OWMyODlkOGUwOWQxZjRkMWYzUNJlJQ==: 00:26:57.998 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTgxNGU0MWZjMmE0Yjg2NTZhMWQzNmI0OTIxYzc1OTM4ZGNlOTQyYWNmNzcwNmZlYlR1uQ==: 00:26:57.998 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:57.998 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:57.998 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2FmN2QwZTBkNzc4OWJiZTYyZTYyZDczNjNlZWI2OWMyODlkOGUwOWQxZjRkMWYzUNJlJQ==: 00:26:57.998 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTgxNGU0MWZjMmE0Yjg2NTZhMWQzNmI0OTIxYzc1OTM4ZGNlOTQyYWNmNzcwNmZlYlR1uQ==: ]] 00:26:57.998 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTgxNGU0MWZjMmE0Yjg2NTZhMWQzNmI0OTIxYzc1OTM4ZGNlOTQyYWNmNzcwNmZlYlR1uQ==: 00:26:57.998 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:57.998 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:57.998 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.258 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.258 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:26:58.258 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:58.258 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:58.258 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:58.258 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.258 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.258 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:58.258 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.258 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:58.258 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:58.258 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:58.258 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:58.258 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:58.258 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:58.258 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:58.258 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:58.258 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:58.258 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:58.258 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:58.258 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.258 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.258 request: 00:26:58.258 { 00:26:58.258 "name": "nvme0", 00:26:58.258 "trtype": "tcp", 00:26:58.258 "traddr": "10.0.0.1", 00:26:58.258 "adrfam": "ipv4", 00:26:58.258 "trsvcid": "4420", 00:26:58.258 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:58.258 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:58.258 "prchk_reftag": false, 00:26:58.258 "prchk_guard": false, 00:26:58.258 "hdgst": false, 00:26:58.258 "ddgst": false, 00:26:58.258 "allow_unrecognized_csi": false, 00:26:58.258 "method": "bdev_nvme_attach_controller", 00:26:58.258 "req_id": 1 00:26:58.258 } 00:26:58.258 Got JSON-RPC error response 00:26:58.258 response: 00:26:58.258 { 00:26:58.258 "code": -5, 00:26:58.258 "message": "Input/output 
error" 00:26:58.258 } 00:26:58.258 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:58.258 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:58.258 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:58.258 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:58.258 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:58.258 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.258 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:26:58.258 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.258 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.258 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.258 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:26:58.258 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:26:58.258 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:58.258 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:58.258 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:58.258 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.258 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.258 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:58.258 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.258 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:58.258 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:58.258 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:58.258 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:58.258 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:58.258 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:58.258 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:58.259 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:58.259 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:58.259 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:58.259 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:58.259 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.259 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.259 request: 00:26:58.259 { 00:26:58.259 "name": "nvme0", 00:26:58.259 "trtype": "tcp", 00:26:58.259 "traddr": "10.0.0.1", 00:26:58.259 "adrfam": "ipv4", 00:26:58.259 "trsvcid": "4420", 00:26:58.259 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:58.259 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:58.259 "prchk_reftag": false, 00:26:58.259 "prchk_guard": false, 00:26:58.259 "hdgst": false, 00:26:58.259 "ddgst": false, 00:26:58.259 "dhchap_key": "key2", 00:26:58.259 "allow_unrecognized_csi": false, 00:26:58.259 "method": "bdev_nvme_attach_controller", 00:26:58.259 "req_id": 1 00:26:58.259 } 00:26:58.259 Got JSON-RPC error response 00:26:58.259 response: 00:26:58.259 { 00:26:58.259 "code": -5, 00:26:58.259 "message": "Input/output error" 00:26:58.259 } 00:26:58.259 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:58.259 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:58.259 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:58.259 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:58.259 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:58.259 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.259 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:26:58.259 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.259 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.259 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.259 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:26:58.259 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:26:58.259 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:58.259 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:58.259 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:58.259 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.259 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.259 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:58.259 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.259 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:58.259 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:58.259 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:58.259 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:58.259 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:58.259 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:58.259 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:58.259 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:58.259 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:58.259 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:58.259 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:58.259 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.259 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.519 request: 00:26:58.519 { 00:26:58.519 "name": "nvme0", 00:26:58.519 "trtype": "tcp", 00:26:58.519 "traddr": "10.0.0.1", 00:26:58.519 "adrfam": "ipv4", 00:26:58.519 "trsvcid": "4420", 00:26:58.519 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:58.519 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:58.519 "prchk_reftag": false, 00:26:58.519 "prchk_guard": false, 00:26:58.519 "hdgst": false, 00:26:58.519 "ddgst": false, 00:26:58.519 "dhchap_key": "key1", 00:26:58.519 "dhchap_ctrlr_key": "ckey2", 00:26:58.519 "allow_unrecognized_csi": false, 00:26:58.519 "method": "bdev_nvme_attach_controller", 00:26:58.519 "req_id": 1 00:26:58.519 } 00:26:58.519 Got JSON-RPC error response 00:26:58.519 response: 00:26:58.519 { 00:26:58.519 "code": -5, 00:26:58.519 "message": "Input/output error" 00:26:58.519 } 00:26:58.519 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:58.519 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:58.519 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:58.519 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:58.519 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:58.519 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:26:58.519 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:58.519 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:58.519 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:58.519 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.519 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.519 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:58.519 10:05:07 
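The three request/response failures above are deliberate: with DH-CHAP required by the target, attaching with no key, with only key2, or with the mismatched pair key1/ckey2 must all be rejected, and each surfaces as JSON-RPC error -5 (Input/output error). NOT inverts the exit status so the test passes only when the RPC fails; a minimal stand-in for it, noting that the real helper in autotest_common.sh is more elaborate (the es > 128 checks in the trace distinguish signal deaths):

    NOT() { ! "$@"; }
    # No credentials at all: fabric-level authentication fails with -5.
    NOT rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
    # The other two cases add --dhchap-key key2, or --dhchap-key key1
    # --dhchap-ctrlr-key ckey2, to the same command line and must fail too.
    (( $(rpc.py bdev_nvme_get_controllers | jq length) == 0 ))  # nothing stayed attached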
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.519 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:58.519 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:58.519 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:58.519 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:58.519 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.519 10:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.519 nvme0n1 00:26:58.519 10:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.519 10:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:58.519 10:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.519 10:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:58.519 10:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:58.520 10:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:58.520 10:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTg0YTViODEwZTEzM2M4NmFjYTRmZDY2OTAxNzkyOTHMnJ+z: 00:26:58.520 10:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzkwYWI2NzJjODcxYWU3ODVmYmQ1YTAyOTZlMzdjNjFiCnEl: 00:26:58.520 10:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:58.520 10:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:58.520 10:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTg0YTViODEwZTEzM2M4NmFjYTRmZDY2OTAxNzkyOTHMnJ+z: 00:26:58.520 10:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzkwYWI2NzJjODcxYWU3ODVmYmQ1YTAyOTZlMzdjNjFiCnEl: ]] 00:26:58.520 10:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzkwYWI2NzJjODcxYWU3ODVmYmQ1YTAyOTZlMzdjNjFiCnEl: 00:26:58.520 10:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:58.520 10:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.520 10:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.780 10:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.780 10:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.780 10:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:26:58.780 10:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.780 10:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.780 10:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.780 10:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.780 10:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:58.780 10:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:58.780 10:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:58.780 10:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:58.780 10:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:58.780 10:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:58.780 10:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:58.780 10:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:58.780 10:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.780 10:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.780 request: 00:26:58.780 { 00:26:58.780 "name": "nvme0", 00:26:58.780 "dhchap_key": "key1", 00:26:58.780 "dhchap_ctrlr_key": "ckey2", 00:26:58.780 "method": "bdev_nvme_set_keys", 00:26:58.780 "req_id": 1 00:26:58.780 } 00:26:58.780 Got JSON-RPC error response 00:26:58.780 response: 00:26:58.780 { 00:26:58.780 "code": -13, 00:26:58.780 "message": "Permission denied" 00:26:58.780 } 00:26:58.780 10:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:58.780 10:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:58.780 10:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:58.780 10:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:58.780 10:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:58.780 10:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.780 10:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:58.780 10:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.780 10:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.780 10:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.780 10:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:26:58.780 10:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:26:59.718 10:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.718 10:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:59.718 10:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.718 10:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.718 10:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.977 10:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@137 -- # (( 1 != 0 )) 00:26:59.977 10:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:00.915 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.915 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:00.915 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.915 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.915 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.915 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:27:00.915 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:00.915 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.915 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:00.915 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:00.915 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:00.915 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2FmN2QwZTBkNzc4OWJiZTYyZTYyZDczNjNlZWI2OWMyODlkOGUwOWQxZjRkMWYzUNJlJQ==: 00:27:00.915 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTgxNGU0MWZjMmE0Yjg2NTZhMWQzNmI0OTIxYzc1OTM4ZGNlOTQyYWNmNzcwNmZlYlR1uQ==: 00:27:00.915 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:00.915 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:00.915 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2FmN2QwZTBkNzc4OWJiZTYyZTYyZDczNjNlZWI2OWMyODlkOGUwOWQxZjRkMWYzUNJlJQ==: 00:27:00.915 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTgxNGU0MWZjMmE0Yjg2NTZhMWQzNmI0OTIxYzc1OTM4ZGNlOTQyYWNmNzcwNmZlYlR1uQ==: ]] 00:27:00.915 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTgxNGU0MWZjMmE0Yjg2NTZhMWQzNmI0OTIxYzc1OTM4ZGNlOTQyYWNmNzcwNmZlYlR1uQ==: 00:27:00.915 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:27:00.915 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:00.915 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:00.915 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:00.915 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.915 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.915 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:00.915 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.915 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:00.915 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:00.915 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:00.916 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:00.916 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.916 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.174 nvme0n1 00:27:01.174 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.174 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:01.174 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.174 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:01.174 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:01.174 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:01.174 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTg0YTViODEwZTEzM2M4NmFjYTRmZDY2OTAxNzkyOTHMnJ+z: 00:27:01.174 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzkwYWI2NzJjODcxYWU3ODVmYmQ1YTAyOTZlMzdjNjFiCnEl: 00:27:01.174 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:01.174 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:01.174 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTg0YTViODEwZTEzM2M4NmFjYTRmZDY2OTAxNzkyOTHMnJ+z: 00:27:01.174 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzkwYWI2NzJjODcxYWU3ODVmYmQ1YTAyOTZlMzdjNjFiCnEl: ]] 00:27:01.174 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzkwYWI2NzJjODcxYWU3ODVmYmQ1YTAyOTZlMzdjNjFiCnEl: 00:27:01.175 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:01.175 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:01.175 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:01.175 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:01.175 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:01.175 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:01.175 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:01.175 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:01.175 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.175 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.175 request: 00:27:01.175 { 00:27:01.175 "name": "nvme0", 00:27:01.175 "dhchap_key": "key2", 00:27:01.175 "dhchap_ctrlr_key": "ckey1", 00:27:01.175 "method": "bdev_nvme_set_keys", 00:27:01.175 "req_id": 1 00:27:01.175 } 00:27:01.175 Got JSON-RPC 
error response 00:27:01.175 response: 00:27:01.175 { 00:27:01.175 "code": -13, 00:27:01.175 "message": "Permission denied" 00:27:01.175 } 00:27:01.175 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:01.175 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:01.175 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:01.175 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:01.175 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:01.175 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.175 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:01.175 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.175 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.175 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.175 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:27:01.175 10:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:27:02.114 10:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.114 10:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:02.114 10:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.114 10:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.114 10:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.373 10:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:27:02.374 10:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:27:02.374 10:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:27:02.374 10:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:27:02.374 10:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:02.374 10:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:27:02.374 10:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:02.374 10:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:27:02.374 10:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:02.374 10:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:02.374 rmmod nvme_tcp 00:27:02.374 rmmod nvme_fabrics 00:27:02.374 10:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:02.374 10:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:27:02.374 10:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:27:02.374 10:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 219523 ']' 00:27:02.374 10:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 219523 00:27:02.374 10:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 219523 ']' 00:27:02.374 10:05:11 
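The final phase before cleanup exercises re-keying a live controller with bdev_nvme_set_keys: rotating both sides to slot 2 is accepted, a mismatched pair is refused up front with JSON-RPC -13 (Permission denied), and once the target's keys are rotated away from what the controller holds, the 1-second ctrlr-loss-timeout from the attach lets the controller drop, which the jq-length/sleep loops above are waiting for. Condensed from the trace, with the same assumed keyring names as before:

    rpc.py bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2      # matches target: accepted
    NOT rpc.py bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2  # mismatched pair: -13
    # After the target is re-keyed underneath the controller, wait for it to drop:
    while (( $(rpc.py bdev_nvme_get_controllers | jq length) != 0 )); do sleep 1; done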
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 219523 00:27:02.374 10:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:27:02.374 10:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:02.374 10:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 219523 00:27:02.374 10:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:02.374 10:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:02.374 10:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 219523' 00:27:02.374 killing process with pid 219523 00:27:02.374 10:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 219523 00:27:02.374 10:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 219523 00:27:02.374 10:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:02.374 10:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:02.374 10:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:02.374 10:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:27:02.374 10:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:27:02.374 10:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:27:02.374 10:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:02.374 10:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:02.374 10:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:02.374 10:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:02.374 10:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:02.374 10:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:04.912 10:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:04.912 10:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:04.912 10:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:04.912 10:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:27:04.912 10:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:04.912 10:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:27:04.912 10:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:04.912 10:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:04.912 10:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir 
/sys/kernel/config/nvmet/ports/1 00:27:04.912 10:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:04.912 10:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:27:04.912 10:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:27:04.912 10:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:07.448 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:27:08.016 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:08.016 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:08.016 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:08.016 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:08.016 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:08.016 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:08.016 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:08.016 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:08.016 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:08.016 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:08.016 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:08.016 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:08.016 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:08.016 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:08.016 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:08.016 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:08.954 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:27:08.954 10:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.NaK /tmp/spdk.key-null.hxO /tmp/spdk.key-sha256.ROr /tmp/spdk.key-sha384.ztP /tmp/spdk.key-sha512.Ohu /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:27:08.954 10:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:12.338 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:27:12.338 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:12.338 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:27:12.338 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:27:12.338 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:27:12.338 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:27:12.338 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:27:12.338 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:27:12.338 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:27:12.338 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:27:12.338 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:27:12.338 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:27:12.338 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:27:12.338 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:27:12.338 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:27:12.338 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:27:12.338 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:27:12.338 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:27:12.338 00:27:12.338 real 0m56.314s 00:27:12.338 user 0m50.158s 00:27:12.338 sys 0m14.244s 00:27:12.338 10:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:27:12.338 10:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.338 ************************************ 00:27:12.338 END TEST nvmf_auth_host 00:27:12.338 ************************************ 00:27:12.338 10:05:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:27:12.338 10:05:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:12.338 10:05:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:12.339 10:05:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:12.339 10:05:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.339 ************************************ 00:27:12.339 START TEST nvmf_digest 00:27:12.339 ************************************ 00:27:12.339 10:05:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:12.598 * Looking for test storage... 00:27:12.598 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:12.598 10:05:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:12.598 10:05:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:27:12.598 10:05:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:12.598 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:12.598 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:12.598 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:12.598 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:12.598 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:27:12.598 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:27:12.598 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:27:12.598 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:27:12.598 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:27:12.598 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:27:12.598 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:27:12.598 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:12.598 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:27:12.598 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:27:12.598 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:12.598 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:12.598 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:27:12.598 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:27:12.598 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:12.598 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:27:12.598 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:27:12.598 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:27:12.598 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:27:12.598 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:12.598 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:27:12.598 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:27:12.598 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:12.598 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:12.598 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:27:12.598 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:12.599 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:12.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:12.599 --rc genhtml_branch_coverage=1 00:27:12.599 --rc genhtml_function_coverage=1 00:27:12.599 --rc genhtml_legend=1 00:27:12.599 --rc geninfo_all_blocks=1 00:27:12.599 --rc geninfo_unexecuted_blocks=1 00:27:12.599 00:27:12.599 ' 00:27:12.599 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:12.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:12.599 --rc genhtml_branch_coverage=1 00:27:12.599 --rc genhtml_function_coverage=1 00:27:12.599 --rc genhtml_legend=1 00:27:12.599 --rc geninfo_all_blocks=1 00:27:12.599 --rc geninfo_unexecuted_blocks=1 00:27:12.599 00:27:12.599 ' 00:27:12.599 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:12.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:12.599 --rc genhtml_branch_coverage=1 00:27:12.599 --rc genhtml_function_coverage=1 00:27:12.599 --rc genhtml_legend=1 00:27:12.599 --rc geninfo_all_blocks=1 00:27:12.599 --rc geninfo_unexecuted_blocks=1 00:27:12.599 00:27:12.599 ' 00:27:12.599 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:12.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:12.599 --rc genhtml_branch_coverage=1 00:27:12.599 --rc genhtml_function_coverage=1 00:27:12.599 --rc genhtml_legend=1 00:27:12.599 --rc geninfo_all_blocks=1 00:27:12.599 --rc geninfo_unexecuted_blocks=1 00:27:12.599 00:27:12.599 ' 00:27:12.599 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:12.599 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:27:12.599 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:12.599 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:12.599 
10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:12.599 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:12.599 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:12.599 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:12.599 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:12.599 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:12.599 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:12.599 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:12.599 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:27:12.599 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:27:12.599 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:12.599 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:12.599 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:12.599 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:12.599 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:12.599 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:27:12.599 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:12.599 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:12.599 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:12.599 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.599 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.599 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.599 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:27:12.599 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.599 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:27:12.599 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:12.599 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:12.599 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:12.599 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:12.599 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:12.599 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:12.599 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:12.599 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:12.599 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:12.599 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:12.599 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:27:12.599 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:27:12.599 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:27:12.599 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:27:12.599 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:27:12.599 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:12.599 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:12.599 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:12.599 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:12.599 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:12.599 10:05:22 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:12.599 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:12.599 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:12.599 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:12.599 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:12.599 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:27:12.599 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:19.172 
10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:19.172 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:19.172 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:19.172 Found net devices under 0000:af:00.0: cvl_0_0 
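The discovery loop above resolves each matched PCI function to its kernel net device through sysfs before the TCP test network is built. A minimal sketch of that lookup, assuming only the standard /sys/bus/pci layout; the BDF comes from the trace and nothing here is SPDK-specific:

  # Resolve a PCI function to the net devices its bound driver exposes.
  pci=0000:af:00.0                                  # BDF taken from the trace above
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # glob the sysfs net entries
  pci_net_devs=("${pci_net_devs[@]##*/}")           # strip paths, keep interface names
  echo "Found net devices under $pci: ${pci_net_devs[*]}"   # e.g. cvl_0_0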
00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:19.172 Found net devices under 0000:af:00.1: cvl_0_1 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:19.172 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:19.431 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:19.431 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:19.431 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:19.431 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:19.431 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:19.431 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:27:19.431 00:27:19.431 --- 10.0.0.2 ping statistics --- 00:27:19.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:19.431 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:27:19.432 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:19.432 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:19.432 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:27:19.432 00:27:19.432 --- 10.0.0.1 ping statistics --- 00:27:19.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:19.432 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:27:19.432 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:19.432 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:27:19.432 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:19.432 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:19.432 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:19.432 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:19.432 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:19.432 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:19.432 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:19.432 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:19.432 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:27:19.432 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:27:19.432 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:19.432 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:19.432 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:19.432 ************************************ 00:27:19.432 START TEST nvmf_digest_clean 00:27:19.432 ************************************ 00:27:19.432 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:27:19.432 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:27:19.432 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:27:19.432 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:27:19.432 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:27:19.432 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:27:19.432 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:19.432 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:19.432 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:19.432 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=234421 00:27:19.432 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 234421 00:27:19.432 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:19.432 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 234421 ']' 00:27:19.432 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:19.432 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:19.432 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:19.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:19.432 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:19.432 10:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:19.432 [2024-12-11 10:05:28.905971] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:27:19.432 [2024-12-11 10:05:28.906015] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:19.432 [2024-12-11 10:05:28.989862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:19.690 [2024-12-11 10:05:29.028170] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:19.690 [2024-12-11 10:05:29.028201] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:19.690 [2024-12-11 10:05:29.028209] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:19.690 [2024-12-11 10:05:29.028214] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:19.690 [2024-12-11 10:05:29.028224] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
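nvmfappstart launches the target inside the test namespace with --wait-for-rpc, so the app pauses after DPDK/EAL init until framework_start_init arrives over the RPC socket. A sketch of that handshake, with the binary path and flags as traced; the polling loop is illustrative rather than the harness's exact waitforlisten implementation:

  # Start the target paused; configure it only once the RPC socket answers.
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --wait-for-rpc &
  while ! scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1        # retry until the UNIX domain socket is listening
  done
  scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init   # leave the startup pause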
00:27:19.690 [2024-12-11 10:05:29.028748] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:27:20.258 10:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:20.258 10:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:20.258 10:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:20.258 10:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:20.258 10:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:20.258 10:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:20.258 10:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:27:20.258 10:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:27:20.258 10:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:27:20.258 10:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.258 10:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:20.517 null0 00:27:20.517 [2024-12-11 10:05:29.855534] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:20.517 [2024-12-11 10:05:29.879722] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:20.517 10:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.517 10:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:27:20.517 10:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:20.517 10:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:20.517 10:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:20.517 10:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:20.517 10:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:20.517 10:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:20.517 10:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=234494 00:27:20.517 10:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 234494 /var/tmp/bperf.sock 00:27:20.517 10:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:20.517 10:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 234494 ']' 00:27:20.517 10:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:20.517 10:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:27:20.517 10:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:20.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:20.517 10:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:20.517 10:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:20.517 [2024-12-11 10:05:29.934350] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:27:20.517 [2024-12-11 10:05:29.934392] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid234494 ] 00:27:20.517 [2024-12-11 10:05:30.013877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:20.517 [2024-12-11 10:05:30.061637] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:27:20.517 10:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:20.517 10:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:20.517 10:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:20.517 10:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:20.517 10:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:21.085 10:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:21.085 10:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:21.085 nvme0n1 00:27:21.344 10:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:21.344 10:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:21.344 Running I/O for 2 seconds... 
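Each run_bperf pass drives a separate bdevperf instance over its own socket: launch the workload engine paused, start its framework, attach an NVMe/TCP controller with --ddgst so the data digest (CRC32C) is negotiated on the initiator side, then kick the run from bdevperf.py. The calls below are the ones traced above, with the workspace prefix abbreviated:

  # Workload engine on its private socket, paused until configured.
  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  rpc="scripts/rpc.py -s /var/tmp/bperf.sock"
  $rpc framework_start_init
  $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
       -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0    # yields bdev nvme0n1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests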
00:27:23.217 25607.00 IOPS, 100.03 MiB/s [2024-12-11T09:05:32.792Z] 25757.00 IOPS, 100.61 MiB/s 00:27:23.217 Latency(us) 00:27:23.217 [2024-12-11T09:05:32.792Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:23.217 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:23.217 nvme0n1 : 2.00 25781.77 100.71 0.00 0.00 4960.08 2527.82 17351.44 00:27:23.217 [2024-12-11T09:05:32.792Z] =================================================================================================================== 00:27:23.217 [2024-12-11T09:05:32.792Z] Total : 25781.77 100.71 0.00 0.00 4960.08 2527.82 17351.44 00:27:23.217 { 00:27:23.217 "results": [ 00:27:23.217 { 00:27:23.217 "job": "nvme0n1", 00:27:23.217 "core_mask": "0x2", 00:27:23.217 "workload": "randread", 00:27:23.217 "status": "finished", 00:27:23.217 "queue_depth": 128, 00:27:23.217 "io_size": 4096, 00:27:23.217 "runtime": 2.003043, 00:27:23.217 "iops": 25781.773032331308, 00:27:23.217 "mibps": 100.71005090754417, 00:27:23.217 "io_failed": 0, 00:27:23.217 "io_timeout": 0, 00:27:23.217 "avg_latency_us": 4960.080214664697, 00:27:23.217 "min_latency_us": 2527.8171428571427, 00:27:23.217 "max_latency_us": 17351.43619047619 00:27:23.217 } 00:27:23.217 ], 00:27:23.217 "core_count": 1 00:27:23.217 } 00:27:23.217 10:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:23.217 10:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:23.217 10:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:23.217 10:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:23.217 | select(.opcode=="crc32c") 00:27:23.217 | "\(.module_name) \(.executed)"' 00:27:23.217 10:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:23.476 10:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:23.476 10:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:23.476 10:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:23.476 10:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:23.476 10:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 234494 00:27:23.477 10:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 234494 ']' 00:27:23.477 10:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 234494 00:27:23.477 10:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:23.477 10:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:23.477 10:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 234494 00:27:23.477 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:23.477 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:27:23.477 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 234494' 00:27:23.477 killing process with pid 234494 00:27:23.477 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 234494 00:27:23.477 Received shutdown signal, test time was about 2.000000 seconds 00:27:23.477 00:27:23.477 Latency(us) 00:27:23.477 [2024-12-11T09:05:33.052Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:23.477 [2024-12-11T09:05:33.052Z] =================================================================================================================== 00:27:23.477 [2024-12-11T09:05:33.052Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:23.477 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 234494 00:27:23.736 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:27:23.736 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:23.736 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:23.736 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:23.736 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:23.736 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:23.736 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:23.736 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=235161 00:27:23.736 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 235161 /var/tmp/bperf.sock 00:27:23.736 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:23.736 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 235161 ']' 00:27:23.736 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:23.736 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:23.736 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:23.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:23.736 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:23.736 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:23.736 [2024-12-11 10:05:33.239688] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
00:27:23.736 [2024-12-11 10:05:33.239736] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid235161 ] 00:27:23.736 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:23.736 Zero copy mechanism will not be used. 00:27:23.995 [2024-12-11 10:05:33.317140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:23.995 [2024-12-11 10:05:33.358050] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:27:23.995 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:23.995 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:23.995 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:23.995 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:23.995 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:24.254 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:24.254 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:24.513 nvme0n1 00:27:24.513 10:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:24.513 10:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:24.772 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:24.772 Zero copy mechanism will not be used. 00:27:24.772 Running I/O for 2 seconds... 
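After each workload the harness verifies where the CRC32C work actually executed: it reads the accel framework counters over the same bperf socket, filters for the crc32c opcode, and passes only if the expected module (software here, since scan_dsa is false throughout this run) reports a non-zero executed count. A condensed sketch of that check, using the exact RPC and jq filter from the trace:

  scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"' \
    | { read -r acc_module acc_executed
        (( acc_executed > 0 )) && [[ $acc_module == software ]]; }   # test verdict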
00:27:26.643 5324.00 IOPS, 665.50 MiB/s [2024-12-11T09:05:36.218Z] 5737.00 IOPS, 717.12 MiB/s 00:27:26.643 Latency(us) 00:27:26.643 [2024-12-11T09:05:36.218Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:26.643 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:26.643 nvme0n1 : 2.00 5737.09 717.14 0.00 0.00 2786.24 631.95 6303.94 00:27:26.643 [2024-12-11T09:05:36.218Z] =================================================================================================================== 00:27:26.643 [2024-12-11T09:05:36.218Z] Total : 5737.09 717.14 0.00 0.00 2786.24 631.95 6303.94 00:27:26.643 { 00:27:26.643 "results": [ 00:27:26.643 { 00:27:26.643 "job": "nvme0n1", 00:27:26.643 "core_mask": "0x2", 00:27:26.643 "workload": "randread", 00:27:26.643 "status": "finished", 00:27:26.643 "queue_depth": 16, 00:27:26.643 "io_size": 131072, 00:27:26.643 "runtime": 2.002759, 00:27:26.643 "iops": 5737.085690290244, 00:27:26.643 "mibps": 717.1357112862805, 00:27:26.643 "io_failed": 0, 00:27:26.643 "io_timeout": 0, 00:27:26.643 "avg_latency_us": 2786.2418275104646, 00:27:26.643 "min_latency_us": 631.9542857142857, 00:27:26.643 "max_latency_us": 6303.939047619047 00:27:26.643 } 00:27:26.643 ], 00:27:26.643 "core_count": 1 00:27:26.643 } 00:27:26.643 10:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:26.643 10:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:26.643 10:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:26.643 10:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:26.643 | select(.opcode=="crc32c") 00:27:26.643 | "\(.module_name) \(.executed)"' 00:27:26.643 10:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:26.903 10:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:26.903 10:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:26.903 10:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:26.903 10:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:26.903 10:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 235161 00:27:26.903 10:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 235161 ']' 00:27:26.903 10:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 235161 00:27:26.903 10:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:26.903 10:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:26.903 10:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 235161 00:27:26.903 10:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:26.903 10:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:27:26.903 10:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 235161' 00:27:26.903 killing process with pid 235161 00:27:26.903 10:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 235161 00:27:26.903 Received shutdown signal, test time was about 2.000000 seconds 00:27:26.903 00:27:26.903 Latency(us) 00:27:26.903 [2024-12-11T09:05:36.478Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:26.903 [2024-12-11T09:05:36.478Z] =================================================================================================================== 00:27:26.903 [2024-12-11T09:05:36.478Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:26.903 10:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 235161 00:27:27.162 10:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:27:27.162 10:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:27.162 10:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:27.162 10:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:27.162 10:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:27.162 10:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:27.162 10:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:27.162 10:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=235642 00:27:27.162 10:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 235642 /var/tmp/bperf.sock 00:27:27.162 10:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:27.162 10:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 235642 ']' 00:27:27.162 10:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:27.162 10:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:27.162 10:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:27.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:27.162 10:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:27.162 10:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:27.162 [2024-12-11 10:05:36.619432] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
00:27:27.162 [2024-12-11 10:05:36.619481] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid235642 ] 00:27:27.162 [2024-12-11 10:05:36.698495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:27.162 [2024-12-11 10:05:36.734627] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:27:27.421 10:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:27.422 10:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:27.422 10:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:27.422 10:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:27.422 10:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:27.680 10:05:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:27.680 10:05:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:27.939 nvme0n1 00:27:27.939 10:05:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:27.939 10:05:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:27.939 Running I/O for 2 seconds... 
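After each two-second run the harness pulls crc32c accounting back out of bdevperf before killing it, as in the accel_get_stats / jq exchange above and again after this run. A condensed sketch of that check, with the jq filter copied verbatim from the xtrace (the JSON shape is inferred from that filter):

    # Verify the software crc32c module actually executed operations,
    # i.e. digests were really computed during this run.
    read -r acc_module acc_executed < <(
        ./spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c")
                | "\(.module_name) \(.executed)"')
    (( acc_executed > 0 )) && [[ "$acc_module" == software ]]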
00:27:30.252 27630.00 IOPS, 107.93 MiB/s [2024-12-11T09:05:39.827Z] 27731.00 IOPS, 108.32 MiB/s 00:27:30.252 Latency(us) 00:27:30.252 [2024-12-11T09:05:39.827Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:30.252 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:30.252 nvme0n1 : 2.01 27731.49 108.33 0.00 0.00 4606.35 3432.84 11983.73 00:27:30.252 [2024-12-11T09:05:39.827Z] =================================================================================================================== 00:27:30.252 [2024-12-11T09:05:39.827Z] Total : 27731.49 108.33 0.00 0.00 4606.35 3432.84 11983.73 00:27:30.252 { 00:27:30.252 "results": [ 00:27:30.252 { 00:27:30.252 "job": "nvme0n1", 00:27:30.252 "core_mask": "0x2", 00:27:30.252 "workload": "randwrite", 00:27:30.252 "status": "finished", 00:27:30.252 "queue_depth": 128, 00:27:30.252 "io_size": 4096, 00:27:30.252 "runtime": 2.005734, 00:27:30.252 "iops": 27731.493807254603, 00:27:30.252 "mibps": 108.32614768458829, 00:27:30.252 "io_failed": 0, 00:27:30.252 "io_timeout": 0, 00:27:30.252 "avg_latency_us": 4606.351342308884, 00:27:30.252 "min_latency_us": 3432.8380952380953, 00:27:30.252 "max_latency_us": 11983.725714285714 00:27:30.252 } 00:27:30.252 ], 00:27:30.252 "core_count": 1 00:27:30.252 } 00:27:30.252 10:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:30.252 10:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:30.252 10:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:30.252 10:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:30.252 | select(.opcode=="crc32c") 00:27:30.252 | "\(.module_name) \(.executed)"' 00:27:30.252 10:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:30.252 10:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:30.252 10:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:30.252 10:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:30.252 10:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:30.252 10:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 235642 00:27:30.252 10:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 235642 ']' 00:27:30.252 10:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 235642 00:27:30.252 10:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:30.252 10:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:30.252 10:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 235642 00:27:30.252 10:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:30.252 10:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # 
'[' reactor_1 = sudo ']' 00:27:30.252 10:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 235642' 00:27:30.252 killing process with pid 235642 00:27:30.252 10:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 235642 00:27:30.252 Received shutdown signal, test time was about 2.000000 seconds 00:27:30.252 00:27:30.252 Latency(us) 00:27:30.252 [2024-12-11T09:05:39.827Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:30.252 [2024-12-11T09:05:39.827Z] =================================================================================================================== 00:27:30.252 [2024-12-11T09:05:39.827Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:30.252 10:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 235642 00:27:30.511 10:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:27:30.511 10:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:30.511 10:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:30.511 10:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:30.511 10:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:30.511 10:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:30.511 10:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:30.512 10:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=236187 00:27:30.512 10:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 236187 /var/tmp/bperf.sock 00:27:30.512 10:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:30.512 10:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 236187 ']' 00:27:30.512 10:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:30.512 10:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:30.512 10:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:30.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:30.512 10:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:30.512 10:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:30.512 [2024-12-11 10:05:40.009020] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
00:27:30.512 [2024-12-11 10:05:40.009073] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid236187 ] 00:27:30.512 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:30.512 Zero copy mechanism will not be used. 00:27:30.771 [2024-12-11 10:05:40.096116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:30.771 [2024-12-11 10:05:40.137257] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:27:30.771 10:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:30.771 10:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:30.771 10:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:30.771 10:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:30.771 10:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:31.035 10:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:31.035 10:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:31.296 nvme0n1 00:27:31.296 10:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:31.296 10:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:31.296 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:31.296 Zero copy mechanism will not be used. 00:27:31.296 Running I/O for 2 seconds... 
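Each pass launches a fresh bdevperf with the workload baked into its command line; only -w/-o/-q change across the three runs here (randread 128 KiB qd 16, randwrite 4 KiB qd 128, randwrite 128 KiB qd 16). Condensed from the invocations in this log; -z keeps the app alive under RPC control and --wait-for-rpc defers framework init until the framework_start_init call above:

    # Per-run bdevperf launch; rw/bs/qd shown with the values of this
    # third pass (randwrite, 128 KiB blocks, queue depth 16).
    rw=randwrite bs=131072 qd=16
    ./spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w "$rw" -o "$bs" -t 2 -q "$qd" -z --wait-for-rpc &
    bperfpid=$!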
00:27:33.240 6329.00 IOPS, 791.12 MiB/s [2024-12-11T09:05:43.074Z] 6781.50 IOPS, 847.69 MiB/s 00:27:33.499 Latency(us) 00:27:33.499 [2024-12-11T09:05:43.074Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:33.499 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:33.499 nvme0n1 : 2.00 6779.65 847.46 0.00 0.00 2356.12 1771.03 8238.81 00:27:33.499 [2024-12-11T09:05:43.074Z] =================================================================================================================== 00:27:33.499 [2024-12-11T09:05:43.074Z] Total : 6779.65 847.46 0.00 0.00 2356.12 1771.03 8238.81 00:27:33.499 { 00:27:33.499 "results": [ 00:27:33.499 { 00:27:33.499 "job": "nvme0n1", 00:27:33.499 "core_mask": "0x2", 00:27:33.499 "workload": "randwrite", 00:27:33.499 "status": "finished", 00:27:33.499 "queue_depth": 16, 00:27:33.499 "io_size": 131072, 00:27:33.499 "runtime": 2.002906, 00:27:33.499 "iops": 6779.649169756344, 00:27:33.499 "mibps": 847.456146219543, 00:27:33.499 "io_failed": 0, 00:27:33.499 "io_timeout": 0, 00:27:33.499 "avg_latency_us": 2356.122953440011, 00:27:33.499 "min_latency_us": 1771.032380952381, 00:27:33.499 "max_latency_us": 8238.81142857143 00:27:33.499 } 00:27:33.499 ], 00:27:33.499 "core_count": 1 00:27:33.499 } 00:27:33.499 10:05:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:33.499 10:05:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:33.499 10:05:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:33.499 10:05:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:33.499 | select(.opcode=="crc32c") 00:27:33.499 | "\(.module_name) \(.executed)"' 00:27:33.499 10:05:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:33.499 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:33.499 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:33.499 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:33.499 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:33.499 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 236187 00:27:33.499 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 236187 ']' 00:27:33.499 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 236187 00:27:33.499 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:33.499 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:33.499 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 236187 00:27:33.759 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:33.759 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:27:33.759 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 236187' 00:27:33.759 killing process with pid 236187 00:27:33.759 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 236187 00:27:33.759 Received shutdown signal, test time was about 2.000000 seconds 00:27:33.759 00:27:33.759 Latency(us) 00:27:33.759 [2024-12-11T09:05:43.334Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:33.759 [2024-12-11T09:05:43.334Z] =================================================================================================================== 00:27:33.759 [2024-12-11T09:05:43.334Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:33.759 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 236187 00:27:33.759 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 234421 00:27:33.759 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 234421 ']' 00:27:33.759 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 234421 00:27:33.759 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:33.759 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:33.759 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 234421 00:27:33.759 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:33.759 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:33.759 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 234421' 00:27:33.759 killing process with pid 234421 00:27:33.759 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 234421 00:27:33.759 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 234421 00:27:34.018 00:27:34.018 real 0m14.613s 00:27:34.018 user 0m27.518s 00:27:34.018 sys 0m4.601s 00:27:34.018 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:34.018 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:34.018 ************************************ 00:27:34.018 END TEST nvmf_digest_clean 00:27:34.018 ************************************ 00:27:34.018 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:27:34.018 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:34.018 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:34.018 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:34.018 ************************************ 00:27:34.018 START TEST nvmf_digest_error 00:27:34.018 ************************************ 00:27:34.018 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # 
run_digest_error 00:27:34.018 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:27:34.018 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:34.018 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:34.018 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:34.018 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=236830 00:27:34.018 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 236830 00:27:34.018 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:34.018 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 236830 ']' 00:27:34.018 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:34.018 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:34.018 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:34.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:34.018 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:34.018 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:34.018 [2024-12-11 10:05:43.590496] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:27:34.018 [2024-12-11 10:05:43.590555] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:34.277 [2024-12-11 10:05:43.673755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:34.278 [2024-12-11 10:05:43.712934] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:34.278 [2024-12-11 10:05:43.712967] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:34.278 [2024-12-11 10:05:43.712974] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:34.278 [2024-12-11 10:05:43.712980] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:34.278 [2024-12-11 10:05:43.712985] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
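The nvmf_tgt side is likewise started with --wait-for-rpc inside the cvl_0_0_ns_spdk namespace; the null0 bdev and the 10.0.0.2:4420 listener reported just below are configured over its RPC socket by common_target_config. A hypothetical reconstruction of that configuration using standard rpc.py commands (the null-bdev size and block size here are illustrative, not taken from the log):

    # Hypothetical target-side bring-up behind common_target_config.
    rpc() { ./spdk/scripts/rpc.py "$@"; }        # default /var/tmp/spdk.sock
    rpc framework_start_init
    rpc nvmf_create_transport -t tcp
    rpc bdev_null_create null0 1000 512          # size/block size illustrative
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420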
00:27:34.278 [2024-12-11 10:05:43.713536] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:27:34.278 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:34.278 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:34.278 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:34.278 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:34.278 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:34.278 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:34.278 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:27:34.278 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.278 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:34.278 [2024-12-11 10:05:43.777962] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:27:34.278 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.278 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:27:34.278 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:27:34.278 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.278 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:34.537 null0 00:27:34.537 [2024-12-11 10:05:43.872606] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:34.537 [2024-12-11 10:05:43.896792] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:34.537 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.537 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:27:34.537 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:34.537 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:34.537 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:34.537 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:34.537 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=236852 00:27:34.537 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 236852 /var/tmp/bperf.sock 00:27:34.537 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:27:34.537 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 236852 ']' 
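What turns this into the error test is the accel layer: crc32c was just routed to the error-injecting module on the target (accel_assign_opc above), and a few lines below the harness flips injection from disable to corrupt for 256 operations, so the C2H data digests the target computes come out wrong on the wire. The initiator, attached with --ddgst and configured with --bdev-retry-count -1, detects each bad digest on receive and retries, producing the wall of data digest error / COMMAND TRANSIENT TRANSPORT ERROR completions that fills the rest of this run. The two knobs, as issued in the xtrace (both via rpc_cmd, i.e. against the target's default RPC socket):

    # Route crc32c through the error-injecting accel module:
    ./spdk/scripts/rpc.py accel_assign_opc -o crc32c -m error
    # ...then corrupt the next 256 crc32c results:
    ./spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256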
00:27:34.537 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:34.537 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:34.537 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:34.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:34.537 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:34.537 10:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:34.537 [2024-12-11 10:05:43.951043] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:27:34.537 [2024-12-11 10:05:43.951085] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid236852 ] 00:27:34.537 [2024-12-11 10:05:44.028894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:34.537 [2024-12-11 10:05:44.068852] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:27:34.796 10:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:34.796 10:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:34.796 10:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:34.796 10:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:34.796 10:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:34.796 10:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.796 10:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:34.796 10:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.796 10:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:34.796 10:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:35.055 nvme0n1 00:27:35.055 10:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:35.055 10:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.055 10:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:35.055 
10:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.055 10:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:35.055 10:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:35.315 Running I/O for 2 seconds... 00:27:35.315 [2024-12-11 10:05:44.708971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:35.315 [2024-12-11 10:05:44.709003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:2314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.315 [2024-12-11 10:05:44.709013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.315 [2024-12-11 10:05:44.716662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:35.315 [2024-12-11 10:05:44.716686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.315 [2024-12-11 10:05:44.716695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.315 [2024-12-11 10:05:44.727510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:35.315 [2024-12-11 10:05:44.727531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.315 [2024-12-11 10:05:44.727539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.315 [2024-12-11 10:05:44.736522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:35.315 [2024-12-11 10:05:44.736543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:24930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.315 [2024-12-11 10:05:44.736551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.315 [2024-12-11 10:05:44.745595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:35.315 [2024-12-11 10:05:44.745615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:25298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.315 [2024-12-11 10:05:44.745623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.315 [2024-12-11 10:05:44.757412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:35.315 [2024-12-11 10:05:44.757433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:17947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.315 [2024-12-11 10:05:44.757442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.315 [2024-12-11 10:05:44.767016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:35.315 [2024-12-11 10:05:44.767036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.315 [2024-12-11 10:05:44.767044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.315 [2024-12-11 10:05:44.774990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:35.315 [2024-12-11 10:05:44.775011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.315 [2024-12-11 10:05:44.775020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.315 [2024-12-11 10:05:44.786508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:35.315 [2024-12-11 10:05:44.786528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.315 [2024-12-11 10:05:44.786537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.315 [2024-12-11 10:05:44.794985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:35.315 [2024-12-11 10:05:44.795005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:21402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.315 [2024-12-11 10:05:44.795013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.315 [2024-12-11 10:05:44.806790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:35.315 [2024-12-11 10:05:44.806810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:3298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.315 [2024-12-11 10:05:44.806819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.315 [2024-12-11 10:05:44.817582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:35.315 [2024-12-11 10:05:44.817602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:20752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.315 [2024-12-11 10:05:44.817611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.315 [2024-12-11 10:05:44.826356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:35.315 [2024-12-11 10:05:44.826377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:2697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.315 [2024-12-11 10:05:44.826385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.315 [2024-12-11 10:05:44.835523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:35.315 [2024-12-11 10:05:44.835542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:7688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.315 [2024-12-11 10:05:44.835550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.315 [2024-12-11 10:05:44.846243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:35.315 [2024-12-11 10:05:44.846263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.315 [2024-12-11 10:05:44.846274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.315 [2024-12-11 10:05:44.856409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:35.315 [2024-12-11 10:05:44.856429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:3484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.315 [2024-12-11 10:05:44.856437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.315 [2024-12-11 10:05:44.864855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:35.315 [2024-12-11 10:05:44.864874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:4847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.315 [2024-12-11 10:05:44.864882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.315 [2024-12-11 10:05:44.874641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:35.315 [2024-12-11 10:05:44.874660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:17331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.315 [2024-12-11 10:05:44.874668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.315 [2024-12-11 10:05:44.884706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:35.315 [2024-12-11 10:05:44.884725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.315 [2024-12-11 10:05:44.884733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.575 [2024-12-11 10:05:44.893521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:35.575 [2024-12-11 10:05:44.893540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.575 [2024-12-11 10:05:44.893548] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.575 [2024-12-11 10:05:44.904672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:35.575 [2024-12-11 10:05:44.904691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.575 [2024-12-11 10:05:44.904699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.575 [2024-12-11 10:05:44.917201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:35.575 [2024-12-11 10:05:44.917225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:12189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.575 [2024-12-11 10:05:44.917233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.575 [2024-12-11 10:05:44.928971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:35.575 [2024-12-11 10:05:44.928990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:7852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.575 [2024-12-11 10:05:44.928998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.575 [2024-12-11 10:05:44.941009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:35.575 [2024-12-11 10:05:44.941032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:11426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.575 [2024-12-11 10:05:44.941040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.575 [2024-12-11 10:05:44.952512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:35.575 [2024-12-11 10:05:44.952532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:1507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.575 [2024-12-11 10:05:44.952540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.575 [2024-12-11 10:05:44.963904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:35.575 [2024-12-11 10:05:44.963924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.575 [2024-12-11 10:05:44.963933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.575 [2024-12-11 10:05:44.972569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:35.575 [2024-12-11 10:05:44.972589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:17104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:35.575 [2024-12-11 10:05:44.972597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.575 [2024-12-11 10:05:44.982510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:35.575 [2024-12-11 10:05:44.982531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:19963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.575 [2024-12-11 10:05:44.982539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.575 [2024-12-11 10:05:44.993932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:35.575 [2024-12-11 10:05:44.993953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:7176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.575 [2024-12-11 10:05:44.993962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.575 [2024-12-11 10:05:45.002432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:35.575 [2024-12-11 10:05:45.002452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:2453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.575 [2024-12-11 10:05:45.002460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.575 [2024-12-11 10:05:45.013865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:35.575 [2024-12-11 10:05:45.013885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.575 [2024-12-11 10:05:45.013893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.575 [2024-12-11 10:05:45.023254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:35.575 [2024-12-11 10:05:45.023273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.575 [2024-12-11 10:05:45.023281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.575 [2024-12-11 10:05:45.031095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:35.575 [2024-12-11 10:05:45.031115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:22952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.575 [2024-12-11 10:05:45.031123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.575 [2024-12-11 10:05:45.041202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:35.575 [2024-12-11 10:05:45.041227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:2836 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.575 [2024-12-11 10:05:45.041235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.575 [2024-12-11 10:05:45.050969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:35.575 [2024-12-11 10:05:45.050989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:10633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.576 [2024-12-11 10:05:45.050996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.576 [2024-12-11 10:05:45.059611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:35.576 [2024-12-11 10:05:45.059630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:3633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.576 [2024-12-11 10:05:45.059638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.576 [2024-12-11 10:05:45.068951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:35.576 [2024-12-11 10:05:45.068971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:19079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.576 [2024-12-11 10:05:45.068978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.576 [2024-12-11 10:05:45.078377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:35.576 [2024-12-11 10:05:45.078396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.576 [2024-12-11 10:05:45.078404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.576 [2024-12-11 10:05:45.086542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:35.576 [2024-12-11 10:05:45.086561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:18212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.576 [2024-12-11 10:05:45.086568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.576 [2024-12-11 10:05:45.096591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:35.576 [2024-12-11 10:05:45.096610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:13324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.576 [2024-12-11 10:05:45.096617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.576 [2024-12-11 10:05:45.106557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:35.576 [2024-12-11 10:05:45.106576] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:35.576 [2024-12-11 10:05:45.106589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:35.576 [2024-12-11 10:05:45.116923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:35.576 [2024-12-11 10:05:45.116943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:18900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:35.576 [2024-12-11 10:05:45.116951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:35.576 [2024-12-11 10:05:45.125494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:35.576 [2024-12-11 10:05:45.125513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:1854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:35.576 [2024-12-11 10:05:45.125521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:35.576 [2024-12-11 10:05:45.133776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:35.576 [2024-12-11 10:05:45.133795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:35.576 [2024-12-11 10:05:45.133803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:35.576 [2024-12-11 10:05:45.142451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:35.576 [2024-12-11 10:05:45.142470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:35.576 [2024-12-11 10:05:45.142478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:35.834 [2024-12-11 10:05:45.152354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:35.834 [2024-12-11 10:05:45.152373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:17572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:35.834 [2024-12-11 10:05:45.152381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:35.834 [2024-12-11 10:05:45.162837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:35.834 [2024-12-11 10:05:45.162857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:24273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:35.834 [2024-12-11 10:05:45.162865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:35.834 [2024-12-11 10:05:45.171614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:35.834 [2024-12-11 10:05:45.171633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:35.834 [2024-12-11 10:05:45.171641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:35.834 [2024-12-11 10:05:45.183394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:35.834 [2024-12-11 10:05:45.183414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:19474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:35.834 [2024-12-11 10:05:45.183422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:35.834 [2024-12-11 10:05:45.195149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:35.834 [2024-12-11 10:05:45.195178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:3566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:35.834 [2024-12-11 10:05:45.195186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:35.834 [2024-12-11 10:05:45.206401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:35.834 [2024-12-11 10:05:45.206421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:35.834 [2024-12-11 10:05:45.206430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:35.834 [2024-12-11 10:05:45.217274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:35.835 [2024-12-11 10:05:45.217293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:10760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:35.835 [2024-12-11 10:05:45.217300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:35.835 [2024-12-11 10:05:45.225262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:35.835 [2024-12-11 10:05:45.225281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:35.835 [2024-12-11 10:05:45.225289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:35.835 [2024-12-11 10:05:45.234941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:35.835 [2024-12-11 10:05:45.234960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:35.835 [2024-12-11 10:05:45.234968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:35.835 [2024-12-11 10:05:45.243493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:35.835 [2024-12-11 10:05:45.243512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:11076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:35.835 [2024-12-11 10:05:45.243519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:35.835 [2024-12-11 10:05:45.252941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:35.835 [2024-12-11 10:05:45.252960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:35.835 [2024-12-11 10:05:45.252968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:35.835 [2024-12-11 10:05:45.262306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:35.835 [2024-12-11 10:05:45.262326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:25201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:35.835 [2024-12-11 10:05:45.262334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:35.835 [2024-12-11 10:05:45.272509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:35.835 [2024-12-11 10:05:45.272528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:1182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:35.835 [2024-12-11 10:05:45.272536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:35.835 [2024-12-11 10:05:45.280599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:35.835 [2024-12-11 10:05:45.280619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:18185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:35.835 [2024-12-11 10:05:45.280626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:35.835 [2024-12-11 10:05:45.291113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:35.835 [2024-12-11 10:05:45.291132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:18081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:35.835 [2024-12-11 10:05:45.291140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:35.835 [2024-12-11 10:05:45.299735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:35.835 [2024-12-11 10:05:45.299754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:12372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:35.835 [2024-12-11 10:05:45.299761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:35.835 [2024-12-11 10:05:45.309601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:35.835 [2024-12-11 10:05:45.309621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:35.835 [2024-12-11 10:05:45.309629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:35.835 [2024-12-11 10:05:45.317772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:35.835 [2024-12-11 10:05:45.317791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:20504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:35.835 [2024-12-11 10:05:45.317799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:35.835 [2024-12-11 10:05:45.329000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:35.835 [2024-12-11 10:05:45.329019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:18649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:35.835 [2024-12-11 10:05:45.329027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:35.835 [2024-12-11 10:05:45.342216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:35.835 [2024-12-11 10:05:45.342241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:35.835 [2024-12-11 10:05:45.342249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:35.835 [2024-12-11 10:05:45.352291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:35.835 [2024-12-11 10:05:45.352311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:24096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:35.835 [2024-12-11 10:05:45.352318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:35.835 [2024-12-11 10:05:45.360780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:35.835 [2024-12-11 10:05:45.360803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:5209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:35.835 [2024-12-11 10:05:45.360811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:35.835 [2024-12-11 10:05:45.372859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:35.835 [2024-12-11 10:05:45.372879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:3412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:35.835 [2024-12-11 10:05:45.372888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:35.835 [2024-12-11 10:05:45.381347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:35.835 [2024-12-11 10:05:45.381367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:15846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:35.835 [2024-12-11 10:05:45.381374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:35.835 [2024-12-11 10:05:45.392672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:35.835 [2024-12-11 10:05:45.392691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:24326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:35.835 [2024-12-11 10:05:45.392699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:35.835 [2024-12-11 10:05:45.405334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:35.835 [2024-12-11 10:05:45.405355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:3050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:35.835 [2024-12-11 10:05:45.405363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.094 [2024-12-11 10:05:45.417851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.094 [2024-12-11 10:05:45.417871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:17127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.094 [2024-12-11 10:05:45.417878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.094 [2024-12-11 10:05:45.428603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.094 [2024-12-11 10:05:45.428622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.094 [2024-12-11 10:05:45.428630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.094 [2024-12-11 10:05:45.440947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.094 [2024-12-11 10:05:45.440967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:16817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.094 [2024-12-11 10:05:45.440974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.094 [2024-12-11 10:05:45.449527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.094 [2024-12-11 10:05:45.449547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:6933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.094 [2024-12-11 10:05:45.449554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.094 [2024-12-11 10:05:45.461731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.094 [2024-12-11 10:05:45.461752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.094 [2024-12-11 10:05:45.461759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.094 [2024-12-11 10:05:45.472146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.094 [2024-12-11 10:05:45.472167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:5347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.094 [2024-12-11 10:05:45.472175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.094 [2024-12-11 10:05:45.480619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.094 [2024-12-11 10:05:45.480639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.094 [2024-12-11 10:05:45.480646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.094 [2024-12-11 10:05:45.491780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.094 [2024-12-11 10:05:45.491800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.094 [2024-12-11 10:05:45.491808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.094 [2024-12-11 10:05:45.504072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.094 [2024-12-11 10:05:45.504092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.094 [2024-12-11 10:05:45.504100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.094 [2024-12-11 10:05:45.512175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.094 [2024-12-11 10:05:45.512194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:10112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.094 [2024-12-11 10:05:45.512201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.094 [2024-12-11 10:05:45.523116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.094 [2024-12-11 10:05:45.523136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:15757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.094 [2024-12-11 10:05:45.523145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.094 [2024-12-11 10:05:45.533026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.094 [2024-12-11 10:05:45.533047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:21080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.094 [2024-12-11 10:05:45.533055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.094 [2024-12-11 10:05:45.541542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.094 [2024-12-11 10:05:45.541561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:23539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.094 [2024-12-11 10:05:45.541572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.094 [2024-12-11 10:05:45.550716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.094 [2024-12-11 10:05:45.550736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:13095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.094 [2024-12-11 10:05:45.550744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.094 [2024-12-11 10:05:45.562338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.094 [2024-12-11 10:05:45.562359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:1546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.094 [2024-12-11 10:05:45.562367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.094 [2024-12-11 10:05:45.571167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.094 [2024-12-11 10:05:45.571188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:12791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.094 [2024-12-11 10:05:45.571196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.094 [2024-12-11 10:05:45.582591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.094 [2024-12-11 10:05:45.582613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:7704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.094 [2024-12-11 10:05:45.582621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.094 [2024-12-11 10:05:45.593933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.094 [2024-12-11 10:05:45.593954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:17161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.094 [2024-12-11 10:05:45.593962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.094 [2024-12-11 10:05:45.602225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.095 [2024-12-11 10:05:45.602247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:3821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.095 [2024-12-11 10:05:45.602254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.095 [2024-12-11 10:05:45.612211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.095 [2024-12-11 10:05:45.612236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.095 [2024-12-11 10:05:45.612244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.095 [2024-12-11 10:05:45.620384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.095 [2024-12-11 10:05:45.620403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.095 [2024-12-11 10:05:45.620410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.095 [2024-12-11 10:05:45.631269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.095 [2024-12-11 10:05:45.631294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:7564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.095 [2024-12-11 10:05:45.631302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.095 [2024-12-11 10:05:45.641129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.095 [2024-12-11 10:05:45.641148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:22933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.095 [2024-12-11 10:05:45.641155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.095 [2024-12-11 10:05:45.649687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.095 [2024-12-11 10:05:45.649707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:20586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.095 [2024-12-11 10:05:45.649715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.095 [2024-12-11 10:05:45.658941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.095 [2024-12-11 10:05:45.658961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:19250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.095 [2024-12-11 10:05:45.658969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.354 [2024-12-11 10:05:45.668657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.354 [2024-12-11 10:05:45.668677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:13891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.354 [2024-12-11 10:05:45.668685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.354 [2024-12-11 10:05:45.678081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.354 [2024-12-11 10:05:45.678101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.354 [2024-12-11 10:05:45.678109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.354 [2024-12-11 10:05:45.686101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.354 [2024-12-11 10:05:45.686121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:19941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.354 [2024-12-11 10:05:45.686129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.354 25373.00 IOPS, 99.11 MiB/s [2024-12-11T09:05:45.929Z] [2024-12-11 10:05:45.698148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.354 [2024-12-11 10:05:45.698169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.354 [2024-12-11 10:05:45.698177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.354 [2024-12-11 10:05:45.706443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.354 [2024-12-11 10:05:45.706463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.354 [2024-12-11 10:05:45.706475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.354 [2024-12-11 10:05:45.716004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.354 [2024-12-11 10:05:45.716023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.354 [2024-12-11 10:05:45.716030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.354 [2024-12-11 10:05:45.724863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.354 [2024-12-11 10:05:45.724882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:16971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.354 [2024-12-11 10:05:45.724890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.354 [2024-12-11 10:05:45.734934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.354 [2024-12-11 10:05:45.734955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.354 [2024-12-11 10:05:45.734963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.354 [2024-12-11 10:05:45.743019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.354 [2024-12-11 10:05:45.743040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.354 [2024-12-11 10:05:45.743048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.354 [2024-12-11 10:05:45.752775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.354 [2024-12-11 10:05:45.752795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:5682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.354 [2024-12-11 10:05:45.752803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.354 [2024-12-11 10:05:45.762633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.354 [2024-12-11 10:05:45.762654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:16231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.354 [2024-12-11 10:05:45.762661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.354 [2024-12-11 10:05:45.771113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.354 [2024-12-11 10:05:45.771134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:11965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.354 [2024-12-11 10:05:45.771142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.354 [2024-12-11 10:05:45.781191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.354 [2024-12-11 10:05:45.781212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:22832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.354 [2024-12-11 10:05:45.781228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.354 [2024-12-11 10:05:45.791019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.354 [2024-12-11 10:05:45.791043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:4455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.354 [2024-12-11 10:05:45.791051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.354 [2024-12-11 10:05:45.799238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.355 [2024-12-11 10:05:45.799258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.355 [2024-12-11 10:05:45.799266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.355 [2024-12-11 10:05:45.808803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.355 [2024-12-11 10:05:45.808824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:16602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.355 [2024-12-11 10:05:45.808832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.355 [2024-12-11 10:05:45.817891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.355 [2024-12-11 10:05:45.817911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.355 [2024-12-11 10:05:45.817919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.355 [2024-12-11 10:05:45.828479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.355 [2024-12-11 10:05:45.828500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.355 [2024-12-11 10:05:45.828508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.355 [2024-12-11 10:05:45.836340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.355 [2024-12-11 10:05:45.836360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.355 [2024-12-11 10:05:45.836368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.355 [2024-12-11 10:05:45.845146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.355 [2024-12-11 10:05:45.845166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.355 [2024-12-11 10:05:45.845174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.355 [2024-12-11 10:05:45.855167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.355 [2024-12-11 10:05:45.855187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:10022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.355 [2024-12-11 10:05:45.855195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.355 [2024-12-11 10:05:45.865060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.355 [2024-12-11 10:05:45.865079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:22054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.355 [2024-12-11 10:05:45.865087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.355 [2024-12-11 10:05:45.873244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.355 [2024-12-11 10:05:45.873264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.355 [2024-12-11 10:05:45.873271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.355 [2024-12-11 10:05:45.884504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.355 [2024-12-11 10:05:45.884524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.355 [2024-12-11 10:05:45.884531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.355 [2024-12-11 10:05:45.893008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.355 [2024-12-11 10:05:45.893028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.355 [2024-12-11 10:05:45.893037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.355 [2024-12-11 10:05:45.905702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.355 [2024-12-11 10:05:45.905722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.355 [2024-12-11 10:05:45.905730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.355 [2024-12-11 10:05:45.917147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.355 [2024-12-11 10:05:45.917168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:8558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.355 [2024-12-11 10:05:45.917176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.355 [2024-12-11 10:05:45.926016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.355 [2024-12-11 10:05:45.926037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:19365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.355 [2024-12-11 10:05:45.926045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.614 [2024-12-11 10:05:45.937171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.614 [2024-12-11 10:05:45.937192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:1711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.614 [2024-12-11 10:05:45.937200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.614 [2024-12-11 10:05:45.945842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.614 [2024-12-11 10:05:45.945862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:7705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.614 [2024-12-11 10:05:45.945870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.614 [2024-12-11 10:05:45.954272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.614 [2024-12-11 10:05:45.954291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.614 [2024-12-11 10:05:45.954302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.614 [2024-12-11 10:05:45.963832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.614 [2024-12-11 10:05:45.963852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:18430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.614 [2024-12-11 10:05:45.963860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.614 [2024-12-11 10:05:45.972942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.614 [2024-12-11 10:05:45.972962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.614 [2024-12-11 10:05:45.972969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.615 [2024-12-11 10:05:45.984014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.615 [2024-12-11 10:05:45.984033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.615 [2024-12-11 10:05:45.984041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.615 [2024-12-11 10:05:45.993059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.615 [2024-12-11 10:05:45.993079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:17708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.615 [2024-12-11 10:05:45.993087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.615 [2024-12-11 10:05:46.005265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.615 [2024-12-11 10:05:46.005285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.615 [2024-12-11 10:05:46.005292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.615 [2024-12-11 10:05:46.013368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.615 [2024-12-11 10:05:46.013387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:4007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.615 [2024-12-11 10:05:46.013395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.615 [2024-12-11 10:05:46.025044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.615 [2024-12-11 10:05:46.025064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.615 [2024-12-11 10:05:46.025072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.615 [2024-12-11 10:05:46.034358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.615 [2024-12-11 10:05:46.034377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:1350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.615 [2024-12-11 10:05:46.034384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.615 [2024-12-11 10:05:46.043137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.615 [2024-12-11 10:05:46.043159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.615 [2024-12-11 10:05:46.043167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.615 [2024-12-11 10:05:46.052190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.615 [2024-12-11 10:05:46.052210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:24146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.615 [2024-12-11 10:05:46.052225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.615 [2024-12-11 10:05:46.061248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.615 [2024-12-11 10:05:46.061268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.615 [2024-12-11 10:05:46.061276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.615 [2024-12-11 10:05:46.071021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.615 [2024-12-11 10:05:46.071040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.615 [2024-12-11 10:05:46.071049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.615 [2024-12-11 10:05:46.079555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.615 [2024-12-11 10:05:46.079575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:8050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.615 [2024-12-11 10:05:46.079583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.615 [2024-12-11 10:05:46.089645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.615 [2024-12-11 10:05:46.089664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:17641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.615 [2024-12-11 10:05:46.089672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.615 [2024-12-11 10:05:46.098084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.615 [2024-12-11 10:05:46.098103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:22924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.615 [2024-12-11 10:05:46.098111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.615 [2024-12-11 10:05:46.108137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.615 [2024-12-11 10:05:46.108156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.615 [2024-12-11 10:05:46.108164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.615 [2024-12-11 10:05:46.118042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.615 [2024-12-11 10:05:46.118061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:9343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.615 [2024-12-11 10:05:46.118069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.615 [2024-12-11 10:05:46.126666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.615 [2024-12-11 10:05:46.126687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.615 [2024-12-11 10:05:46.126695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.615 [2024-12-11 10:05:46.139082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.615 [2024-12-11 10:05:46.139103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.615 [2024-12-11 10:05:46.139111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.615 [2024-12-11 10:05:46.151168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.615 [2024-12-11 10:05:46.151188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.615 [2024-12-11 10:05:46.151196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.615 [2024-12-11 10:05:46.163250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.615 [2024-12-11 10:05:46.163269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.615 [2024-12-11 10:05:46.163278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.615 [2024-12-11 10:05:46.171612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.615 [2024-12-11 10:05:46.171631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.615 [2024-12-11 10:05:46.171639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.615 [2024-12-11 10:05:46.183438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.615 [2024-12-11 10:05:46.183457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:11652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.615 [2024-12-11 10:05:46.183465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.875 [2024-12-11 10:05:46.195874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.875 [2024-12-11 10:05:46.195892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:15791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.875 [2024-12-11 10:05:46.195900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.875 [2024-12-11 10:05:46.204352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.875 [2024-12-11 10:05:46.204371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:25176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.875 [2024-12-11 10:05:46.204378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.875 [2024-12-11 10:05:46.215616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.875 [2024-12-11 10:05:46.215639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:20571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.875 [2024-12-11 10:05:46.215647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.875 [2024-12-11 10:05:46.225671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.875 [2024-12-11 10:05:46.225690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:15091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.875 [2024-12-11 10:05:46.225698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.875 [2024-12-11 10:05:46.233709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.875 [2024-12-11 10:05:46.233729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.875 [2024-12-11 10:05:46.233736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.875 [2024-12-11 10:05:46.246230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.875 [2024-12-11 10:05:46.246250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:1540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.875 [2024-12-11 10:05:46.246258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.875 [2024-12-11 10:05:46.254709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.875 [2024-12-11 10:05:46.254728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.875 [2024-12-11 10:05:46.254736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.875 [2024-12-11 10:05:46.266699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.875 [2024-12-11 10:05:46.266718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:15777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.875 [2024-12-11 10:05:46.266726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.875 [2024-12-11 10:05:46.279064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.875 [2024-12-11 10:05:46.279083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:4255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.875 [2024-12-11 10:05:46.279091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.875 [2024-12-11 10:05:46.288037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.875 [2024-12-11 10:05:46.288057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:23101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.875 [2024-12-11 10:05:46.288064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.875 [2024-12-11 10:05:46.296988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.875 [2024-12-11 10:05:46.297007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:11786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.875 [2024-12-11 10:05:46.297015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.875 [2024-12-11 10:05:46.306350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.875 [2024-12-11 10:05:46.306369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:22681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.875 [2024-12-11 10:05:46.306377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.875 [2024-12-11 10:05:46.315708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.875 [2024-12-11 10:05:46.315727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.875 [2024-12-11 10:05:46.315734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.875 [2024-12-11 10:05:46.324188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.875 [2024-12-11 10:05:46.324207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:13996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.875 [2024-12-11 10:05:46.324214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.875 [2024-12-11 10:05:46.333667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.875 [2024-12-11 10:05:46.333687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:22536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.875 [2024-12-11 10:05:46.333695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.875 [2024-12-11 10:05:46.343174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.875 [2024-12-11 10:05:46.343193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.875 [2024-12-11 10:05:46.343201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.875 [2024-12-11 10:05:46.352769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.875 [2024-12-11 10:05:46.352788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:12254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.875 [2024-12-11 10:05:46.352795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.875 [2024-12-11 10:05:46.361953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.875 [2024-12-11 10:05:46.361973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:23209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.875 [2024-12-11 10:05:46.361981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.875 [2024-12-11 10:05:46.371083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.875 [2024-12-11 10:05:46.371102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:20391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.875 [2024-12-11 10:05:46.371110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.875 [2024-12-11 10:05:46.380165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.875 [2024-12-11 10:05:46.380185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:10524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.875 [2024-12-11 10:05:46.380198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.875 [2024-12-11 10:05:46.389302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.875 [2024-12-11 10:05:46.389322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.875 [2024-12-11 10:05:46.389329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.875 [2024-12-11 10:05:46.398462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.875 [2024-12-11 10:05:46.398481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:13114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.875 [2024-12-11 10:05:46.398489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.875 [2024-12-11 10:05:46.407473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.875 [2024-12-11 10:05:46.407492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:14093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.875 [2024-12-11 10:05:46.407500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.875 [2024-12-11 10:05:46.416554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.875 [2024-12-11 10:05:46.416574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:3125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.875 [2024-12-11 10:05:46.416581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.875 [2024-12-11 10:05:46.425772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.875 [2024-12-11 10:05:46.425792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.875 [2024-12-11 10:05:46.425800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.875 [2024-12-11 10:05:46.435270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.875 [2024-12-11 10:05:46.435290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:2327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.875 [2024-12-11 10:05:46.435297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:36.875 [2024-12-11 10:05:46.443432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:36.876 [2024-12-11 10:05:46.443451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:17803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.876 [2024-12-11 10:05:46.443458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:37.135 [2024-12-11 10:05:46.452995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:37.135 [2024-12-11 10:05:46.453015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:17080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:37.135 [2024-12-11 10:05:46.453023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:37.135 [2024-12-11 10:05:46.463232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:37.135 [2024-12-11 10:05:46.463255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:15780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:37.135 [2024-12-11 10:05:46.463263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:37.135 [2024-12-11 10:05:46.473164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:37.135 [2024-12-11 10:05:46.473183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:37.135 [2024-12-11 10:05:46.473191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:37.135 [2024-12-11 10:05:46.481276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:37.135 [2024-12-11 10:05:46.481294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:15784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:37.135 [2024-12-11 10:05:46.481302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:37.135 [2024-12-11 10:05:46.491269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:37.135 [2024-12-11 10:05:46.491289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:22309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:37.135 [2024-12-11 10:05:46.491297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:37.135 [2024-12-11 10:05:46.502342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:37.135 [2024-12-11 10:05:46.502362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:12467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:37.135 [2024-12-11 10:05:46.502370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:37.135 [2024-12-11 10:05:46.512619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:37.135 [2024-12-11 10:05:46.512639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:24069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:37.135 [2024-12-11 10:05:46.512646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:37.135 [2024-12-11 10:05:46.520825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890)
00:27:37.135 [2024-12-11 10:05:46.520844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:37.135 [2024-12-11 10:05:46.520851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0
m:0 dnr:0 00:27:37.135 [2024-12-11 10:05:46.530229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:37.135 [2024-12-11 10:05:46.530249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:20140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.136 [2024-12-11 10:05:46.530256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.136 [2024-12-11 10:05:46.541207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:37.136 [2024-12-11 10:05:46.541233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:22463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.136 [2024-12-11 10:05:46.541241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.136 [2024-12-11 10:05:46.551610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:37.136 [2024-12-11 10:05:46.551629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:13045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.136 [2024-12-11 10:05:46.551637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.136 [2024-12-11 10:05:46.559788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:37.136 [2024-12-11 10:05:46.559807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:11446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.136 [2024-12-11 10:05:46.559815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.136 [2024-12-11 10:05:46.571607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:37.136 [2024-12-11 10:05:46.571627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:6955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.136 [2024-12-11 10:05:46.571635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.136 [2024-12-11 10:05:46.583110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:37.136 [2024-12-11 10:05:46.583130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:4738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.136 [2024-12-11 10:05:46.583138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.136 [2024-12-11 10:05:46.592461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:37.136 [2024-12-11 10:05:46.592482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:17646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.136 [2024-12-11 10:05:46.592490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.136 [2024-12-11 10:05:46.600922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:37.136 [2024-12-11 10:05:46.600942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:1695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.136 [2024-12-11 10:05:46.600950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.136 [2024-12-11 10:05:46.611695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:37.136 [2024-12-11 10:05:46.611714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:23857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.136 [2024-12-11 10:05:46.611722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.136 [2024-12-11 10:05:46.621975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:37.136 [2024-12-11 10:05:46.621994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:4330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.136 [2024-12-11 10:05:46.622002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.136 [2024-12-11 10:05:46.630532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:37.136 [2024-12-11 10:05:46.630551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.136 [2024-12-11 10:05:46.630563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.136 [2024-12-11 10:05:46.640717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:37.136 [2024-12-11 10:05:46.640737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.136 [2024-12-11 10:05:46.640746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.136 [2024-12-11 10:05:46.650575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:37.136 [2024-12-11 10:05:46.650594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.136 [2024-12-11 10:05:46.650602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.136 [2024-12-11 10:05:46.659708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:37.136 [2024-12-11 10:05:46.659727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.136 [2024-12-11 10:05:46.659735] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.136 [2024-12-11 10:05:46.667807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:37.136 [2024-12-11 10:05:46.667826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:15439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.136 [2024-12-11 10:05:46.667834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.136 [2024-12-11 10:05:46.677351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:37.136 [2024-12-11 10:05:46.677370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.136 [2024-12-11 10:05:46.677377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.136 [2024-12-11 10:05:46.687206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:37.136 [2024-12-11 10:05:46.687233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:6533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.136 [2024-12-11 10:05:46.687241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.136 25839.00 IOPS, 100.93 MiB/s [2024-12-11T09:05:46.711Z] [2024-12-11 10:05:46.696887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1740890) 00:27:37.136 [2024-12-11 10:05:46.696907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:10560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.136 [2024-12-11 10:05:46.696914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.395 00:27:37.395 Latency(us) 00:27:37.395 [2024-12-11T09:05:46.970Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:37.395 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:37.395 nvme0n1 : 2.05 25319.25 98.90 0.00 0.00 4949.53 2652.65 47934.90 00:27:37.395 [2024-12-11T09:05:46.970Z] =================================================================================================================== 00:27:37.395 [2024-12-11T09:05:46.970Z] Total : 25319.25 98.90 0.00 0.00 4949.53 2652.65 47934.90 00:27:37.395 { 00:27:37.395 "results": [ 00:27:37.395 { 00:27:37.395 "job": "nvme0n1", 00:27:37.395 "core_mask": "0x2", 00:27:37.395 "workload": "randread", 00:27:37.395 "status": "finished", 00:27:37.395 "queue_depth": 128, 00:27:37.395 "io_size": 4096, 00:27:37.395 "runtime": 2.045282, 00:27:37.395 "iops": 25319.24693025216, 00:27:37.395 "mibps": 98.9033083212975, 00:27:37.395 "io_failed": 0, 00:27:37.395 "io_timeout": 0, 00:27:37.395 "avg_latency_us": 4949.526723697338, 00:27:37.395 "min_latency_us": 2652.647619047619, 00:27:37.395 "max_latency_us": 47934.90285714286 00:27:37.395 } 00:27:37.395 ], 00:27:37.395 "core_count": 1 00:27:37.395 } 00:27:37.395 10:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
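A quick sanity check on the result block above (a hand calculation, not log output): the "mibps" field is just the measured IOPS times the 4096-byte IO size from the job line, converted to MiB:

  $ echo 'scale=4; 25319.25 * 4096 / 1048576' | bc
  98.9033

which matches "mibps": 98.9033083212975, and the table's 2.05 runtime column is the rounded "runtime": 2.045282 seconds over which those IOPS were sustained.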
00:27:37.395 10:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:37.395 10:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:37.395 10:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
00:27:37.395 10:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:37.395 10:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 203 > 0 ))
00:27:37.395 10:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 236852
00:27:37.395 10:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 236852 ']'
00:27:37.395 10:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 236852
00:27:37.395 10:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:27:37.654 10:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:37.654 10:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 236852
00:27:37.654 10:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:27:37.654 10:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:27:37.654 10:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 236852'
00:27:37.654 killing process with pid 236852
00:27:37.654 10:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 236852
00:27:37.654 Received shutdown signal, test time was about 2.000000 seconds
00:27:37.654
00:27:37.654 Latency(us)
[2024-12-11T09:05:47.229Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-12-11T09:05:47.229Z] ===================================================================================================================
[2024-12-11T09:05:47.229Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:37.654 10:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 236852
00:27:37.654 10:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:27:37.654 10:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:27:37.654 10:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:27:37.654 10:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:27:37.654 10:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:27:37.654 10:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=237429
00:27:37.654 10:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 237429 /var/tmp/bperf.sock
00:27:37.654 10:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:27:37.654 10:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 237429 ']'
00:27:37.654 10:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:37.654 10:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:37.654 10:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:27:37.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:27:37.654 10:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:37.654 10:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:37.654 [2024-12-11 10:05:47.214893] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization...
00:27:37.654 [2024-12-11 10:05:47.214945] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid237429 ]
00:27:37.654 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:37.654 Zero copy mechanism will not be used.
00:27:37.914 [2024-12-11 10:05:47.298374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:37.914 [2024-12-11 10:05:47.338553] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:27:37.914 10:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:37.914 10:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:27:37.914 10:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:37.914 10:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:38.173 10:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:27:38.173 10:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:38.173 10:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:38.173 10:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:38.173 10:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:38.173 10:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:38.431 nvme0n1
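For orientation, the trace above and below boils down to: start bdevperf on a private RPC socket, enable per-NVMe error statistics, attach the target with TCP data digest on (--ddgst), arm crc32c corruption in the accel layer, run the workload, and require the transient-transport-error counter to be non-zero. A condensed standalone sketch of that flow follows; the socket path, target address, and RPC arguments are copied from the trace, but this is an illustrative reconstruction (e.g. the sleep stands in for digest.sh's waitforlisten helper), not the digest.sh source:

  #!/usr/bin/env bash
  # Sketch of the traced digest-error check; run from an SPDK checkout (assumption).
  set -euo pipefail
  sock=/var/tmp/bperf.sock
  rpc=./scripts/rpc.py

  # Start bdevperf on its own RPC socket; -z makes it wait for perform_tests.
  ./build/examples/bdevperf -m 2 -r "$sock" -w randread -o 131072 -t 2 -q 16 -z &
  bperfpid=$!
  sleep 1   # digest.sh waits for the socket via waitforlisten; a sleep keeps the sketch short

  # Same RPCs as in the trace: error stats on, unlimited bdev retries,
  # attach with data digest, then arm crc32c error injection in the accel layer.
  $rpc -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $rpc -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  $rpc -s "$sock" accel_error_inject_error -o crc32c -t corrupt -i 32

  # Run the 2-second workload, then read back the transient error counter.
  ./examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests
  errcount=$($rpc -s "$sock" bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errcount > 0 ))   # under set -e the script (like the test) fails if no digest errors were counted

  kill "$bperfpid"

This mirrors the "(( 203 > 0 ))" check earlier in the trace: the first run counted 203 COMMAND TRANSIENT TRANSPORT ERROR completions.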
00:27:38.691 10:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:27:38.691 10:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:38.691 10:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:38.691 10:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:38.691 10:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:27:38.691 10:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:38.691 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:38.691 Zero copy mechanism will not be used.
00:27:38.691 Running I/O for 2 seconds...
00:27:38.691 [2024-12-11 10:05:48.125960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:38.691 [2024-12-11 10:05:48.125993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:38.691 [2024-12-11 10:05:48.126003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000d p:0 m:0 dnr:0
[... 10:05:48.131199 through 10:05:48.493224: repeated nvme_tcp.c:1365 data digest errors on tqpair=(0x15428e0), each followed by a READ command print (qid:1, len:32, varying cid/lba) and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion ...]
00:27:38.954 [2024-12-11 10:05:48.498311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:38.954 [2024-12-11 10:05:48.498331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:38.954 [2024-12-11 10:05:48.498339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:27:38.954 [2024-12-11 10:05:48.503513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:38.954 [2024-12-11 10:05:48.503532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:38.954 [2024-12-11 10:05:48.503540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:27:38.954 [2024-12-11 10:05:48.508631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:38.954 [2024-12-11 10:05:48.508651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:38.954 [2024-12-11 10:05:48.508659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:27:38.954 [2024-12-11 10:05:48.513810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:38.954 [2024-12-11 10:05:48.513830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:38.954 [2024-12-11 10:05:48.513838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:27:38.954 [2024-12-11 10:05:48.518985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:38.954 [2024-12-11 10:05:48.519005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:38.954 [2024-12-11 10:05:48.519017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:27:38.954 [2024-12-11 10:05:48.524123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:38.954 [2024-12-11 10:05:48.524143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:38.954 [2024-12-11 10:05:48.524151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:27:39.215 [2024-12-11 10:05:48.529312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.215 [2024-12-11 10:05:48.529333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.215 [2024-12-11 10:05:48.529341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:27:39.215 [2024-12-11 10:05:48.535072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.215 [2024-12-11 10:05:48.535092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.215 [2024-12-11 10:05:48.535100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:27:39.215 [2024-12-11 10:05:48.539590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.215 [2024-12-11 10:05:48.539610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.215 [2024-12-11 10:05:48.539617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:27:39.215 [2024-12-11 10:05:48.544715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.215 [2024-12-11 10:05:48.544734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.215 [2024-12-11 10:05:48.544742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:27:39.215 [2024-12-11 10:05:48.549818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.215 [2024-12-11 10:05:48.549838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.215 [2024-12-11 10:05:48.549845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:27:39.215 [2024-12-11 10:05:48.554947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.215 [2024-12-11 10:05:48.554967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.215 [2024-12-11 10:05:48.554975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:27:39.215 [2024-12-11 10:05:48.560086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.215 [2024-12-11 10:05:48.560106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.215 [2024-12-11 10:05:48.560114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:27:39.215 [2024-12-11 10:05:48.565136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.215 [2024-12-11 10:05:48.565156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.215 [2024-12-11 10:05:48.565164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:27:39.215 [2024-12-11 10:05:48.570282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.215 [2024-12-11 10:05:48.570302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.215 [2024-12-11 10:05:48.570310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:27:39.215 [2024-12-11 10:05:48.575313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.215 [2024-12-11 10:05:48.575335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.215 [2024-12-11 10:05:48.575343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:27:39.215 [2024-12-11 10:05:48.580313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.215 [2024-12-11 10:05:48.580333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.215 [2024-12-11 10:05:48.580340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:27:39.215 [2024-12-11 10:05:48.585340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.215 [2024-12-11 10:05:48.585360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.215 [2024-12-11 10:05:48.585370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:27:39.215 [2024-12-11 10:05:48.590390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.215 [2024-12-11 10:05:48.590411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.215 [2024-12-11 10:05:48.590419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:27:39.215 [2024-12-11 10:05:48.595445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.215 [2024-12-11 10:05:48.595464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.215 [2024-12-11 10:05:48.595472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:27:39.215 [2024-12-11 10:05:48.600573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.215 [2024-12-11 10:05:48.600594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.215 [2024-12-11 10:05:48.600602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:27:39.215 [2024-12-11 10:05:48.605695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.215 [2024-12-11 10:05:48.605715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.215 [2024-12-11 10:05:48.605726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:27:39.215 [2024-12-11 10:05:48.610757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.215 [2024-12-11 10:05:48.610783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.215 [2024-12-11 10:05:48.610790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:27:39.215 [2024-12-11 10:05:48.615778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.215 [2024-12-11 10:05:48.615798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.215 [2024-12-11 10:05:48.615807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:27:39.215 [2024-12-11 10:05:48.620850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.215 [2024-12-11 10:05:48.620870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.215 [2024-12-11 10:05:48.620878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:27:39.215 [2024-12-11 10:05:48.625897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.215 [2024-12-11 10:05:48.625917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.215 [2024-12-11 10:05:48.625925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:27:39.215 [2024-12-11 10:05:48.630889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.215 [2024-12-11 10:05:48.630909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.215 [2024-12-11 10:05:48.630917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:27:39.215 [2024-12-11 10:05:48.636071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.215 [2024-12-11 10:05:48.636092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.215 [2024-12-11 10:05:48.636100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:27:39.215 [2024-12-11 10:05:48.641246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.215 [2024-12-11 10:05:48.641267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.215 [2024-12-11 10:05:48.641275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:27:39.215 [2024-12-11 10:05:48.646436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.215 [2024-12-11 10:05:48.646456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.216 [2024-12-11 10:05:48.646464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:27:39.216 [2024-12-11 10:05:48.651600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.216 [2024-12-11 10:05:48.651624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.216 [2024-12-11 10:05:48.651632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:27:39.216 [2024-12-11 10:05:48.656769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.216 [2024-12-11 10:05:48.656788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.216 [2024-12-11 10:05:48.656796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:27:39.216 [2024-12-11 10:05:48.661845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.216 [2024-12-11 10:05:48.661865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.216 [2024-12-11 10:05:48.661873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:27:39.216 [2024-12-11 10:05:48.667015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.216 [2024-12-11 10:05:48.667034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.216 [2024-12-11 10:05:48.667042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:27:39.216 [2024-12-11 10:05:48.672143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.216 [2024-12-11 10:05:48.672163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.216 [2024-12-11 10:05:48.672171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:27:39.216 [2024-12-11 10:05:48.678027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.216 [2024-12-11 10:05:48.678049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.216 [2024-12-11 10:05:48.678057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:27:39.216 [2024-12-11 10:05:48.685853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.216 [2024-12-11 10:05:48.685873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.216 [2024-12-11 10:05:48.685881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:27:39.216 [2024-12-11 10:05:48.693542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.216 [2024-12-11 10:05:48.693563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.216 [2024-12-11 10:05:48.693571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:27:39.216 [2024-12-11 10:05:48.700286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.216 [2024-12-11 10:05:48.700307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.216 [2024-12-11 10:05:48.700316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:27:39.216 [2024-12-11 10:05:48.707740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.216 [2024-12-11 10:05:48.707761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.216 [2024-12-11 10:05:48.707769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:27:39.216 [2024-12-11 10:05:48.715070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.216 [2024-12-11 10:05:48.715091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.216 [2024-12-11 10:05:48.715099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:27:39.216 [2024-12-11 10:05:48.723634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.216 [2024-12-11 10:05:48.723656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.216 [2024-12-11 10:05:48.723664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:27:39.216 [2024-12-11 10:05:48.730884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.216 [2024-12-11 10:05:48.730905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.216 [2024-12-11 10:05:48.730913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:27:39.216 [2024-12-11 10:05:48.737438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.216 [2024-12-11 10:05:48.737458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.216 [2024-12-11 10:05:48.737467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:27:39.216 [2024-12-11 10:05:48.744978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.216 [2024-12-11 10:05:48.744999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.216 [2024-12-11 10:05:48.745006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:27:39.216 [2024-12-11 10:05:48.752828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.216 [2024-12-11 10:05:48.752849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.216 [2024-12-11 10:05:48.752857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:27:39.216 [2024-12-11 10:05:48.760021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.216 [2024-12-11 10:05:48.760043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.216 [2024-12-11 10:05:48.760051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:27:39.216 [2024-12-11 10:05:48.767428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.216 [2024-12-11 10:05:48.767453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.216 [2024-12-11 10:05:48.767461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:27:39.216 [2024-12-11 10:05:48.775632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.216 [2024-12-11 10:05:48.775654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.216 [2024-12-11 10:05:48.775662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:27:39.216 [2024-12-11 10:05:48.783034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.216 [2024-12-11 10:05:48.783056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.216 [2024-12-11 10:05:48.783064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:27:39.476 [2024-12-11 10:05:48.789883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.476 [2024-12-11 10:05:48.789905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.476 [2024-12-11 10:05:48.789913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:27:39.476 [2024-12-11 10:05:48.794755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.476 [2024-12-11 10:05:48.794776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.476 [2024-12-11 10:05:48.794784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:27:39.476 [2024-12-11 10:05:48.799895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.477 [2024-12-11 10:05:48.799916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.477 [2024-12-11 10:05:48.799924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:27:39.477 [2024-12-11 10:05:48.805181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.477 [2024-12-11 10:05:48.805202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.477 [2024-12-11 10:05:48.805210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:27:39.477 [2024-12-11 10:05:48.810460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.477 [2024-12-11 10:05:48.810481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.477 [2024-12-11 10:05:48.810488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:27:39.477 [2024-12-11 10:05:48.815783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.477 [2024-12-11 10:05:48.815803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.477 [2024-12-11 10:05:48.815811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:27:39.477 [2024-12-11 10:05:48.821081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.477 [2024-12-11 10:05:48.821102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.477 [2024-12-11 10:05:48.821110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:27:39.477 [2024-12-11 10:05:48.826266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.477 [2024-12-11 10:05:48.826285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.477 [2024-12-11 10:05:48.826293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:27:39.477 [2024-12-11 10:05:48.831554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.477 [2024-12-11 10:05:48.831575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.477 [2024-12-11 10:05:48.831583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:27:39.477 [2024-12-11 10:05:48.836970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.477 [2024-12-11 10:05:48.836991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.477 [2024-12-11 10:05:48.836998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:27:39.477 [2024-12-11 10:05:48.842375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.477 [2024-12-11 10:05:48.842396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.477 [2024-12-11 10:05:48.842403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:27:39.477 [2024-12-11 10:05:48.847723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.477 [2024-12-11 10:05:48.847743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.477 [2024-12-11 10:05:48.847751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:27:39.477 [2024-12-11 10:05:48.853067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.477 [2024-12-11 10:05:48.853089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.477 [2024-12-11 10:05:48.853096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:27:39.477 [2024-12-11 10:05:48.858339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.477 [2024-12-11 10:05:48.858359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.477 [2024-12-11 10:05:48.858367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:27:39.477 [2024-12-11 10:05:48.863589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.477 [2024-12-11 10:05:48.863610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.477 [2024-12-11 10:05:48.863621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:27:39.477 [2024-12-11 10:05:48.869034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.477 [2024-12-11 10:05:48.869055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.477 [2024-12-11 10:05:48.869062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:27:39.477 [2024-12-11 10:05:48.874299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.477 [2024-12-11 10:05:48.874319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.477 [2024-12-11 10:05:48.874327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:27:39.477 [2024-12-11 10:05:48.879627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.477 [2024-12-11 10:05:48.879647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.477 [2024-12-11 10:05:48.879655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:27:39.477 [2024-12-11 10:05:48.884946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.477 [2024-12-11 10:05:48.884967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.477 [2024-12-11 10:05:48.884974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:27:39.477 [2024-12-11 10:05:48.890290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.477 [2024-12-11 10:05:48.890311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.477 [2024-12-11 10:05:48.890319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:27:39.477 [2024-12-11 10:05:48.895495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.477 [2024-12-11 10:05:48.895516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.477 [2024-12-11 10:05:48.895524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:27:39.477 [2024-12-11 10:05:48.900991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.477 [2024-12-11 10:05:48.901013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.477 [2024-12-11 10:05:48.901021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:27:39.477 [2024-12-11 10:05:48.906353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.477 [2024-12-11 10:05:48.906374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.477 [2024-12-11 10:05:48.906381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:27:39.477 [2024-12-11 10:05:48.911940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.477 [2024-12-11 10:05:48.911965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.477 [2024-12-11 10:05:48.911973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:27:39.477 [2024-12-11 10:05:48.917297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.477 [2024-12-11 10:05:48.917318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.477 [2024-12-11 10:05:48.917325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:27:39.477 [2024-12-11 10:05:48.922580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.477 [2024-12-11 10:05:48.922601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.477 [2024-12-11 10:05:48.922608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:27:39.477 [2024-12-11 10:05:48.927892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.477 [2024-12-11 10:05:48.927914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.477 [2024-12-11 10:05:48.927921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:27:39.477 [2024-12-11 10:05:48.933554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.477 [2024-12-11 10:05:48.933575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.477 [2024-12-11 10:05:48.933583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:27:39.477 [2024-12-11 10:05:48.940894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.477 [2024-12-11 10:05:48.940915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.477 [2024-12-11 10:05:48.940923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:27:39.477 [2024-12-11 10:05:48.948095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.478 [2024-12-11 10:05:48.948117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.478 [2024-12-11 10:05:48.948125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:27:39.478 [2024-12-11 10:05:48.954297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.478 [2024-12-11 10:05:48.954318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.478 [2024-12-11 10:05:48.954327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:27:39.478 [2024-12-11 10:05:48.960387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.478 [2024-12-11 10:05:48.960408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.478 [2024-12-11 10:05:48.960416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:27:39.478 [2024-12-11 10:05:48.966589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.478 [2024-12-11 10:05:48.966610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.478 [2024-12-11 10:05:48.966618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:27:39.478 [2024-12-11 10:05:48.972569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.478 [2024-12-11 10:05:48.972591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.478 [2024-12-11 10:05:48.972599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:27:39.478 [2024-12-11 10:05:48.978704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.478 [2024-12-11 10:05:48.978726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.478 [2024-12-11 10:05:48.978734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:27:39.478 [2024-12-11 10:05:48.984375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.478 [2024-12-11 10:05:48.984396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.478 [2024-12-11 10:05:48.984404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:27:39.478 [2024-12-11 10:05:48.989720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.478 [2024-12-11 10:05:48.989741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.478 [2024-12-11 10:05:48.989749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:27:39.478 [2024-12-11 10:05:48.995465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.478 [2024-12-11 10:05:48.995486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.478 [2024-12-11 10:05:48.995494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:27:39.478 [2024-12-11 10:05:49.001991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.478 [2024-12-11 10:05:49.002012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.478 [2024-12-11 10:05:49.002020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:27:39.478 [2024-12-11 10:05:49.009680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.478 [2024-12-11 10:05:49.009701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.478 [2024-12-11 10:05:49.009709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:27:39.478 [2024-12-11 10:05:49.016125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.478 [2024-12-11 10:05:49.016150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.478 [2024-12-11 10:05:49.016158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:27:39.478 [2024-12-11 10:05:49.022049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.478 [2024-12-11 10:05:49.022070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.478 [2024-12-11 10:05:49.022078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:27:39.478 [2024-12-11 10:05:49.028366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.478 [2024-12-11 10:05:49.028387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.478 [2024-12-11 10:05:49.028395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:27:39.478 [2024-12-11 10:05:49.033628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.478 [2024-12-11 10:05:49.033649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.478 [2024-12-11 10:05:49.033657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:27:39.478 [2024-12-11 10:05:49.038858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.478 [2024-12-11 10:05:49.038879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.478 [2024-12-11 10:05:49.038886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:27:39.478 [2024-12-11 10:05:49.044549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.478 [2024-12-11 10:05:49.044571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.478 [2024-12-11 10:05:49.044579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:27:39.738 [2024-12-11 10:05:49.051349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.738 [2024-12-11 10:05:49.051370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.738 [2024-12-11 10:05:49.051378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:27:39.738 [2024-12-11 10:05:49.058872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.738 [2024-12-11 10:05:49.058893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.738 [2024-12-11 10:05:49.058901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:27:39.738 [2024-12-11 10:05:49.065647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.738 [2024-12-11 10:05:49.065668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.738 [2024-12-11 10:05:49.065676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:27:39.738 [2024-12-11 10:05:49.072095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.738 [2024-12-11 10:05:49.072116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.738 [2024-12-11 10:05:49.072124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:27:39.738 [2024-12-11 10:05:49.078512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.738 [2024-12-11 10:05:49.078533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.738 [2024-12-11 10:05:49.078541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:27:39.738 [2024-12-11 10:05:49.084936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.738 [2024-12-11 10:05:49.084958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.738 [2024-12-11 10:05:49.084966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:27:39.738 [2024-12-11 10:05:49.092276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.738 [2024-12-11 10:05:49.092297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.738 [2024-12-11 10:05:49.092305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:27:39.738 [2024-12-11 10:05:49.098604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.738 [2024-12-11 10:05:49.098625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.738 [2024-12-11 10:05:49.098634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:27:39.738 [2024-12-11 10:05:49.105577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.738 [2024-12-11 10:05:49.105599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.738 [2024-12-11 10:05:49.105607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:27:39.738 [2024-12-11 10:05:49.112107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.738 [2024-12-11 10:05:49.112128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.738 [2024-12-11 10:05:49.112137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:27:39.738 [2024-12-11 10:05:49.118787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.738 [2024-12-11 10:05:49.118808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.738 [2024-12-11 10:05:49.118816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:27:39.738 5585.00 IOPS, 698.12 MiB/s [2024-12-11T09:05:49.313Z] [2024-12-11 10:05:49.125803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.738 [2024-12-11 10:05:49.125824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.738 [2024-12-11 10:05:49.125836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:27:39.738 [2024-12-11 10:05:49.132881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.738 [2024-12-11 10:05:49.132903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.738 [2024-12-11 10:05:49.132911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:27:39.738 [2024-12-11 10:05:49.140479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.738 [2024-12-11 10:05:49.140501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.738 [2024-12-11 10:05:49.140509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:27:39.738 [2024-12-11 10:05:49.147911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.738 [2024-12-11 10:05:49.147933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.738 [2024-12-11 10:05:49.147942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:27:39.738 [2024-12-11 10:05:49.156120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.738 [2024-12-11 10:05:49.156141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.738 [2024-12-11 10:05:49.156150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:27:39.738 [2024-12-11 10:05:49.164567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.739 [2024-12-11 10:05:49.164589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.739 [2024-12-11 10:05:49.164597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:27:39.739 [2024-12-11 10:05:49.171808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.739 [2024-12-11 10:05:49.171830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.739 [2024-12-11 10:05:49.171838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:27:39.739 [2024-12-11 10:05:49.179038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.739 [2024-12-11 10:05:49.179060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.739 [2024-12-11 10:05:49.179068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:27:39.739 [2024-12-11 10:05:49.185568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.739 [2024-12-11 10:05:49.185590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.739 [2024-12-11 10:05:49.185598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:27:39.739 [2024-12-11 10:05:49.192771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.739 [2024-12-11 10:05:49.192797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.739 [2024-12-11 10:05:49.192806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:27:39.739 [2024-12-11 10:05:49.199361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.739 [2024-12-11 10:05:49.199382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.739 [2024-12-11 10:05:49.199390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:27:39.739 [2024-12-11 10:05:49.204812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.739 [2024-12-11 10:05:49.204833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.739 [2024-12-11 10:05:49.204841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:27:39.739 [2024-12-11 10:05:49.210454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.739 [2024-12-11 10:05:49.210475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.739 [2024-12-11 10:05:49.210483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:27:39.739 [2024-12-11 10:05:49.216038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.739 [2024-12-11 10:05:49.216058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.739 [2024-12-11 10:05:49.216066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:27:39.739 [2024-12-11 10:05:49.221368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.739 [2024-12-11 10:05:49.221389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.739 [2024-12-11 10:05:49.221398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:27:39.739 [2024-12-11 10:05:49.226842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.739 [2024-12-11 10:05:49.226863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.739 [2024-12-11 10:05:49.226872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:27:39.739 [2024-12-11 10:05:49.230145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.739 [2024-12-11 10:05:49.230167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.739 [2024-12-11 10:05:49.230176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:27:39.739 [2024-12-11 10:05:49.234313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:39.739 [2024-12-11 10:05:49.234333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.739 [2024-12-11 10:05:49.234344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:39.739 [2024-12-11 10:05:49.239688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:39.739 [2024-12-11 10:05:49.239709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.739 [2024-12-11 10:05:49.239717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:39.739 [2024-12-11 10:05:49.244833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:39.739 [2024-12-11 10:05:49.244854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.739 [2024-12-11 10:05:49.244862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:39.739 [2024-12-11 10:05:49.249814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:39.739 [2024-12-11 10:05:49.249835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.739 [2024-12-11 10:05:49.249843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:39.739 [2024-12-11 10:05:49.255289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:39.739 [2024-12-11 10:05:49.255310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.739 [2024-12-11 10:05:49.255318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:39.739 [2024-12-11 10:05:49.260756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:39.739 [2024-12-11 10:05:49.260777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.739 [2024-12-11 10:05:49.260784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:39.739 [2024-12-11 10:05:49.265909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:39.739 [2024-12-11 10:05:49.265930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.739 [2024-12-11 10:05:49.265938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:39.739 [2024-12-11 10:05:49.271112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x15428e0) 00:27:39.739 [2024-12-11 10:05:49.271133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.739 [2024-12-11 10:05:49.271141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:39.739 [2024-12-11 10:05:49.276341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:39.739 [2024-12-11 10:05:49.276361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.739 [2024-12-11 10:05:49.276370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:39.739 [2024-12-11 10:05:49.281714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:39.739 [2024-12-11 10:05:49.281740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.739 [2024-12-11 10:05:49.281748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:39.739 [2024-12-11 10:05:49.286991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:39.739 [2024-12-11 10:05:49.287011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.739 [2024-12-11 10:05:49.287019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:39.739 [2024-12-11 10:05:49.292442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:39.739 [2024-12-11 10:05:49.292463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.739 [2024-12-11 10:05:49.292471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:39.739 [2024-12-11 10:05:49.297883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:39.739 [2024-12-11 10:05:49.297904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.739 [2024-12-11 10:05:49.297912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:39.739 [2024-12-11 10:05:49.303379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:39.739 [2024-12-11 10:05:49.303401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.739 [2024-12-11 10:05:49.303410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:39.739 [2024-12-11 10:05:49.308610] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:39.739 [2024-12-11 10:05:49.308631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.739 [2024-12-11 10:05:49.308639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:39.999 [2024-12-11 10:05:49.313761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:39.999 [2024-12-11 10:05:49.313783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.999 [2024-12-11 10:05:49.313791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:39.999 [2024-12-11 10:05:49.319001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:39.999 [2024-12-11 10:05:49.319022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.999 [2024-12-11 10:05:49.319030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:40.000 [2024-12-11 10:05:49.324194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.000 [2024-12-11 10:05:49.324215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.000 [2024-12-11 10:05:49.324229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:40.000 [2024-12-11 10:05:49.329472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.000 [2024-12-11 10:05:49.329493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.000 [2024-12-11 10:05:49.329501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:40.000 [2024-12-11 10:05:49.335117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.000 [2024-12-11 10:05:49.335138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.000 [2024-12-11 10:05:49.335145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:40.000 [2024-12-11 10:05:49.340324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.000 [2024-12-11 10:05:49.340344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.000 [2024-12-11 10:05:49.340352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:002d p:0 m:0 
dnr:0 00:27:40.000 [2024-12-11 10:05:49.345550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.000 [2024-12-11 10:05:49.345572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.000 [2024-12-11 10:05:49.345580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:40.000 [2024-12-11 10:05:49.351036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.000 [2024-12-11 10:05:49.351057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.000 [2024-12-11 10:05:49.351065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:40.000 [2024-12-11 10:05:49.356378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.000 [2024-12-11 10:05:49.356399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.000 [2024-12-11 10:05:49.356407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:40.000 [2024-12-11 10:05:49.361652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.000 [2024-12-11 10:05:49.361673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.000 [2024-12-11 10:05:49.361681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:40.000 [2024-12-11 10:05:49.366858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.000 [2024-12-11 10:05:49.366879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.000 [2024-12-11 10:05:49.366887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:40.000 [2024-12-11 10:05:49.372213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.000 [2024-12-11 10:05:49.372240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.000 [2024-12-11 10:05:49.372251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:40.000 [2024-12-11 10:05:49.377531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.000 [2024-12-11 10:05:49.377551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.000 [2024-12-11 10:05:49.377559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:40.000 [2024-12-11 10:05:49.382991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.000 [2024-12-11 10:05:49.383012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.000 [2024-12-11 10:05:49.383019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:40.000 [2024-12-11 10:05:49.388317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.000 [2024-12-11 10:05:49.388337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.000 [2024-12-11 10:05:49.388345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:40.000 [2024-12-11 10:05:49.393738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.000 [2024-12-11 10:05:49.393759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.000 [2024-12-11 10:05:49.393767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:40.000 [2024-12-11 10:05:49.399058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.000 [2024-12-11 10:05:49.399078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.000 [2024-12-11 10:05:49.399086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:40.000 [2024-12-11 10:05:49.404437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.000 [2024-12-11 10:05:49.404458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.000 [2024-12-11 10:05:49.404465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:40.000 [2024-12-11 10:05:49.409997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.000 [2024-12-11 10:05:49.410018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.000 [2024-12-11 10:05:49.410026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:40.000 [2024-12-11 10:05:49.415403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.000 [2024-12-11 10:05:49.415424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.000 [2024-12-11 10:05:49.415432] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:40.000 [2024-12-11 10:05:49.420750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.000 [2024-12-11 10:05:49.420775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.000 [2024-12-11 10:05:49.420783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:40.000 [2024-12-11 10:05:49.426387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.000 [2024-12-11 10:05:49.426407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.000 [2024-12-11 10:05:49.426415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:40.000 [2024-12-11 10:05:49.431936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.000 [2024-12-11 10:05:49.431957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.000 [2024-12-11 10:05:49.431965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:40.000 [2024-12-11 10:05:49.437417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.000 [2024-12-11 10:05:49.437438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.000 [2024-12-11 10:05:49.437445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:40.000 [2024-12-11 10:05:49.442762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.000 [2024-12-11 10:05:49.442783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.000 [2024-12-11 10:05:49.442791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:40.000 [2024-12-11 10:05:49.448005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.000 [2024-12-11 10:05:49.448026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.000 [2024-12-11 10:05:49.448033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:40.000 [2024-12-11 10:05:49.453263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.000 [2024-12-11 10:05:49.453283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:40.000 [2024-12-11 10:05:49.453291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:40.000 [2024-12-11 10:05:49.458323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.000 [2024-12-11 10:05:49.458344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.000 [2024-12-11 10:05:49.458351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:40.000 [2024-12-11 10:05:49.463587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.000 [2024-12-11 10:05:49.463607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.000 [2024-12-11 10:05:49.463615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:40.000 [2024-12-11 10:05:49.468787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.001 [2024-12-11 10:05:49.468808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.001 [2024-12-11 10:05:49.468816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:40.001 [2024-12-11 10:05:49.474152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.001 [2024-12-11 10:05:49.474173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.001 [2024-12-11 10:05:49.474181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:40.001 [2024-12-11 10:05:49.480051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.001 [2024-12-11 10:05:49.480072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.001 [2024-12-11 10:05:49.480080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:40.001 [2024-12-11 10:05:49.485506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.001 [2024-12-11 10:05:49.485527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.001 [2024-12-11 10:05:49.485534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:40.001 [2024-12-11 10:05:49.491541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.001 [2024-12-11 10:05:49.491562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5568 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.001 [2024-12-11 10:05:49.491570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:40.001 [2024-12-11 10:05:49.497060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.001 [2024-12-11 10:05:49.497080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.001 [2024-12-11 10:05:49.497089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:40.001 [2024-12-11 10:05:49.502293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.001 [2024-12-11 10:05:49.502313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.001 [2024-12-11 10:05:49.502321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:40.001 [2024-12-11 10:05:49.507597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.001 [2024-12-11 10:05:49.507617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.001 [2024-12-11 10:05:49.507625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:40.001 [2024-12-11 10:05:49.513975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.001 [2024-12-11 10:05:49.513997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.001 [2024-12-11 10:05:49.514008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:40.001 [2024-12-11 10:05:49.521959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.001 [2024-12-11 10:05:49.521981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.001 [2024-12-11 10:05:49.521989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:40.001 [2024-12-11 10:05:49.529720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.001 [2024-12-11 10:05:49.529741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.001 [2024-12-11 10:05:49.529751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:40.001 [2024-12-11 10:05:49.536361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.001 [2024-12-11 10:05:49.536384] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.001 [2024-12-11 10:05:49.536393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:40.001 [2024-12-11 10:05:49.542562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.001 [2024-12-11 10:05:49.542584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.001 [2024-12-11 10:05:49.542592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:40.001 [2024-12-11 10:05:49.549925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.001 [2024-12-11 10:05:49.549946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.001 [2024-12-11 10:05:49.549955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:40.001 [2024-12-11 10:05:49.557554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.001 [2024-12-11 10:05:49.557576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.001 [2024-12-11 10:05:49.557584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:40.001 [2024-12-11 10:05:49.565439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.001 [2024-12-11 10:05:49.565461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.001 [2024-12-11 10:05:49.565469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:40.261 [2024-12-11 10:05:49.573111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.261 [2024-12-11 10:05:49.573134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.261 [2024-12-11 10:05:49.573142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:40.261 [2024-12-11 10:05:49.580594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.261 [2024-12-11 10:05:49.580615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.261 [2024-12-11 10:05:49.580623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:40.261 [2024-12-11 10:05:49.587459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 
00:27:40.261 [2024-12-11 10:05:49.587480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.261 [2024-12-11 10:05:49.587488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:40.261 [2024-12-11 10:05:49.595300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.261 [2024-12-11 10:05:49.595322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.261 [2024-12-11 10:05:49.595330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:40.261 [2024-12-11 10:05:49.603272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.261 [2024-12-11 10:05:49.603293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.261 [2024-12-11 10:05:49.603301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:40.261 [2024-12-11 10:05:49.610657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.261 [2024-12-11 10:05:49.610678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.261 [2024-12-11 10:05:49.610686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:40.261 [2024-12-11 10:05:49.618712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.261 [2024-12-11 10:05:49.618734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.261 [2024-12-11 10:05:49.618742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:40.261 [2024-12-11 10:05:49.626669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.261 [2024-12-11 10:05:49.626690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.261 [2024-12-11 10:05:49.626699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:40.261 [2024-12-11 10:05:49.634448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.261 [2024-12-11 10:05:49.634469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.261 [2024-12-11 10:05:49.634477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:40.261 [2024-12-11 10:05:49.642635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.261 [2024-12-11 10:05:49.642658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.261 [2024-12-11 10:05:49.642670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:40.261 [2024-12-11 10:05:49.650523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.261 [2024-12-11 10:05:49.650545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.261 [2024-12-11 10:05:49.650553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:40.261 [2024-12-11 10:05:49.658437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.261 [2024-12-11 10:05:49.658459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.261 [2024-12-11 10:05:49.658468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:40.261 [2024-12-11 10:05:49.666150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.261 [2024-12-11 10:05:49.666172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.261 [2024-12-11 10:05:49.666181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:40.261 [2024-12-11 10:05:49.673736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.261 [2024-12-11 10:05:49.673759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.261 [2024-12-11 10:05:49.673766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:40.261 [2024-12-11 10:05:49.681097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.261 [2024-12-11 10:05:49.681119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.261 [2024-12-11 10:05:49.681127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:40.261 [2024-12-11 10:05:49.689115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.261 [2024-12-11 10:05:49.689137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.261 [2024-12-11 10:05:49.689145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:40.261 [2024-12-11 10:05:49.697310] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.261 [2024-12-11 10:05:49.697332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.261 [2024-12-11 10:05:49.697340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:40.261 [2024-12-11 10:05:49.704184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.261 [2024-12-11 10:05:49.704206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.261 [2024-12-11 10:05:49.704214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:40.261 [2024-12-11 10:05:49.710651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.261 [2024-12-11 10:05:49.710676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.261 [2024-12-11 10:05:49.710684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:40.261 [2024-12-11 10:05:49.717922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.261 [2024-12-11 10:05:49.717944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.261 [2024-12-11 10:05:49.717952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:40.261 [2024-12-11 10:05:49.726008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.261 [2024-12-11 10:05:49.726030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.261 [2024-12-11 10:05:49.726038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:40.261 [2024-12-11 10:05:49.733409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.261 [2024-12-11 10:05:49.733430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.261 [2024-12-11 10:05:49.733438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:40.262 [2024-12-11 10:05:49.741048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.262 [2024-12-11 10:05:49.741069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.262 [2024-12-11 10:05:49.741077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006d p:0 m:0 
dnr:0 00:27:40.262 [2024-12-11 10:05:49.748544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.262 [2024-12-11 10:05:49.748566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.262 [2024-12-11 10:05:49.748574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:40.262 [2024-12-11 10:05:49.756249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.262 [2024-12-11 10:05:49.756271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.262 [2024-12-11 10:05:49.756279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:40.262 [2024-12-11 10:05:49.764297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.262 [2024-12-11 10:05:49.764319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.262 [2024-12-11 10:05:49.764327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:40.262 [2024-12-11 10:05:49.771947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.262 [2024-12-11 10:05:49.771968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.262 [2024-12-11 10:05:49.771976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:40.262 [2024-12-11 10:05:49.779279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.262 [2024-12-11 10:05:49.779302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.262 [2024-12-11 10:05:49.779311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:40.262 [2024-12-11 10:05:49.786622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.262 [2024-12-11 10:05:49.786644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.262 [2024-12-11 10:05:49.786653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:40.262 [2024-12-11 10:05:49.794291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.262 [2024-12-11 10:05:49.794314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.262 [2024-12-11 10:05:49.794323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:11 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:40.262 [2024-12-11 10:05:49.800785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.262 [2024-12-11 10:05:49.800807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.262 [2024-12-11 10:05:49.800815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:40.262 [2024-12-11 10:05:49.806647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.262 [2024-12-11 10:05:49.806669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.262 [2024-12-11 10:05:49.806677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:40.262 [2024-12-11 10:05:49.813005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.262 [2024-12-11 10:05:49.813027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.262 [2024-12-11 10:05:49.813034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:40.262 [2024-12-11 10:05:49.819413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.262 [2024-12-11 10:05:49.819436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.262 [2024-12-11 10:05:49.819444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:40.262 [2024-12-11 10:05:49.825783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.262 [2024-12-11 10:05:49.825806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.262 [2024-12-11 10:05:49.825814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:40.262 [2024-12-11 10:05:49.831702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.262 [2024-12-11 10:05:49.831725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.262 [2024-12-11 10:05:49.831737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:40.522 [2024-12-11 10:05:49.839522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.522 [2024-12-11 10:05:49.839544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.522 [2024-12-11 10:05:49.839552] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:40.522 [2024-12-11 10:05:49.846871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.522 [2024-12-11 10:05:49.846893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.522 [2024-12-11 10:05:49.846901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:40.522 [2024-12-11 10:05:49.853546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.522 [2024-12-11 10:05:49.853567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.522 [2024-12-11 10:05:49.853576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:40.522 [2024-12-11 10:05:49.858928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.522 [2024-12-11 10:05:49.858949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.522 [2024-12-11 10:05:49.858958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:40.522 [2024-12-11 10:05:49.864759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.522 [2024-12-11 10:05:49.864781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.522 [2024-12-11 10:05:49.864789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:40.522 [2024-12-11 10:05:49.870409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.522 [2024-12-11 10:05:49.870430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.522 [2024-12-11 10:05:49.870438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:40.522 [2024-12-11 10:05:49.875940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.522 [2024-12-11 10:05:49.875961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.522 [2024-12-11 10:05:49.875969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:40.522 [2024-12-11 10:05:49.881215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.522 [2024-12-11 10:05:49.881242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:40.522 [2024-12-11 10:05:49.881250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:40.522 [2024-12-11 10:05:49.886441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.522 [2024-12-11 10:05:49.886469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.522 [2024-12-11 10:05:49.886477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:40.522 [2024-12-11 10:05:49.890016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.522 [2024-12-11 10:05:49.890036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.522 [2024-12-11 10:05:49.890044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:40.522 [2024-12-11 10:05:49.894408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.522 [2024-12-11 10:05:49.894429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.522 [2024-12-11 10:05:49.894437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:40.522 [2024-12-11 10:05:49.899511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.522 [2024-12-11 10:05:49.899532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.522 [2024-12-11 10:05:49.899541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:40.522 [2024-12-11 10:05:49.904618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.522 [2024-12-11 10:05:49.904638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.522 [2024-12-11 10:05:49.904647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:40.522 [2024-12-11 10:05:49.909711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.522 [2024-12-11 10:05:49.909732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.522 [2024-12-11 10:05:49.909740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:40.522 [2024-12-11 10:05:49.914880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.522 [2024-12-11 10:05:49.914902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.522 [2024-12-11 10:05:49.914910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:40.522 [2024-12-11 10:05:49.920203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.522 [2024-12-11 10:05:49.920230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.522 [2024-12-11 10:05:49.920238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:40.522 [2024-12-11 10:05:49.925540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.522 [2024-12-11 10:05:49.925561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.522 [2024-12-11 10:05:49.925572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:40.522 [2024-12-11 10:05:49.931056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.522 [2024-12-11 10:05:49.931077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.522 [2024-12-11 10:05:49.931085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:40.522 [2024-12-11 10:05:49.936268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.522 [2024-12-11 10:05:49.936289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.522 [2024-12-11 10:05:49.936297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:40.522 [2024-12-11 10:05:49.941605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.522 [2024-12-11 10:05:49.941627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.522 [2024-12-11 10:05:49.941634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:40.522 [2024-12-11 10:05:49.946804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.522 [2024-12-11 10:05:49.946825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.522 [2024-12-11 10:05:49.946833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:40.522 [2024-12-11 10:05:49.952158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.522 [2024-12-11 10:05:49.952179] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.522 [2024-12-11 10:05:49.952187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:40.522 [2024-12-11 10:05:49.957082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.522 [2024-12-11 10:05:49.957103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.522 [2024-12-11 10:05:49.957111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:40.522 [2024-12-11 10:05:49.961954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.522 [2024-12-11 10:05:49.961976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.522 [2024-12-11 10:05:49.961984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:40.522 [2024-12-11 10:05:49.966982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.522 [2024-12-11 10:05:49.967003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.522 [2024-12-11 10:05:49.967011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:40.522 [2024-12-11 10:05:49.972107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.522 [2024-12-11 10:05:49.972133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.522 [2024-12-11 10:05:49.972141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:40.522 [2024-12-11 10:05:49.977102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.522 [2024-12-11 10:05:49.977124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.522 [2024-12-11 10:05:49.977132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:40.522 [2024-12-11 10:05:49.982208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.522 [2024-12-11 10:05:49.982235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.522 [2024-12-11 10:05:49.982244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:40.522 [2024-12-11 10:05:49.987328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 
00:27:40.522 [2024-12-11 10:05:49.987349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.522 [2024-12-11 10:05:49.987357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:40.522 [2024-12-11 10:05:49.992403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.522 [2024-12-11 10:05:49.992425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.522 [2024-12-11 10:05:49.992432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:40.522 [2024-12-11 10:05:49.997612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.522 [2024-12-11 10:05:49.997632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.522 [2024-12-11 10:05:49.997640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:40.522 [2024-12-11 10:05:50.002903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.523 [2024-12-11 10:05:50.002926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.523 [2024-12-11 10:05:50.002934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:40.523 [2024-12-11 10:05:50.009152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.523 [2024-12-11 10:05:50.009177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.523 [2024-12-11 10:05:50.009188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:40.523 [2024-12-11 10:05:50.014562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.523 [2024-12-11 10:05:50.014584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.523 [2024-12-11 10:05:50.014593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:40.523 [2024-12-11 10:05:50.019547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.523 [2024-12-11 10:05:50.019569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.523 [2024-12-11 10:05:50.019578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:40.523 [2024-12-11 10:05:50.024794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.523 [2024-12-11 10:05:50.024816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.523 [2024-12-11 10:05:50.024825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:40.523 [2024-12-11 10:05:50.030105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.523 [2024-12-11 10:05:50.030127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.523 [2024-12-11 10:05:50.030135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:40.523 [2024-12-11 10:05:50.035686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.523 [2024-12-11 10:05:50.035710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.523 [2024-12-11 10:05:50.035719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:40.523 [2024-12-11 10:05:50.042398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.523 [2024-12-11 10:05:50.042421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.523 [2024-12-11 10:05:50.042429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:40.523 [2024-12-11 10:05:50.047780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.523 [2024-12-11 10:05:50.047802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.523 [2024-12-11 10:05:50.047810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:40.523 [2024-12-11 10:05:50.053140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.523 [2024-12-11 10:05:50.053162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.523 [2024-12-11 10:05:50.053170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:40.523 [2024-12-11 10:05:50.058607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.523 [2024-12-11 10:05:50.058628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.523 [2024-12-11 10:05:50.058637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:40.523 [2024-12-11 10:05:50.063992] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.523 [2024-12-11 10:05:50.064014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.523 [2024-12-11 10:05:50.064027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:40.523 [2024-12-11 10:05:50.069681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.523 [2024-12-11 10:05:50.069702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.523 [2024-12-11 10:05:50.069710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:40.523 [2024-12-11 10:05:50.075092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.523 [2024-12-11 10:05:50.075114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.523 [2024-12-11 10:05:50.075121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:40.523 [2024-12-11 10:05:50.081262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.523 [2024-12-11 10:05:50.081286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.523 [2024-12-11 10:05:50.081294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:40.523 [2024-12-11 10:05:50.086715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.523 [2024-12-11 10:05:50.086737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.523 [2024-12-11 10:05:50.086745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:40.523 [2024-12-11 10:05:50.092374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.523 [2024-12-11 10:05:50.092395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.523 [2024-12-11 10:05:50.092403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:40.782 [2024-12-11 10:05:50.097720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0) 00:27:40.782 [2024-12-11 10:05:50.097742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.782 [2024-12-11 10:05:50.097750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 
00:27:40.782 [2024-12-11 10:05:50.103194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:40.782 [2024-12-11 10:05:50.103214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:40.782 [2024-12-11 10:05:50.103230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:27:40.782 [2024-12-11 10:05:50.108981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:40.782 [2024-12-11 10:05:50.109003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:40.782 [2024-12-11 10:05:50.109011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:27:40.782 [2024-12-11 10:05:50.114281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:40.782 [2024-12-11 10:05:50.114307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:40.782 [2024-12-11 10:05:50.114316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:27:40.782 [2024-12-11 10:05:50.119494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:40.782 [2024-12-11 10:05:50.119515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:40.782 [2024-12-11 10:05:50.119524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:27:40.782 [2024-12-11 10:05:50.124592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15428e0)
00:27:40.782 [2024-12-11 10:05:50.124613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:40.782 [2024-12-11 10:05:50.124621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:27:40.782 5369.00 IOPS, 671.12 MiB/s
00:27:40.782 Latency(us)
00:27:40.782 [2024-12-11T09:05:50.357Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:40.782 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:27:40.782 nvme0n1 : 2.00 5373.41 671.68 0.00 0.00 2974.41 639.76 8862.96
00:27:40.782 [2024-12-11T09:05:50.357Z] ===================================================================================================================
00:27:40.782 [2024-12-11T09:05:50.357Z] Total : 5373.41 671.68 0.00 0.00 2974.41 639.76 8862.96
00:27:40.782 {
00:27:40.782 "results": [
00:27:40.782 {
00:27:40.782 "job": "nvme0n1",
00:27:40.782 "core_mask": "0x2",
00:27:40.782 "workload": "randread",
00:27:40.782 "status": "finished",
00:27:40.782 "queue_depth": 16,
00:27:40.782 "io_size": 131072,
00:27:40.782 "runtime": 2.003382,
00:27:40.782 "iops": 5373.413557673974,
00:27:40.782 "mibps": 671.6766947092467,
"io_failed": 0, 00:27:40.782 "io_timeout": 0, 00:27:40.782 "avg_latency_us": 2974.408788976622, 00:27:40.782 "min_latency_us": 639.7561904761905, 00:27:40.782 "max_latency_us": 8862.96380952381 00:27:40.782 } 00:27:40.782 ], 00:27:40.782 "core_count": 1 00:27:40.782 } 00:27:40.782 10:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:40.782 10:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:40.782 10:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:40.782 | .driver_specific 00:27:40.782 | .nvme_error 00:27:40.782 | .status_code 00:27:40.782 | .command_transient_transport_error' 00:27:40.782 10:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:40.782 10:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 347 > 0 )) 00:27:40.782 10:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 237429 00:27:40.782 10:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 237429 ']' 00:27:40.782 10:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 237429 00:27:40.782 10:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:41.041 10:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:41.041 10:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 237429 00:27:41.041 10:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:41.041 10:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:41.041 10:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 237429' 00:27:41.041 killing process with pid 237429 00:27:41.041 10:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 237429 00:27:41.041 Received shutdown signal, test time was about 2.000000 seconds 00:27:41.041 00:27:41.041 Latency(us) 00:27:41.041 [2024-12-11T09:05:50.616Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:41.041 [2024-12-11T09:05:50.616Z] =================================================================================================================== 00:27:41.041 [2024-12-11T09:05:50.616Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:41.041 10:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 237429 00:27:41.041 10:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:27:41.041 10:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:41.041 10:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:27:41.041 10:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:41.041 10:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error 
10:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:27:41.042 10:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=237998
00:27:41.042 10:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 237998 /var/tmp/bperf.sock
00:27:41.042 10:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:27:41.042 10:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 237998 ']'
00:27:41.042 10:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:41.042 10:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:41.042 10:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:27:41.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:27:41.042 10:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:41.042 10:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:41.042 [2024-12-11 10:05:50.612391] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization...
00:27:41.042 [2024-12-11 10:05:50.612439] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid237998 ]
00:27:41.300 [2024-12-11 10:05:50.693811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:41.300 [2024-12-11 10:05:50.732855] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:27:41.300 10:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:41.300 10:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:27:41.300 10:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:41.300 10:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:41.558 10:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:27:41.558 10:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:41.558 10:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:41.558 10:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:41.558 10:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:41.558 10:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error --
host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:41.818 nvme0n1 00:27:41.818 10:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:41.818 10:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.818 10:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:41.818 10:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.818 10:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:41.818 10:05:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:41.818 Running I/O for 2 seconds... 00:27:42.077 [2024-12-11 10:05:51.399280] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016edf118 00:27:42.077 [2024-12-11 10:05:51.400628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.077 [2024-12-11 10:05:51.400656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:42.077 [2024-12-11 10:05:51.407809] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ee95a0 00:27:42.077 [2024-12-11 10:05:51.409020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:10434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.077 [2024-12-11 10:05:51.409043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:42.077 [2024-12-11 10:05:51.417492] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016eef6a8 00:27:42.077 [2024-12-11 10:05:51.418481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:14707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.077 [2024-12-11 10:05:51.418501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.077 [2024-12-11 10:05:51.426112] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ee0ea0 00:27:42.077 [2024-12-11 10:05:51.427101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.077 [2024-12-11 10:05:51.427121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:42.077 [2024-12-11 10:05:51.435064] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016eedd58 00:27:42.077 [2024-12-11 10:05:51.436050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.077 [2024-12-11 10:05:51.436069] 
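
Every WRITE in this stretch fails its data-digest check and completes as a transient transport error, which is exactly what the setup traced above arranged. A condensed sketch of that arrangement, reusing the RPC invocations visible in the trace; the socket paths, the 10.0.0.2:4420 target, and the assumption that rpc_cmd resolves to the nvmf target's default RPC socket are specific to this run:

  BPERF_RPC='./scripts/rpc.py -s /var/tmp/bperf.sock'   # bdevperf instance
  TGT_RPC='./scripts/rpc.py'                            # nvmf target, default socket (assumed)

  # Keep per-controller NVMe status-code counters and retry failed I/O
  # indefinitely, so digest failures are counted instead of aborting the job.
  $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Attach the target with data digest enabled: every TCP data PDU now
  # carries a CRC32C that the receiving side verifies.
  $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Inject corruption into software crc32c results via the accel error RPC
  # (arguments as in the trace). The mismatches surface as the tcp.c
  # "Data digest error" lines above, and each affected WRITE completes as
  # a COMMAND TRANSIENT TRANSPORT ERROR on the host side.
  $TGT_RPC accel_error_inject_error -o crc32c -t corrupt -i 256

  # Drive the timed workload in the already-running bdevperf (-z) process.
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
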
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:42.077 [2024-12-11 10:05:51.444032] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef7538 00:27:42.077 [2024-12-11 10:05:51.445112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:23154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.077 [2024-12-11 10:05:51.445131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:42.077 [2024-12-11 10:05:51.453462] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016efe2e8 00:27:42.077 [2024-12-11 10:05:51.454571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:21306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.077 [2024-12-11 10:05:51.454589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:42.078 [2024-12-11 10:05:51.462890] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ee27f0 00:27:42.078 [2024-12-11 10:05:51.464103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:12395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.078 [2024-12-11 10:05:51.464121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:42.078 [2024-12-11 10:05:51.472037] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef35f0 00:27:42.078 [2024-12-11 10:05:51.473302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.078 [2024-12-11 10:05:51.473321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.078 [2024-12-11 10:05:51.479976] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ee3060 00:27:42.078 [2024-12-11 10:05:51.480751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.078 [2024-12-11 10:05:51.480769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:42.078 [2024-12-11 10:05:51.489599] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016efe2e8 00:27:42.078 [2024-12-11 10:05:51.490656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:24572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.078 [2024-12-11 10:05:51.490674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.078 [2024-12-11 10:05:51.498065] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ee5658 00:27:42.078 [2024-12-11 10:05:51.498696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:10167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.078 [2024-12-11 
10:05:51.498715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:42.078 [2024-12-11 10:05:51.507025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016efda78 00:27:42.078 [2024-12-11 10:05:51.507641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:15820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.078 [2024-12-11 10:05:51.507659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:42.078 [2024-12-11 10:05:51.515924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ee5658 00:27:42.078 [2024-12-11 10:05:51.516535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.078 [2024-12-11 10:05:51.516554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:42.078 [2024-12-11 10:05:51.525010] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef3e60 00:27:42.078 [2024-12-11 10:05:51.525525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:16075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.078 [2024-12-11 10:05:51.525544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:42.078 [2024-12-11 10:05:51.534476] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016efeb58 00:27:42.078 [2024-12-11 10:05:51.535074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.078 [2024-12-11 10:05:51.535092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:42.078 [2024-12-11 10:05:51.543737] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016efc560 00:27:42.078 [2024-12-11 10:05:51.544579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:9201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.078 [2024-12-11 10:05:51.544598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:42.078 [2024-12-11 10:05:51.552511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ee8088 00:27:42.078 [2024-12-11 10:05:51.553454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:7568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.078 [2024-12-11 10:05:51.553473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:42.078 [2024-12-11 10:05:51.562634] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ee8088 00:27:42.078 [2024-12-11 10:05:51.564024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:18276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:42.078 [2024-12-11 10:05:51.564042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:42.078 [2024-12-11 10:05:51.572091] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016eed0b0 00:27:42.078 [2024-12-11 10:05:51.573572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.078 [2024-12-11 10:05:51.573590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:42.078 [2024-12-11 10:05:51.578437] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef8e88 00:27:42.078 [2024-12-11 10:05:51.579107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:12132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.078 [2024-12-11 10:05:51.579125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:42.078 [2024-12-11 10:05:51.586977] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016eecc78 00:27:42.078 [2024-12-11 10:05:51.587566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.078 [2024-12-11 10:05:51.587584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:42.078 [2024-12-11 10:05:51.596884] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016eedd58 00:27:42.078 [2024-12-11 10:05:51.597570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.078 [2024-12-11 10:05:51.597593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:42.078 [2024-12-11 10:05:51.607172] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef35f0 00:27:42.078 [2024-12-11 10:05:51.608223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:10214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.078 [2024-12-11 10:05:51.608242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:42.078 [2024-12-11 10:05:51.614349] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ee12d8 00:27:42.078 [2024-12-11 10:05:51.614904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:3371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.078 [2024-12-11 10:05:51.614921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:42.078 [2024-12-11 10:05:51.623763] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ee9e10 00:27:42.078 [2024-12-11 10:05:51.624452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15777 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:27:42.078 [2024-12-11 10:05:51.624471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:42.078 [2024-12-11 10:05:51.633420] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ee5220 00:27:42.078 [2024-12-11 10:05:51.634096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:19561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.078 [2024-12-11 10:05:51.634115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:42.078 [2024-12-11 10:05:51.643546] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef7100 00:27:42.079 [2024-12-11 10:05:51.644729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.079 [2024-12-11 10:05:51.644747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:42.339 [2024-12-11 10:05:51.651971] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016efb048 00:27:42.339 [2024-12-11 10:05:51.653186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:9726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.339 [2024-12-11 10:05:51.653206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:42.339 [2024-12-11 10:05:51.662144] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef6890 00:27:42.339 [2024-12-11 10:05:51.663276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.339 [2024-12-11 10:05:51.663295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:42.339 [2024-12-11 10:05:51.669654] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016efdeb0 00:27:42.339 [2024-12-11 10:05:51.670308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:25026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.339 [2024-12-11 10:05:51.670326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:42.339 [2024-12-11 10:05:51.678752] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016efcdd0 00:27:42.339 [2024-12-11 10:05:51.679379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:24153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.339 [2024-12-11 10:05:51.679396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:42.339 [2024-12-11 10:05:51.687753] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ee0ea0 00:27:42.339 [2024-12-11 10:05:51.688399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:7237 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.339 [2024-12-11 10:05:51.688416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:42.339 [2024-12-11 10:05:51.696793] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016edfdc0 00:27:42.339 [2024-12-11 10:05:51.697417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:24649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.339 [2024-12-11 10:05:51.697435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:42.339 [2024-12-11 10:05:51.705772] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016eef6a8 00:27:42.339 [2024-12-11 10:05:51.706416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.339 [2024-12-11 10:05:51.706434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:42.340 [2024-12-11 10:05:51.714781] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ee4140 00:27:42.340 [2024-12-11 10:05:51.715433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:8239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.340 [2024-12-11 10:05:51.715451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:42.340 [2024-12-11 10:05:51.723779] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ee5220 00:27:42.340 [2024-12-11 10:05:51.724422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:14873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.340 [2024-12-11 10:05:51.724441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:42.340 [2024-12-11 10:05:51.733973] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef57b0 00:27:42.340 [2024-12-11 10:05:51.735052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:11612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.340 [2024-12-11 10:05:51.735070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:42.340 [2024-12-11 10:05:51.743364] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016eec840 00:27:42.340 [2024-12-11 10:05:51.744511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:14498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.340 [2024-12-11 10:05:51.744529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:42.340 [2024-12-11 10:05:51.751715] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016efb480 00:27:42.340 [2024-12-11 10:05:51.752583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 
nsid:1 lba:18881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.340 [2024-12-11 10:05:51.752600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:42.340 [2024-12-11 10:05:51.760814] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016edf988 00:27:42.340 [2024-12-11 10:05:51.761491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.340 [2024-12-11 10:05:51.761510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:42.340 [2024-12-11 10:05:51.771177] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016efda78 00:27:42.340 [2024-12-11 10:05:51.772612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.340 [2024-12-11 10:05:51.772629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:42.340 [2024-12-11 10:05:51.777517] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ee6b70 00:27:42.340 [2024-12-11 10:05:51.778137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:24886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.340 [2024-12-11 10:05:51.778155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:42.340 [2024-12-11 10:05:51.786472] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ee6fa8 00:27:42.340 [2024-12-11 10:05:51.787188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:10954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.340 [2024-12-11 10:05:51.787206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:42.340 [2024-12-11 10:05:51.796539] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016eecc78 00:27:42.340 [2024-12-11 10:05:51.797401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:14126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.340 [2024-12-11 10:05:51.797419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:42.340 [2024-12-11 10:05:51.805852] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016efc128 00:27:42.340 [2024-12-11 10:05:51.806821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:9073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.340 [2024-12-11 10:05:51.806839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:42.340 [2024-12-11 10:05:51.814969] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ee7c50 00:27:42.340 [2024-12-11 10:05:51.815970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:117 nsid:1 lba:13587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.340 [2024-12-11 10:05:51.815987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:42.340 [2024-12-11 10:05:51.823912] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ee8d30 00:27:42.340 [2024-12-11 10:05:51.824965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:2871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.340 [2024-12-11 10:05:51.824984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:42.340 [2024-12-11 10:05:51.832976] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef5378 00:27:42.340 [2024-12-11 10:05:51.833991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:19215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.340 [2024-12-11 10:05:51.834013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:42.340 [2024-12-11 10:05:51.841989] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef0ff8 00:27:42.340 [2024-12-11 10:05:51.842974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:6092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.340 [2024-12-11 10:05:51.842993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:42.340 [2024-12-11 10:05:51.850381] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016eddc00 00:27:42.340 [2024-12-11 10:05:51.851339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:12663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.340 [2024-12-11 10:05:51.851357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:42.340 [2024-12-11 10:05:51.858760] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ede470 00:27:42.340 [2024-12-11 10:05:51.859381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:8483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.340 [2024-12-11 10:05:51.859399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:42.340 [2024-12-11 10:05:51.867617] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016eeaef0 00:27:42.340 [2024-12-11 10:05:51.868275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:22458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.340 [2024-12-11 10:05:51.868293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:42.340 [2024-12-11 10:05:51.876687] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016eebfd0 00:27:42.340 [2024-12-11 10:05:51.877299] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:20452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.340 [2024-12-11 10:05:51.877317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:42.340 [2024-12-11 10:05:51.885680] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016eeaab8 00:27:42.340 [2024-12-11 10:05:51.886295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.340 [2024-12-11 10:05:51.886313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:42.340 [2024-12-11 10:05:51.894700] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016eff3c8 00:27:42.340 [2024-12-11 10:05:51.895340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:20185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.340 [2024-12-11 10:05:51.895357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:42.340 [2024-12-11 10:05:51.903715] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef2948 00:27:42.340 [2024-12-11 10:05:51.904359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:22454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.340 [2024-12-11 10:05:51.904377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:42.340 [2024-12-11 10:05:51.912947] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016eeb328 00:27:42.600 [2024-12-11 10:05:51.913581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:16223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.600 [2024-12-11 10:05:51.913602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:42.600 [2024-12-11 10:05:51.922174] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ee6300 00:27:42.600 [2024-12-11 10:05:51.922815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:16453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.600 [2024-12-11 10:05:51.922833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:42.600 [2024-12-11 10:05:51.931249] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016efac10 00:27:42.600 [2024-12-11 10:05:51.931921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:21718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.600 [2024-12-11 10:05:51.931939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:42.600 [2024-12-11 10:05:51.941476] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ee2c28 00:27:42.600 [2024-12-11 10:05:51.942583] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.600 [2024-12-11 10:05:51.942600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:42.600 [2024-12-11 10:05:51.950599] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef1430 00:27:42.600 [2024-12-11 10:05:51.951698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:4880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.600 [2024-12-11 10:05:51.951715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:42.600 [2024-12-11 10:05:51.958802] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016edfdc0 00:27:42.600 [2024-12-11 10:05:51.959632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:18077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.600 [2024-12-11 10:05:51.959650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:42.600 [2024-12-11 10:05:51.967918] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016eef6a8 00:27:42.600 [2024-12-11 10:05:51.968679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:17347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.600 [2024-12-11 10:05:51.968696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:42.600 [2024-12-11 10:05:51.976374] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016efc128 00:27:42.600 [2024-12-11 10:05:51.977011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:18477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.600 [2024-12-11 10:05:51.977028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:42.600 [2024-12-11 10:05:51.985645] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ee6300 00:27:42.601 [2024-12-11 10:05:51.986273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.601 [2024-12-11 10:05:51.986291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:42.601 [2024-12-11 10:05:51.994690] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016eeb328 00:27:42.601 [2024-12-11 10:05:51.995357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.601 [2024-12-11 10:05:51.995376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:42.601 [2024-12-11 10:05:52.003747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef2948 00:27:42.601 [2024-12-11 10:05:52.004427] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.601 [2024-12-11 10:05:52.004445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:42.601 [2024-12-11 10:05:52.013998] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef57b0 00:27:42.601 [2024-12-11 10:05:52.015151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:15400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.601 [2024-12-11 10:05:52.015169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:42.601 [2024-12-11 10:05:52.023453] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016efc998 00:27:42.601 [2024-12-11 10:05:52.024677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:80 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.601 [2024-12-11 10:05:52.024695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:42.601 [2024-12-11 10:05:52.032568] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ee6fa8 00:27:42.601 [2024-12-11 10:05:52.033807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:11052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.601 [2024-12-11 10:05:52.033824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:42.601 [2024-12-11 10:05:52.040110] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ede470 00:27:42.601 [2024-12-11 10:05:52.040555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:8209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.601 [2024-12-11 10:05:52.040574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:42.601 [2024-12-11 10:05:52.051481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ee0a68 00:27:42.601 [2024-12-11 10:05:52.052963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.601 [2024-12-11 10:05:52.052980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:42.601 [2024-12-11 10:05:52.057971] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef57b0 00:27:42.601 [2024-12-11 10:05:52.058752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:10102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.601 [2024-12-11 10:05:52.058770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:42.601 [2024-12-11 10:05:52.068866] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ee12d8 00:27:42.601 [2024-12-11 
10:05:52.070018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:17268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.601 [2024-12-11 10:05:52.070037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:42.601 [2024-12-11 10:05:52.076512] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ee88f8 00:27:42.601 [2024-12-11 10:05:52.076942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:2488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.601 [2024-12-11 10:05:52.076960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:42.601 [2024-12-11 10:05:52.085023] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef2510 00:27:42.601 [2024-12-11 10:05:52.085793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:5123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.601 [2024-12-11 10:05:52.085811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:42.601 [2024-12-11 10:05:52.094436] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016efda78 00:27:42.601 [2024-12-11 10:05:52.095327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:6657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.601 [2024-12-11 10:05:52.095346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:42.601 [2024-12-11 10:05:52.103600] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef6020 00:27:42.601 [2024-12-11 10:05:52.104482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.601 [2024-12-11 10:05:52.104500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:42.601 [2024-12-11 10:05:52.113032] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef0788 00:27:42.601 [2024-12-11 10:05:52.113943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:2270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.601 [2024-12-11 10:05:52.113962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:42.601 [2024-12-11 10:05:52.121466] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef5378 00:27:42.601 [2024-12-11 10:05:52.122247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:9088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.601 [2024-12-11 10:05:52.122265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:42.601 [2024-12-11 10:05:52.130109] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ee7c50 
00:27:42.601 [2024-12-11 10:05:52.130882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:20658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.601 [2024-12-11 10:05:52.130900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:42.601 [2024-12-11 10:05:52.139498] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016eff3c8 00:27:42.601 [2024-12-11 10:05:52.140392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.601 [2024-12-11 10:05:52.140410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:42.601 [2024-12-11 10:05:52.150654] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ee3060 00:27:42.601 [2024-12-11 10:05:52.152020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:20768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.601 [2024-12-11 10:05:52.152041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:42.601 [2024-12-11 10:05:52.158675] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef5be8 00:27:42.601 [2024-12-11 10:05:52.159521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:12099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.601 [2024-12-11 10:05:52.159539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:42.601 [2024-12-11 10:05:52.168144] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016eeb328 00:27:42.601 [2024-12-11 10:05:52.168861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:17790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.601 [2024-12-11 10:05:52.168879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:42.861 [2024-12-11 10:05:52.177501] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016eeea00 00:27:42.861 [2024-12-11 10:05:52.178449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:15630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.861 [2024-12-11 10:05:52.178467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:42.861 [2024-12-11 10:05:52.186186] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef4f40 00:27:42.861 [2024-12-11 10:05:52.187140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:23112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.861 [2024-12-11 10:05:52.187158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:42.861 [2024-12-11 10:05:52.195522] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with 
pdu=0x200016ee73e0 00:27:42.861 [2024-12-11 10:05:52.196562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:2900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.861 [2024-12-11 10:05:52.196579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:42.861 [2024-12-11 10:05:52.204999] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ee49b0 00:27:42.861 [2024-12-11 10:05:52.206145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.861 [2024-12-11 10:05:52.206162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:42.861 [2024-12-11 10:05:52.214414] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ee8d30 00:27:42.861 [2024-12-11 10:05:52.215708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.861 [2024-12-11 10:05:52.215725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:42.861 [2024-12-11 10:05:52.223851] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef6890 00:27:42.861 [2024-12-11 10:05:52.225270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.861 [2024-12-11 10:05:52.225288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:42.861 [2024-12-11 10:05:52.230513] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016eeaef0 00:27:42.861 [2024-12-11 10:05:52.231197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:21352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.861 [2024-12-11 10:05:52.231214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:42.861 [2024-12-11 10:05:52.241420] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ee49b0 00:27:42.861 [2024-12-11 10:05:52.242493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.861 [2024-12-11 10:05:52.242511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:42.861 [2024-12-11 10:05:52.249839] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef1ca0 00:27:42.861 [2024-12-11 10:05:52.251073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.861 [2024-12-11 10:05:52.251090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:42.861 [2024-12-11 10:05:52.259307] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xfaee30) with pdu=0x200016efd208 00:27:42.861 [2024-12-11 10:05:52.260434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:1896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.861 [2024-12-11 10:05:52.260453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:42.861 [2024-12-11 10:05:52.268703] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ee4140 00:27:42.861 [2024-12-11 10:05:52.269952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:9448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.861 [2024-12-11 10:05:52.269969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:42.861 [2024-12-11 10:05:52.277848] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016eeaab8 00:27:42.861 [2024-12-11 10:05:52.279014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.861 [2024-12-11 10:05:52.279032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:42.861 [2024-12-11 10:05:52.286388] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef6cc8 00:27:42.861 [2024-12-11 10:05:52.287545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.861 [2024-12-11 10:05:52.287562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:42.861 [2024-12-11 10:05:52.294903] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef9f68 00:27:42.861 [2024-12-11 10:05:52.295972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:2537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.861 [2024-12-11 10:05:52.295990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:42.861 [2024-12-11 10:05:52.304183] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ee12d8 00:27:42.861 [2024-12-11 10:05:52.304889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:6380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.861 [2024-12-11 10:05:52.304907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:42.861 [2024-12-11 10:05:52.312656] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016eff3c8 00:27:42.861 [2024-12-11 10:05:52.313925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.861 [2024-12-11 10:05:52.313943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:42.861 [2024-12-11 10:05:52.322739] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xfaee30) with pdu=0x200016ef0788 00:27:42.861 [2024-12-11 10:05:52.323803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:23002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.861 [2024-12-11 10:05:52.323821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:42.861 [2024-12-11 10:05:52.331144] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ee3498 00:27:42.861 [2024-12-11 10:05:52.332190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.861 [2024-12-11 10:05:52.332207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:42.861 [2024-12-11 10:05:52.339888] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016eef270 00:27:42.861 [2024-12-11 10:05:52.340797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.861 [2024-12-11 10:05:52.340815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:42.861 [2024-12-11 10:05:52.349408] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef4f40 00:27:42.861 [2024-12-11 10:05:52.350559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:18549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.861 [2024-12-11 10:05:52.350577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:42.861 [2024-12-11 10:05:52.357724] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef6cc8 00:27:42.861 [2024-12-11 10:05:52.358437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.861 [2024-12-11 10:05:52.358455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:42.861 [2024-12-11 10:05:52.366841] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef3a28 00:27:42.862 [2024-12-11 10:05:52.367428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:23425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.862 [2024-12-11 10:05:52.367446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:42.862 [2024-12-11 10:05:52.375931] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef5be8 00:27:42.862 [2024-12-11 10:05:52.376766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:22346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.862 [2024-12-11 10:05:52.376784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:42.862 [2024-12-11 10:05:52.385159] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef0788 00:27:42.862 [2024-12-11 10:05:52.385858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:15707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.862 [2024-12-11 10:05:52.385880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:42.862 28060.00 IOPS, 109.61 MiB/s [2024-12-11T09:05:52.437Z] [2024-12-11 10:05:52.393681] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016eedd58 00:27:42.862 [2024-12-11 10:05:52.394704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:13133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.862 [2024-12-11 10:05:52.394722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:42.862 [2024-12-11 10:05:52.402065] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016efd208 00:27:42.862 [2024-12-11 10:05:52.402645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:19521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.862 [2024-12-11 10:05:52.402663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:42.862 [2024-12-11 10:05:52.411300] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ee9168 00:27:42.862 [2024-12-11 10:05:52.412115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.862 [2024-12-11 10:05:52.412133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:42.862 [2024-12-11 10:05:52.420878] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016eed920 00:27:42.862 [2024-12-11 10:05:52.421466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:6980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.862 [2024-12-11 10:05:52.421485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:42.862 [2024-12-11 10:05:52.429797] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016efb048 00:27:42.862 [2024-12-11 10:05:52.430669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:18150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.862 [2024-12-11 10:05:52.430688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:43.158 [2024-12-11 10:05:52.439043] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016eefae0 00:27:43.158 [2024-12-11 10:05:52.439759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.159 [2024-12-11 10:05:52.439777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:002a p:0 m:0 dnr:0 
00:27:43.159 [2024-12-11 10:05:52.448829] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016eea680 00:27:43.159 [2024-12-11 10:05:52.449846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:22784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.159 [2024-12-11 10:05:52.449864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:43.159 [2024-12-11 10:05:52.457930] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef8a50 00:27:43.159 [2024-12-11 10:05:52.458944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:16804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.159 [2024-12-11 10:05:52.458962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:43.159 [2024-12-11 10:05:52.466923] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016efb480 00:27:43.159 [2024-12-11 10:05:52.467503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:21358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.159 [2024-12-11 10:05:52.467521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:43.159 [2024-12-11 10:05:52.475706] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016eed0b0 00:27:43.159 [2024-12-11 10:05:52.476555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.159 [2024-12-11 10:05:52.476573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:43.159 [2024-12-11 10:05:52.484762] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef7da8 00:27:43.159 [2024-12-11 10:05:52.485473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:3845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.159 [2024-12-11 10:05:52.485491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:43.159 [2024-12-11 10:05:52.494154] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016eed920 00:27:43.159 [2024-12-11 10:05:52.495206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.159 [2024-12-11 10:05:52.495228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:43.159 [2024-12-11 10:05:52.503361] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef8e88 00:27:43.159 [2024-12-11 10:05:52.504446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:21637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.159 [2024-12-11 10:05:52.504464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 
sqhd:002e p:0 m:0 dnr:0 00:27:43.159 [2024-12-11 10:05:52.512833] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016eef6a8 00:27:43.159 [2024-12-11 10:05:52.513780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:17960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.159 [2024-12-11 10:05:52.513798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:43.159 [2024-12-11 10:05:52.521267] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016eed0b0 00:27:43.159 [2024-12-11 10:05:52.522181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.159 [2024-12-11 10:05:52.522200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:43.159 [2024-12-11 10:05:52.529910] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef20d8 00:27:43.159 [2024-12-11 10:05:52.530735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:7045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.159 [2024-12-11 10:05:52.530753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:43.159 [2024-12-11 10:05:52.540595] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef20d8 00:27:43.159 [2024-12-11 10:05:52.542045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:14457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.159 [2024-12-11 10:05:52.542062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:43.159 [2024-12-11 10:05:52.547378] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016efc128 00:27:43.159 [2024-12-11 10:05:52.548085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:3144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.159 [2024-12-11 10:05:52.548103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:43.159 [2024-12-11 10:05:52.558615] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef6890 00:27:43.159 [2024-12-11 10:05:52.559694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:9590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.159 [2024-12-11 10:05:52.559712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:43.159 [2024-12-11 10:05:52.567302] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef3a28 00:27:43.159 [2024-12-11 10:05:52.568337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.159 [2024-12-11 10:05:52.568355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:125 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:43.159 [2024-12-11 10:05:52.576366] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016eddc00 00:27:43.159 [2024-12-11 10:05:52.577351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.159 [2024-12-11 10:05:52.577369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:43.159 [2024-12-11 10:05:52.584578] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016eee190 00:27:43.159 [2024-12-11 10:05:52.585386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.159 [2024-12-11 10:05:52.585404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:43.159 [2024-12-11 10:05:52.593827] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ee99d8 00:27:43.159 [2024-12-11 10:05:52.594320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:4962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.159 [2024-12-11 10:05:52.594338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:43.159 [2024-12-11 10:05:52.603416] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef1ca0 00:27:43.159 [2024-12-11 10:05:52.604027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.159 [2024-12-11 10:05:52.604045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:43.159 [2024-12-11 10:05:52.611991] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef81e0 00:27:43.159 [2024-12-11 10:05:52.612938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.159 [2024-12-11 10:05:52.612955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:43.159 [2024-12-11 10:05:52.621204] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ee5658 00:27:43.159 [2024-12-11 10:05:52.621690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.159 [2024-12-11 10:05:52.621712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:43.159 [2024-12-11 10:05:52.630650] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016eec840 00:27:43.159 [2024-12-11 10:05:52.631249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:21908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.159 [2024-12-11 10:05:52.631267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:43.159 [2024-12-11 10:05:52.640469] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ee9168 00:27:43.159 [2024-12-11 10:05:52.641723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.159 [2024-12-11 10:05:52.641741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:43.159 [2024-12-11 10:05:52.648785] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016efeb58 00:27:43.159 [2024-12-11 10:05:52.649614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.159 [2024-12-11 10:05:52.649632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:43.159 [2024-12-11 10:05:52.658057] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016efdeb0 00:27:43.159 [2024-12-11 10:05:52.659215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.159 [2024-12-11 10:05:52.659238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:43.159 [2024-12-11 10:05:52.667138] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ee88f8 00:27:43.159 [2024-12-11 10:05:52.667902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.159 [2024-12-11 10:05:52.667920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:43.159 [2024-12-11 10:05:52.676679] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016efb8b8 00:27:43.159 [2024-12-11 10:05:52.677677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.159 [2024-12-11 10:05:52.677695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:43.159 [2024-12-11 10:05:52.686899] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ede8a8 00:27:43.159 [2024-12-11 10:05:52.688450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:24894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.159 [2024-12-11 10:05:52.688468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:43.159 [2024-12-11 10:05:52.693454] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ee1f80 00:27:43.159 [2024-12-11 10:05:52.694243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.160 [2024-12-11 10:05:52.694262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:43.460 [2024-12-11 10:05:52.703642] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ee4de8 00:27:43.460 [2024-12-11 10:05:52.704476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.460 [2024-12-11 10:05:52.704496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:43.460 [2024-12-11 10:05:52.713993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016efdeb0 00:27:43.460 [2024-12-11 10:05:52.715174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.460 [2024-12-11 10:05:52.715193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:43.460 [2024-12-11 10:05:52.722489] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ee1b48 00:27:43.460 [2024-12-11 10:05:52.723370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.460 [2024-12-11 10:05:52.723389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:43.460 [2024-12-11 10:05:52.731838] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016efda78 00:27:43.460 [2024-12-11 10:05:52.732709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:1515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.460 [2024-12-11 10:05:52.732728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:43.460 [2024-12-11 10:05:52.741662] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016efef90 00:27:43.460 [2024-12-11 10:05:52.742510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.460 [2024-12-11 10:05:52.742530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:43.460 [2024-12-11 10:05:52.750716] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ee1f80 00:27:43.460 [2024-12-11 10:05:52.751559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.460 [2024-12-11 10:05:52.751578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:43.460 [2024-12-11 10:05:52.759852] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef1ca0 00:27:43.460 [2024-12-11 10:05:52.760679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.460 [2024-12-11 10:05:52.760698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:43.460 [2024-12-11 10:05:52.768604] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef8a50 00:27:43.460 [2024-12-11 10:05:52.769461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:11925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.460 [2024-12-11 10:05:52.769479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:43.460 [2024-12-11 10:05:52.778930] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ee9e10 00:27:43.460 [2024-12-11 10:05:52.780085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.460 [2024-12-11 10:05:52.780103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:43.460 [2024-12-11 10:05:52.786487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef4b08 00:27:43.460 [2024-12-11 10:05:52.787071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:8989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.460 [2024-12-11 10:05:52.787089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:43.460 [2024-12-11 10:05:52.795844] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016eecc78 00:27:43.460 [2024-12-11 10:05:52.796667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.460 [2024-12-11 10:05:52.796685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:43.460 [2024-12-11 10:05:52.805302] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016efc560 00:27:43.460 [2024-12-11 10:05:52.806126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.460 [2024-12-11 10:05:52.806145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:43.460 [2024-12-11 10:05:52.815543] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ee0ea0 00:27:43.460 [2024-12-11 10:05:52.816861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:6302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.460 [2024-12-11 10:05:52.816879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:43.460 [2024-12-11 10:05:52.823360] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ee27f0 00:27:43.460 [2024-12-11 10:05:52.824175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:20271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.460 [2024-12-11 10:05:52.824193] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:43.460 [2024-12-11 10:05:52.832809] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef2948 00:27:43.460 [2024-12-11 10:05:52.833868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:3964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.460 [2024-12-11 10:05:52.833886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:43.460 [2024-12-11 10:05:52.840591] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef31b8 00:27:43.460 [2024-12-11 10:05:52.841168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:8548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.460 [2024-12-11 10:05:52.841185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:43.460 [2024-12-11 10:05:52.849510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016efe2e8 00:27:43.460 [2024-12-11 10:05:52.850078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:13527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.460 [2024-12-11 10:05:52.850096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:43.460 [2024-12-11 10:05:52.858501] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016efe2e8 00:27:43.460 [2024-12-11 10:05:52.859068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.460 [2024-12-11 10:05:52.859088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:43.460 [2024-12-11 10:05:52.867500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016efe2e8 00:27:43.460 [2024-12-11 10:05:52.868071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:20137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.460 [2024-12-11 10:05:52.868089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:43.460 [2024-12-11 10:05:52.876549] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016efe2e8 00:27:43.460 [2024-12-11 10:05:52.877129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:22542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.460 [2024-12-11 10:05:52.877146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:43.460 [2024-12-11 10:05:52.885549] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016efe2e8 00:27:43.460 [2024-12-11 10:05:52.886115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:12939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.460 [2024-12-11 10:05:52.886133] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:43.460 [2024-12-11 10:05:52.894822] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016efa3a0 00:27:43.460 [2024-12-11 10:05:52.895593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:14192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.460 [2024-12-11 10:05:52.895611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:43.460 [2024-12-11 10:05:52.904418] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef6020 00:27:43.460 [2024-12-11 10:05:52.905456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.460 [2024-12-11 10:05:52.905474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:43.460 [2024-12-11 10:05:52.915613] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016efb048 00:27:43.460 [2024-12-11 10:05:52.917187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.460 [2024-12-11 10:05:52.917205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:43.460 [2024-12-11 10:05:52.922288] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef4b08 00:27:43.460 [2024-12-11 10:05:52.923030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.460 [2024-12-11 10:05:52.923047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:43.460 [2024-12-11 10:05:52.932063] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016efc998 00:27:43.460 [2024-12-11 10:05:52.932813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:9780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.461 [2024-12-11 10:05:52.932831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:43.461 [2024-12-11 10:05:52.940583] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ee5a90 00:27:43.461 [2024-12-11 10:05:52.941214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.461 [2024-12-11 10:05:52.941240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:43.461 [2024-12-11 10:05:52.951510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef8e88 00:27:43.461 [2024-12-11 10:05:52.952583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:9124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.461 [2024-12-11 
10:05:52.952601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:43.461 [2024-12-11 10:05:52.958897] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef20d8 00:27:43.461 [2024-12-11 10:05:52.959512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:12595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.461 [2024-12-11 10:05:52.959529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:43.461 [2024-12-11 10:05:52.969212] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ede8a8 00:27:43.461 [2024-12-11 10:05:52.970276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:11889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.461 [2024-12-11 10:05:52.970295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:43.461 [2024-12-11 10:05:52.977846] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016eed0b0 00:27:43.461 [2024-12-11 10:05:52.978959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:3347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.461 [2024-12-11 10:05:52.978978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:43.461 [2024-12-11 10:05:52.987024] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ee3d08 00:27:43.461 [2024-12-11 10:05:52.988153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.461 [2024-12-11 10:05:52.988171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:43.461 [2024-12-11 10:05:52.995730] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016eff3c8 00:27:43.461 [2024-12-11 10:05:52.996877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.461 [2024-12-11 10:05:52.996896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:43.461 [2024-12-11 10:05:53.005108] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016efda78 00:27:43.461 [2024-12-11 10:05:53.005952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:18793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.461 [2024-12-11 10:05:53.005971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:43.461 [2024-12-11 10:05:53.013601] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef4b08 00:27:43.461 [2024-12-11 10:05:53.014434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:15201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:43.461 [2024-12-11 10:05:53.014452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:43.734 [2024-12-11 10:05:53.022970] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef6458 00:27:43.734 [2024-12-11 10:05:53.023860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:21689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.734 [2024-12-11 10:05:53.023880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:43.734 [2024-12-11 10:05:53.032970] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef6458 00:27:43.734 [2024-12-11 10:05:53.033798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.734 [2024-12-11 10:05:53.033816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:43.734 [2024-12-11 10:05:53.041650] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef8618 00:27:43.734 [2024-12-11 10:05:53.042628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.734 [2024-12-11 10:05:53.042647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:43.734 [2024-12-11 10:05:53.050767] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016efef90 00:27:43.734 [2024-12-11 10:05:53.051548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.734 [2024-12-11 10:05:53.051567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:43.734 [2024-12-11 10:05:53.059860] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ee99d8 00:27:43.734 [2024-12-11 10:05:53.060326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:12660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.734 [2024-12-11 10:05:53.060345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:43.734 [2024-12-11 10:05:53.070740] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ee8088 00:27:43.734 [2024-12-11 10:05:53.072149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:11936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.734 [2024-12-11 10:05:53.072167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:43.735 [2024-12-11 10:05:53.077317] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ee1f80 00:27:43.735 [2024-12-11 10:05:53.077974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2183 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:27:43.735 [2024-12-11 10:05:53.077991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:43.735 [2024-12-11 10:05:53.088226] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef6020 00:27:43.735 [2024-12-11 10:05:53.089247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.735 [2024-12-11 10:05:53.089265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:43.735 [2024-12-11 10:05:53.096839] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef9f68 00:27:43.735 [2024-12-11 10:05:53.097875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:1255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.735 [2024-12-11 10:05:53.097894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:43.735 [2024-12-11 10:05:53.106238] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016efe2e8 00:27:43.735 [2024-12-11 10:05:53.107370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.735 [2024-12-11 10:05:53.107388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:43.735 [2024-12-11 10:05:53.115691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ee3d08 00:27:43.735 [2024-12-11 10:05:53.116961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:12581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.735 [2024-12-11 10:05:53.116980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:43.735 [2024-12-11 10:05:53.124092] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ee73e0 00:27:43.735 [2024-12-11 10:05:53.124988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.735 [2024-12-11 10:05:53.125006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:43.735 [2024-12-11 10:05:53.133259] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef81e0 00:27:43.735 [2024-12-11 10:05:53.133944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.735 [2024-12-11 10:05:53.133962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:43.735 [2024-12-11 10:05:53.141688] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef8618 00:27:43.735 [2024-12-11 10:05:53.142894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:25099 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:27:43.735 [2024-12-11 10:05:53.142912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:43.735 [2024-12-11 10:05:53.150002] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ee84c0 00:27:43.735 [2024-12-11 10:05:53.150650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.735 [2024-12-11 10:05:53.150668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:43.735 [2024-12-11 10:05:53.158973] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016eeaef0 00:27:43.735 [2024-12-11 10:05:53.159643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:7241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.735 [2024-12-11 10:05:53.159660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:43.735 [2024-12-11 10:05:53.167979] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ee6fa8 00:27:43.735 [2024-12-11 10:05:53.168673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.735 [2024-12-11 10:05:53.168691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:43.735 [2024-12-11 10:05:53.177213] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016eea248 00:27:43.735 [2024-12-11 10:05:53.177906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.735 [2024-12-11 10:05:53.177928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:43.735 [2024-12-11 10:05:53.186301] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef0ff8 00:27:43.735 [2024-12-11 10:05:53.186989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.735 [2024-12-11 10:05:53.187007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:43.735 [2024-12-11 10:05:53.195379] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef3a28 00:27:43.735 [2024-12-11 10:05:53.196089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.735 [2024-12-11 10:05:53.196107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:43.735 [2024-12-11 10:05:53.204462] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016eeea00 00:27:43.735 [2024-12-11 10:05:53.205155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:8994 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.735 [2024-12-11 10:05:53.205173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:43.735 [2024-12-11 10:05:53.213493] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef2d80 00:27:43.735 [2024-12-11 10:05:53.214149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:8963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.735 [2024-12-11 10:05:53.214167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:43.735 [2024-12-11 10:05:53.222521] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016efc998 00:27:43.735 [2024-12-11 10:05:53.223175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.735 [2024-12-11 10:05:53.223193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:43.735 [2024-12-11 10:05:53.231507] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ede8a8 00:27:43.735 [2024-12-11 10:05:53.232166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:24198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.735 [2024-12-11 10:05:53.232184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:43.735 [2024-12-11 10:05:53.240557] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ee23b8 00:27:43.735 [2024-12-11 10:05:53.241212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.735 [2024-12-11 10:05:53.241234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:43.735 [2024-12-11 10:05:53.249500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016efbcf0 00:27:43.735 [2024-12-11 10:05:53.250163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:6611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.735 [2024-12-11 10:05:53.250181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:43.735 [2024-12-11 10:05:53.258710] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016eea680 00:27:43.735 [2024-12-11 10:05:53.259407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:18988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.735 [2024-12-11 10:05:53.259426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:43.735 [2024-12-11 10:05:53.267797] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016eeff18 00:27:43.735 [2024-12-11 10:05:53.268468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 
nsid:1 lba:2507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.735 [2024-12-11 10:05:53.268487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:43.735 [2024-12-11 10:05:53.276787] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016eff3c8 00:27:43.735 [2024-12-11 10:05:53.277377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.735 [2024-12-11 10:05:53.277394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:43.735 [2024-12-11 10:05:53.285768] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016eee5c8 00:27:43.735 [2024-12-11 10:05:53.286431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:4739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.735 [2024-12-11 10:05:53.286448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:43.735 [2024-12-11 10:05:53.294766] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef1ca0 00:27:43.735 [2024-12-11 10:05:53.295435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:8955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.735 [2024-12-11 10:05:53.295453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:43.735 [2024-12-11 10:05:53.303867] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016edfdc0 00:27:43.735 [2024-12-11 10:05:53.304554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:9026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.735 [2024-12-11 10:05:53.304573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:43.995 [2024-12-11 10:05:53.313086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ee5ec8 00:27:43.995 [2024-12-11 10:05:53.313769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.995 [2024-12-11 10:05:53.313787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:43.995 [2024-12-11 10:05:53.322139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016efe720 00:27:43.995 [2024-12-11 10:05:53.322810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:12648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.995 [2024-12-11 10:05:53.322829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:43.995 [2024-12-11 10:05:53.331132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016efd208 00:27:43.995 [2024-12-11 10:05:53.331796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:30 nsid:1 lba:10068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.995 [2024-12-11 10:05:53.331814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:43.995 [2024-12-11 10:05:53.340289] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016eed4e8 00:27:43.995 [2024-12-11 10:05:53.340946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:15966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.995 [2024-12-11 10:05:53.340965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:43.995 [2024-12-11 10:05:53.349287] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016efda78 00:27:43.995 [2024-12-11 10:05:53.349944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:21711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.995 [2024-12-11 10:05:53.349962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:43.995 [2024-12-11 10:05:53.358263] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016eeaab8 00:27:43.995 [2024-12-11 10:05:53.358937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:24714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.995 [2024-12-11 10:05:53.358955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:43.995 [2024-12-11 10:05:53.367256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ef8618 00:27:43.995 [2024-12-11 10:05:53.367910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:3676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.995 [2024-12-11 10:05:53.367928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:43.995 [2024-12-11 10:05:53.376299] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ee38d0 00:27:43.995 [2024-12-11 10:05:53.376955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:20284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.995 [2024-12-11 10:05:53.376973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:43.995 [2024-12-11 10:05:53.385298] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016ee8d30 00:27:43.995 [2024-12-11 10:05:53.385958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.995 [2024-12-11 10:05:53.385975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:43.995 28070.00 IOPS, 109.65 MiB/s [2024-12-11T09:05:53.570Z] [2024-12-11 10:05:53.394423] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaee30) with pdu=0x200016eeb760 00:27:43.995 [2024-12-11 
10:05:53.395055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:7583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:43.995 [2024-12-11 10:05:53.395073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:27:43.995
00:27:43.995 Latency(us)
00:27:43.995 [2024-12-11T09:05:53.570Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:43.995 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:27:43.995 nvme0n1 : 2.01 28080.70 109.69 0.00 0.00 4551.94 2090.91 12233.39
00:27:43.995 [2024-12-11T09:05:53.570Z] ===================================================================================================================
00:27:43.995 [2024-12-11T09:05:53.570Z] Total : 28080.70 109.69 0.00 0.00 4551.94 2090.91 12233.39
00:27:43.995 {
00:27:43.995 "results": [
00:27:43.995 {
00:27:43.995 "job": "nvme0n1",
00:27:43.995 "core_mask": "0x2",
00:27:43.995 "workload": "randwrite",
00:27:43.995 "status": "finished",
00:27:43.995 "queue_depth": 128,
00:27:43.995 "io_size": 4096,
00:27:43.995 "runtime": 2.006111,
00:27:43.995 "iops": 28080.699422913287,
00:27:43.995 "mibps": 109.69023212075503,
00:27:43.995 "io_failed": 0,
00:27:43.995 "io_timeout": 0,
00:27:43.995 "avg_latency_us": 4551.939364374938,
00:27:43.995 "min_latency_us": 2090.9104761904764,
00:27:43.995 "max_latency_us": 12233.386666666667
00:27:43.995 }
00:27:43.995 ],
00:27:43.995 "core_count": 1
00:27:43.995 }
00:27:43.995 10:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:43.995 10:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:43.995 10:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:43.995 | .driver_specific
00:27:43.995 | .nvme_error
00:27:43.995 | .status_code
00:27:43.995 | .command_transient_transport_error'
00:27:43.995 10:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:44.254 10:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 221 > 0 ))
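The pass/fail check traced above is compact but easy to miss in the noise: get_transient_errcount queries the bdevperf app over its RPC socket for the per-bdev NVMe error counters and extracts a single status-code counter with jq. A minimal standalone sketch, assuming the rpc.py path and socket shown in the trace (the real helper lives in host/digest.sh):

    #!/usr/bin/env bash
    # Sketch of the get_transient_errcount check traced above; the rpc.py
    # path, socket, and jq filter are copied from the trace.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    errcount=$("$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock \
            bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error')

    # The run passes only if the injected CRC32C corruption surfaced as at
    # least one COMMAND TRANSIENT TRANSPORT ERROR completion (221 here).
    (( errcount > 0 ))

Because the controller was configured with --bdev-retry-count -1, the corrupted WRITEs are retried until they succeed, so the workload still completes and the digest failures are visible only through this counter.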
00:27:44.255 10:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 237998
00:27:44.255 10:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 237998 ']'
00:27:44.255 10:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 237998
00:27:44.255 10:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:27:44.255 10:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:44.255 10:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 237998
00:27:44.255 10:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:27:44.255 10:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:27:44.255 10:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 237998'
killing process with pid 237998
10:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 237998
Received shutdown signal, test time was about 2.000000 seconds
00:27:44.255
00:27:44.255 Latency(us)
00:27:44.255 [2024-12-11T09:05:53.830Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:44.255 [2024-12-11T09:05:53.830Z] ===================================================================================================================
00:27:44.255 [2024-12-11T09:05:53.830Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
10:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 237998
10:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
10:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
10:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
10:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
10:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
10:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=238473
10:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 238473 /var/tmp/bperf.sock
10:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
10:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 238473 ']'
10:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
10:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
10:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
10:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
10:05:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:44.514 [2024-12-11 10:05:53.884310] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization...
00:27:44.514 [2024-12-11 10:05:53.884354] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid238473 ]
00:27:44.514 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:44.514 Zero copy mechanism will not be used.
00:27:44.514 [2024-12-11 10:05:53.963400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:44.514 [2024-12-11 10:05:53.999510] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:27:44.773 10:05:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:44.773 10:05:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:27:44.773 10:05:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:44.773 10:05:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:44.773 10:05:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:27:44.773 10:05:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:44.773 10:05:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:44.773 10:05:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:44.773 10:05:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:44.773 10:05:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:45.032 nvme0n1
00:27:45.292 10:05:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:27:45.292 10:05:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:45.292 10:05:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:45.292 10:05:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:45.292 10:05:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:27:45.292 10:05:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:45.292 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:45.292 Zero copy mechanism will not be used.
00:27:45.292 Running I/O for 2 seconds...
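Written out end to end, the setup just traced is a short RPC sequence; the sketch below condenses it, assuming the paths shown in the trace. Note that two sockets are in play: bperf_rpc talks to the bdevperf app on /var/tmp/bperf.sock, while the bare rpc_cmd calls go to the default SPDK RPC socket, where the accel CRC32C error injector is armed:

    #!/usr/bin/env bash
    # Condensed sketch of the run_bperf_err setup traced above; commands
    # are copied from the trace, ordering follows host/digest.sh.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    BPERF_SOCK=/var/tmp/bperf.sock

    # Count NVMe errors per status code and retry failed I/O indefinitely,
    # so injected digest errors are counted instead of failing the run.
    "$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options \
        --nvme-error-stat --bdev-retry-count -1

    # Clear any stale injection (default RPC socket, not the bperf one).
    "$SPDK_DIR/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable

    # Attach the TCP controller with data digest (--ddgst) enabled.
    "$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Corrupt every 32nd CRC32C operation, then run the timed workload
    # bdevperf was started with (-w randwrite -o 131072 -t 2 -q 16).
    "$SPDK_DIR/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32
    "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests

With the injector corrupting every 32nd crc32c operation and data digest enabled on the controller, the 2-second randwrite run produces the steady stream of data_crc32_calc_done digest errors and TRANSIENT TRANSPORT ERROR completions that follows.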
00:27:45.292 [2024-12-11 10:05:54.729122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:45.292 [2024-12-11 10:05:54.729237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.292 [2024-12-11 10:05:54.729273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:45.292 [2024-12-11 10:05:54.735326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:45.292 [2024-12-11 10:05:54.735487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.292 [2024-12-11 10:05:54.735512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:45.292 [2024-12-11 10:05:54.741391] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:45.293 [2024-12-11 10:05:54.741481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.293 [2024-12-11 10:05:54.741501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:45.293 [2024-12-11 10:05:54.746491] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:45.293 [2024-12-11 10:05:54.746550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.293 [2024-12-11 10:05:54.746569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:45.293 [2024-12-11 10:05:54.751496] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:45.293 [2024-12-11 10:05:54.751558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.293 [2024-12-11 10:05:54.751577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:45.293 [2024-12-11 10:05:54.756807] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:45.293 [2024-12-11 10:05:54.756877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.293 [2024-12-11 10:05:54.756896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:45.293 [2024-12-11 10:05:54.762115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:45.293 [2024-12-11 10:05:54.762205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.293 [2024-12-11 10:05:54.762231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 
sqhd:005b p:0 m:0 dnr:0 00:27:45.293 [2024-12-11 10:05:54.767334] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:45.293 [2024-12-11 10:05:54.767408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.293 [2024-12-11 10:05:54.767426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:45.293 [2024-12-11 10:05:54.772373] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:45.293 [2024-12-11 10:05:54.772442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.293 [2024-12-11 10:05:54.772460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:45.293 [2024-12-11 10:05:54.778072] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:45.293 [2024-12-11 10:05:54.778180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.293 [2024-12-11 10:05:54.778198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:45.293 [2024-12-11 10:05:54.783646] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:45.293 [2024-12-11 10:05:54.783718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.293 [2024-12-11 10:05:54.783736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:45.293 [2024-12-11 10:05:54.788501] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:45.293 [2024-12-11 10:05:54.788627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.293 [2024-12-11 10:05:54.788646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:45.293 [2024-12-11 10:05:54.794046] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:45.293 [2024-12-11 10:05:54.794191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.293 [2024-12-11 10:05:54.794209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:45.293 [2024-12-11 10:05:54.800241] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:45.293 [2024-12-11 10:05:54.800344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.293 [2024-12-11 10:05:54.800364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:45.293 [2024-12-11 10:05:54.805369] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:45.293 [2024-12-11 10:05:54.805478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.293 [2024-12-11 10:05:54.805496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:45.293 [2024-12-11 10:05:54.811677] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:45.293 [2024-12-11 10:05:54.811806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.293 [2024-12-11 10:05:54.811824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:45.293 [2024-12-11 10:05:54.817086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:45.293 [2024-12-11 10:05:54.817137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.293 [2024-12-11 10:05:54.817154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:45.293 [2024-12-11 10:05:54.821621] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:45.293 [2024-12-11 10:05:54.821674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.293 [2024-12-11 10:05:54.821692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:45.293 [2024-12-11 10:05:54.826058] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:45.293 [2024-12-11 10:05:54.826109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.293 [2024-12-11 10:05:54.826126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:45.293 [2024-12-11 10:05:54.830448] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:45.293 [2024-12-11 10:05:54.830513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.293 [2024-12-11 10:05:54.830531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:45.293 [2024-12-11 10:05:54.834879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:45.293 [2024-12-11 10:05:54.834937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.293 [2024-12-11 10:05:54.834955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:45.293 [2024-12-11 10:05:54.839426] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:45.293 [2024-12-11 10:05:54.839477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.293 [2024-12-11 10:05:54.839495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:45.293 [2024-12-11 10:05:54.844182] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:45.293 [2024-12-11 10:05:54.844238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.293 [2024-12-11 10:05:54.844256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:45.293 [2024-12-11 10:05:54.848614] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:45.293 [2024-12-11 10:05:54.848674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.293 [2024-12-11 10:05:54.848693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:45.293 [2024-12-11 10:05:54.853151] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:45.293 [2024-12-11 10:05:54.853209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.293 [2024-12-11 10:05:54.853233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:45.293 [2024-12-11 10:05:54.858052] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:45.293 [2024-12-11 10:05:54.858116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.293 [2024-12-11 10:05:54.858133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:45.293 [2024-12-11 10:05:54.863276] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:45.293 [2024-12-11 10:05:54.863344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.293 [2024-12-11 10:05:54.863365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:45.554 [2024-12-11 10:05:54.868425] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:45.554 [2024-12-11 10:05:54.868480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.554 [2024-12-11 10:05:54.868498] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:45.554 [2024-12-11 10:05:54.873634] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:45.554 [2024-12-11 10:05:54.873692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.554 [2024-12-11 10:05:54.873709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:45.554 [2024-12-11 10:05:54.879341] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:45.554 [2024-12-11 10:05:54.879393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.554 [2024-12-11 10:05:54.879411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:45.554 [2024-12-11 10:05:54.883962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:45.554 [2024-12-11 10:05:54.884022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.554 [2024-12-11 10:05:54.884040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:45.554 [2024-12-11 10:05:54.888408] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:45.554 [2024-12-11 10:05:54.888477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.554 [2024-12-11 10:05:54.888495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:45.554 [2024-12-11 10:05:54.892738] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:45.554 [2024-12-11 10:05:54.892793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.554 [2024-12-11 10:05:54.892811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:45.554 [2024-12-11 10:05:54.897194] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:45.554 [2024-12-11 10:05:54.897271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.554 [2024-12-11 10:05:54.897289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:45.554 [2024-12-11 10:05:54.901803] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:45.554 [2024-12-11 10:05:54.901864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.554 [2024-12-11 10:05:54.901882] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:27:45.554 [2024-12-11 10:05:54.906302] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90
00:27:45.554 [2024-12-11 10:05:54.906375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.554 [2024-12-11 10:05:54.906393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:27:45.554 [2024-12-11 10:05:54.910872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90
00:27:45.554 [2024-12-11 10:05:54.910947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.554 [2024-12-11 10:05:54.910965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0
[... the same three-line sequence (data digest error on tqpair=(0xfaf310), WRITE command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats at a ~4-5 ms cadence for WRITE commands at varying LBAs, sqhd cycling 001b/003b/005b/007b, through 10:05:55.121 ...]
00:27:45.556 [2024-12-11 10:05:55.125694] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90
00:27:45.556 [2024-12-11 10:05:55.125754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.556 [2024-12-11 10:05:55.125772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0
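Each triplet above records one data-digest failure seen by the NVMe/TCP initiator: the transport's CRC-32C check over a data PDU disagrees with the digest carried in the PDU's DDGST field, so the affected WRITE is completed with COMMAND TRANSIENT TRANSPORT ERROR instead of success. The perfectly steady cadence against a single qpair (tqpair=0xfaf310) is consistent with deliberate digest-error injection by the test rather than real wire corruption. For reference, the data digest NVMe/TCP uses is plain CRC-32C (Castagnoli). The following is a minimal, self-contained bitwise sketch for illustration only, not SPDK's table-driven/accelerated implementation; the parameters assumed (reflected polynomial 0x82F63B78, init and final XOR of 0xFFFFFFFF) are the standard CRC-32C ones:

    /* Reference CRC-32C, the algorithm behind the NVMe/TCP data digest
     * (DDGST). Bitwise and slow on purpose; real implementations use
     * lookup tables or the SSE4.2 crc32 instruction. */
    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    static uint32_t crc32c(const void *data, size_t len)
    {
        const uint8_t *p = data;
        uint32_t crc = 0xFFFFFFFFu;                /* standard init value */

        while (len--) {
            crc ^= *p++;
            for (int bit = 0; bit < 8; bit++) {
                /* Reflected Castagnoli polynomial. */
                crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1));
            }
        }
        return crc ^ 0xFFFFFFFFu;                  /* value carried as DDGST */
    }

    int main(void)
    {
        /* Standard check value: CRC-32C("123456789") == 0xE3069283. */
        printf("%08x\n", crc32c("123456789", strlen("123456789")));
        return 0;
    }

A receiver recomputes this over the PDU's DATA field and compares it with the transmitted DDGST; any mismatch is exactly the "Data digest error" the log reports.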
00:27:45.816 [2024-12-11 10:05:55.130184] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90
00:27:45.816 [2024-12-11 10:05:55.130237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.816 [2024-12-11 10:05:55.130255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0
[... the pattern continues at the same cadence for WRITE commands at varying LBAs through 10:05:55.364 ...]
00:27:45.817 [2024-12-11 10:05:55.368398] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90
00:27:45.817 [2024-12-11 10:05:55.368467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.817 [2024-12-11 10:05:55.368485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0
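The (00/22) in each completion line is the status code type / status code pair: SCT 0x0 (generic command status) with SC 0x22, which SPDK prints as COMMAND TRANSIENT TRANSPORT ERROR, and dnr:0 means the do-not-retry bit is clear, so the host may resubmit the command. A small illustrative decoder for the 16-bit CQE status word as printed here (field layout per the NVMe completion queue entry; the struct and function names are hypothetical, not SPDK's):

    /* Unpack the NVMe CQE status word into the fields the log prints:
     * phase tag in bit 0, SC in bits 8:1, SCT in bits 11:9, More in
     * bit 14, Do Not Retry in bit 15 (command retry delay in bits
     * 13:12 is not shown in these log lines and is omitted). */
    #include <stdint.h>
    #include <stdio.h>

    struct cqe_status {
        unsigned p, sc, sct, m, dnr;
    };

    static struct cqe_status decode_status(uint16_t st)
    {
        struct cqe_status s = {
            .p   = st & 0x1,
            .sc  = (st >> 1) & 0xff,
            .sct = (st >> 9) & 0x7,
            .m   = (st >> 14) & 0x1,
            .dnr = (st >> 15) & 0x1,
        };
        return s;
    }

    int main(void)
    {
        /* SCT 0x0 / SC 0x22: the "(00/22)" transient transport error above. */
        uint16_t st = (uint16_t)((0x0 << 9) | (0x22 << 1));
        struct cqe_status s = decode_status(st);

        printf("(%02x/%02x) p:%u m:%u dnr:%u\n", s.sct, s.sc, s.p, s.m, s.dnr);
        return 0;
    }

Run, this prints "(00/22) p:0 m:0 dnr:0", matching the completion lines in the log.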
00:27:45.817 [2024-12-11 10:05:55.372646] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90
00:27:45.818 [2024-12-11 10:05:55.372731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.818 [2024-12-11 10:05:55.372752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0
[... the pattern continues for WRITE commands at varying LBAs, the console prefix advancing to 00:27:46.078/00:27:46.079, through 10:05:55.549 ...]
00:27:46.079 [2024-12-11 10:05:55.549830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90
00:27:46.079 [2024-12-11 10:05:55.549883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.079 [2024-12-11 10:05:55.549902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:27:46.079 [2024-12-11 10:05:55.554425] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90
00:27:46.079 [2024-12-11 10:05:55.554478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.079 [2024-12-11 10:05:55.554496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:46.079 [2024-12-11 10:05:55.559372] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.079 [2024-12-11 10:05:55.559422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.079 [2024-12-11 10:05:55.559439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:46.079 [2024-12-11 10:05:55.564356] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.079 [2024-12-11 10:05:55.564418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.079 [2024-12-11 10:05:55.564435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:46.079 [2024-12-11 10:05:55.569086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.079 [2024-12-11 10:05:55.569148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.079 [2024-12-11 10:05:55.569166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:46.079 [2024-12-11 10:05:55.573724] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.079 [2024-12-11 10:05:55.573794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.079 [2024-12-11 10:05:55.573812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:46.079 [2024-12-11 10:05:55.578324] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.079 [2024-12-11 10:05:55.578375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.079 [2024-12-11 10:05:55.578394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:46.079 [2024-12-11 10:05:55.582856] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.079 [2024-12-11 10:05:55.582951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.079 [2024-12-11 10:05:55.582969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:46.079 [2024-12-11 10:05:55.587408] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.079 [2024-12-11 10:05:55.587459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:46.079 [2024-12-11 10:05:55.587477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:46.079 [2024-12-11 10:05:55.592171] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.079 [2024-12-11 10:05:55.592311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.079 [2024-12-11 10:05:55.592329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:46.079 [2024-12-11 10:05:55.597045] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.079 [2024-12-11 10:05:55.597108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.079 [2024-12-11 10:05:55.597126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:46.079 [2024-12-11 10:05:55.601660] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.079 [2024-12-11 10:05:55.601709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.079 [2024-12-11 10:05:55.601727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:46.079 [2024-12-11 10:05:55.606286] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.079 [2024-12-11 10:05:55.606338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.079 [2024-12-11 10:05:55.606359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:46.079 [2024-12-11 10:05:55.610979] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.079 [2024-12-11 10:05:55.611040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.079 [2024-12-11 10:05:55.611058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:46.079 [2024-12-11 10:05:55.615657] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.079 [2024-12-11 10:05:55.615710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.079 [2024-12-11 10:05:55.615727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:46.079 [2024-12-11 10:05:55.620285] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.079 [2024-12-11 10:05:55.620338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15104 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.079 [2024-12-11 10:05:55.620355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:46.079 [2024-12-11 10:05:55.624822] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.079 [2024-12-11 10:05:55.624883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.079 [2024-12-11 10:05:55.624900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:46.079 [2024-12-11 10:05:55.629394] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.080 [2024-12-11 10:05:55.629457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.080 [2024-12-11 10:05:55.629475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:46.080 [2024-12-11 10:05:55.634143] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.080 [2024-12-11 10:05:55.634199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.080 [2024-12-11 10:05:55.634223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:46.080 [2024-12-11 10:05:55.639404] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.080 [2024-12-11 10:05:55.639474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.080 [2024-12-11 10:05:55.639493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:46.080 [2024-12-11 10:05:55.644439] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.080 [2024-12-11 10:05:55.644494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.080 [2024-12-11 10:05:55.644512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:46.080 [2024-12-11 10:05:55.650328] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.080 [2024-12-11 10:05:55.650389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.080 [2024-12-11 10:05:55.650408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:46.340 [2024-12-11 10:05:55.655673] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.340 [2024-12-11 10:05:55.655726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 
nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.340 [2024-12-11 10:05:55.655744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:46.340 [2024-12-11 10:05:55.660722] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.340 [2024-12-11 10:05:55.660777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.340 [2024-12-11 10:05:55.660795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:46.340 [2024-12-11 10:05:55.665513] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.340 [2024-12-11 10:05:55.665576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.340 [2024-12-11 10:05:55.665594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:46.340 [2024-12-11 10:05:55.670268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.340 [2024-12-11 10:05:55.670329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.340 [2024-12-11 10:05:55.670347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:46.340 [2024-12-11 10:05:55.675206] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.340 [2024-12-11 10:05:55.675287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.340 [2024-12-11 10:05:55.675305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:46.340 [2024-12-11 10:05:55.680089] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.340 [2024-12-11 10:05:55.680154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.340 [2024-12-11 10:05:55.680172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:46.340 [2024-12-11 10:05:55.684780] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.340 [2024-12-11 10:05:55.684833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.340 [2024-12-11 10:05:55.684851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:46.340 [2024-12-11 10:05:55.689202] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.340 [2024-12-11 10:05:55.689264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:9 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.340 [2024-12-11 10:05:55.689282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:46.340 [2024-12-11 10:05:55.693866] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.340 [2024-12-11 10:05:55.693919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.340 [2024-12-11 10:05:55.693937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:46.341 [2024-12-11 10:05:55.699086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.341 [2024-12-11 10:05:55.699148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.341 [2024-12-11 10:05:55.699166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:46.341 [2024-12-11 10:05:55.704073] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.341 [2024-12-11 10:05:55.704127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.341 [2024-12-11 10:05:55.704145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:46.341 [2024-12-11 10:05:55.708598] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.341 [2024-12-11 10:05:55.708686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.341 [2024-12-11 10:05:55.708704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:46.341 [2024-12-11 10:05:55.713040] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.341 [2024-12-11 10:05:55.713094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.341 [2024-12-11 10:05:55.713112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:46.341 [2024-12-11 10:05:55.717259] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.341 [2024-12-11 10:05:55.717321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.341 [2024-12-11 10:05:55.717338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:46.341 [2024-12-11 10:05:55.721512] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.341 [2024-12-11 10:05:55.721611] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.341 [2024-12-11 10:05:55.721628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:46.341 6696.00 IOPS, 837.00 MiB/s [2024-12-11T09:05:55.916Z] [2024-12-11 10:05:55.726834] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.341 [2024-12-11 10:05:55.726898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.341 [2024-12-11 10:05:55.726916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:46.341 [2024-12-11 10:05:55.731055] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.341 [2024-12-11 10:05:55.731114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.341 [2024-12-11 10:05:55.731139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:46.341 [2024-12-11 10:05:55.735183] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.341 [2024-12-11 10:05:55.735240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.341 [2024-12-11 10:05:55.735258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:46.341 [2024-12-11 10:05:55.739374] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.341 [2024-12-11 10:05:55.739436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.341 [2024-12-11 10:05:55.739454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:46.341 [2024-12-11 10:05:55.743560] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.341 [2024-12-11 10:05:55.743620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.341 [2024-12-11 10:05:55.743638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:46.341 [2024-12-11 10:05:55.747725] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.341 [2024-12-11 10:05:55.747778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.341 [2024-12-11 10:05:55.747796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:46.341 [2024-12-11 10:05:55.751946] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with 
pdu=0x200016efef90 00:27:46.341 [2024-12-11 10:05:55.752018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.341 [2024-12-11 10:05:55.752037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:46.341 [2024-12-11 10:05:55.756132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.341 [2024-12-11 10:05:55.756201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.341 [2024-12-11 10:05:55.756226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:46.341 [2024-12-11 10:05:55.760261] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.341 [2024-12-11 10:05:55.760328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.341 [2024-12-11 10:05:55.760347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:46.341 [2024-12-11 10:05:55.764389] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.341 [2024-12-11 10:05:55.764460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.341 [2024-12-11 10:05:55.764479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:46.341 [2024-12-11 10:05:55.768533] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.341 [2024-12-11 10:05:55.768612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.341 [2024-12-11 10:05:55.768631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:46.341 [2024-12-11 10:05:55.772682] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.341 [2024-12-11 10:05:55.772756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.341 [2024-12-11 10:05:55.772775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:46.341 [2024-12-11 10:05:55.776850] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.341 [2024-12-11 10:05:55.776928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.341 [2024-12-11 10:05:55.776946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:46.341 [2024-12-11 10:05:55.780992] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.341 [2024-12-11 10:05:55.781062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.341 [2024-12-11 10:05:55.781081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:46.341 [2024-12-11 10:05:55.785122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.341 [2024-12-11 10:05:55.785176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.341 [2024-12-11 10:05:55.785193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:46.341 [2024-12-11 10:05:55.789239] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.341 [2024-12-11 10:05:55.789311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.341 [2024-12-11 10:05:55.789330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:46.341 [2024-12-11 10:05:55.793409] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.341 [2024-12-11 10:05:55.793468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.341 [2024-12-11 10:05:55.793486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:46.341 [2024-12-11 10:05:55.797468] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.341 [2024-12-11 10:05:55.797520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.341 [2024-12-11 10:05:55.797538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:46.341 [2024-12-11 10:05:55.801700] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.341 [2024-12-11 10:05:55.801758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.341 [2024-12-11 10:05:55.801776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:46.341 [2024-12-11 10:05:55.805825] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.341 [2024-12-11 10:05:55.805878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.341 [2024-12-11 10:05:55.805896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:46.341 [2024-12-11 10:05:55.810003] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.341 [2024-12-11 10:05:55.810063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.341 [2024-12-11 10:05:55.810080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:46.341 [2024-12-11 10:05:55.814188] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.341 [2024-12-11 10:05:55.814258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.342 [2024-12-11 10:05:55.814276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:46.342 [2024-12-11 10:05:55.818378] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.342 [2024-12-11 10:05:55.818434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.342 [2024-12-11 10:05:55.818452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:46.342 [2024-12-11 10:05:55.822544] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.342 [2024-12-11 10:05:55.822658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.342 [2024-12-11 10:05:55.822676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:46.342 [2024-12-11 10:05:55.826957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.342 [2024-12-11 10:05:55.827046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.342 [2024-12-11 10:05:55.827064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:46.342 [2024-12-11 10:05:55.831108] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.342 [2024-12-11 10:05:55.831191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.342 [2024-12-11 10:05:55.831209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:46.342 [2024-12-11 10:05:55.835430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.342 [2024-12-11 10:05:55.835487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.342 [2024-12-11 10:05:55.835504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:46.342 [2024-12-11 10:05:55.839610] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.342 [2024-12-11 10:05:55.839663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.342 [2024-12-11 10:05:55.839684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:46.342 [2024-12-11 10:05:55.843788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.342 [2024-12-11 10:05:55.843850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.342 [2024-12-11 10:05:55.843868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:46.342 [2024-12-11 10:05:55.847916] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.342 [2024-12-11 10:05:55.847971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.342 [2024-12-11 10:05:55.847988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:46.342 [2024-12-11 10:05:55.851985] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.342 [2024-12-11 10:05:55.852048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.342 [2024-12-11 10:05:55.852067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:46.342 [2024-12-11 10:05:55.856126] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.342 [2024-12-11 10:05:55.856197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.342 [2024-12-11 10:05:55.856215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:46.342 [2024-12-11 10:05:55.860261] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.342 [2024-12-11 10:05:55.860372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.342 [2024-12-11 10:05:55.860389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:46.342 [2024-12-11 10:05:55.864396] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.342 [2024-12-11 10:05:55.864457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.342 [2024-12-11 10:05:55.864475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:46.342 
[2024-12-11 10:05:55.868522] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.342 [2024-12-11 10:05:55.868574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.342 [2024-12-11 10:05:55.868592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:46.342 [2024-12-11 10:05:55.872627] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.342 [2024-12-11 10:05:55.872688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.342 [2024-12-11 10:05:55.872705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:46.342 [2024-12-11 10:05:55.876917] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.342 [2024-12-11 10:05:55.876976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.342 [2024-12-11 10:05:55.876994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:46.342 [2024-12-11 10:05:55.881015] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.342 [2024-12-11 10:05:55.881075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.342 [2024-12-11 10:05:55.881092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:46.342 [2024-12-11 10:05:55.885295] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.342 [2024-12-11 10:05:55.885356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.342 [2024-12-11 10:05:55.885374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:46.342 [2024-12-11 10:05:55.889593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.342 [2024-12-11 10:05:55.889661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.342 [2024-12-11 10:05:55.889680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:46.342 [2024-12-11 10:05:55.893738] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.342 [2024-12-11 10:05:55.893802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.342 [2024-12-11 10:05:55.893821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 
dnr:0 00:27:46.342 [2024-12-11 10:05:55.897915] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.342 [2024-12-11 10:05:55.898000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.342 [2024-12-11 10:05:55.898019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:46.342 [2024-12-11 10:05:55.902054] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.342 [2024-12-11 10:05:55.902107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.342 [2024-12-11 10:05:55.902125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:46.342 [2024-12-11 10:05:55.906190] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.342 [2024-12-11 10:05:55.906314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.342 [2024-12-11 10:05:55.906331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:46.342 [2024-12-11 10:05:55.910324] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.342 [2024-12-11 10:05:55.910375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.342 [2024-12-11 10:05:55.910393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:46.603 [2024-12-11 10:05:55.914516] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.603 [2024-12-11 10:05:55.914567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.603 [2024-12-11 10:05:55.914585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:46.603 [2024-12-11 10:05:55.918688] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.603 [2024-12-11 10:05:55.918742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.603 [2024-12-11 10:05:55.918761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:46.603 [2024-12-11 10:05:55.922843] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.603 [2024-12-11 10:05:55.922902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.603 [2024-12-11 10:05:55.922920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 
cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:46.603 [2024-12-11 10:05:55.926989] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.603 [2024-12-11 10:05:55.927066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.603 [2024-12-11 10:05:55.927084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:46.603 [2024-12-11 10:05:55.931236] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.603 [2024-12-11 10:05:55.931287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.603 [2024-12-11 10:05:55.931305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:46.603 [2024-12-11 10:05:55.935696] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.603 [2024-12-11 10:05:55.935749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.603 [2024-12-11 10:05:55.935766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:46.603 [2024-12-11 10:05:55.940490] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.603 [2024-12-11 10:05:55.940547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.603 [2024-12-11 10:05:55.940564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:46.603 [2024-12-11 10:05:55.945537] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.603 [2024-12-11 10:05:55.945598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.603 [2024-12-11 10:05:55.945615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:46.603 [2024-12-11 10:05:55.950353] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.603 [2024-12-11 10:05:55.950420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.603 [2024-12-11 10:05:55.950447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:46.603 [2024-12-11 10:05:55.954963] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.603 [2024-12-11 10:05:55.955027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.603 [2024-12-11 10:05:55.955044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:46.603 [2024-12-11 10:05:55.960243] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.603 [2024-12-11 10:05:55.960300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.603 [2024-12-11 10:05:55.960317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:46.603 [2024-12-11 10:05:55.965546] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.603 [2024-12-11 10:05:55.965601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.603 [2024-12-11 10:05:55.965619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:46.603 [2024-12-11 10:05:55.970396] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.603 [2024-12-11 10:05:55.970449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.603 [2024-12-11 10:05:55.970467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:46.603 [2024-12-11 10:05:55.975123] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.603 [2024-12-11 10:05:55.975181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.603 [2024-12-11 10:05:55.975199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:46.603 [2024-12-11 10:05:55.980005] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.603 [2024-12-11 10:05:55.980056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.603 [2024-12-11 10:05:55.980074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:46.603 [2024-12-11 10:05:55.985027] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.603 [2024-12-11 10:05:55.985079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.603 [2024-12-11 10:05:55.985097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:46.603 [2024-12-11 10:05:55.990228] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:46.603 [2024-12-11 10:05:55.990301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.603 [2024-12-11 10:05:55.990319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:27:46.603 [2024-12-11 10:05:55.995081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90
00:27:46.603 [2024-12-11 10:05:55.995137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.603 [2024-12-11 10:05:55.995155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0
[... the same three-entry sequence (data digest error, WRITE command notice, TRANSIENT TRANSPORT ERROR completion) repeats for every subsequent WRITE from 10:05:56.000 through 10:05:56.366; only lba varies per command, sqhd cycles 001b/003b/005b/007b, and cid advances from 9 to 10 at 10:05:56.334 ...]
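The "(00/22)" printed in each completion is the NVMe status-code-type/status-code pair: SCT 0x0 (Generic Command Status) with SC 0x22, which the NVMe base specification defines as Transient Transport Error, and dnr:0 means the Do Not Retry bit is clear, so the host is permitted to resubmit the command. A minimal, self-contained sketch of how those fields unpack from the 16-bit completion status word (field layout per the NVMe completion queue entry; the helper names below are illustrative, not SPDK's):

    #include <stdint.h>
    #include <stdio.h>

    /* Unpack the 16-bit status field of an NVMe completion (CQE dword 3,
     * bits 31:16). Layout per the NVMe base spec: P (bit 0), SC (bits 8:1),
     * SCT (bits 11:9), CRD (bits 13:12), M (bit 14), DNR (bit 15). */
    struct status_fields {
        unsigned p, sc, sct, crd, m, dnr;
    };

    static struct status_fields decode_status(uint16_t status)
    {
        struct status_fields f;

        f.p   = status & 0x1;          /* phase tag */
        f.sc  = (status >> 1) & 0xff;  /* status code */
        f.sct = (status >> 9) & 0x7;   /* status code type */
        f.crd = (status >> 12) & 0x3;  /* command retry delay */
        f.m   = (status >> 14) & 0x1;  /* more information available */
        f.dnr = (status >> 15) & 0x1;  /* do not retry */
        return f;
    }

    int main(void)
    {
        /* Status matching the completions above: SCT=0x0, SC=0x22, M=0, DNR=0. */
        struct status_fields f = decode_status(0x22 << 1);

        printf("(%02x/%02x) m:%u dnr:%u -> %s, %s\n", f.sct, f.sc, f.m, f.dnr,
               (f.sct == 0x0 && f.sc == 0x22) ? "TRANSIENT TRANSPORT ERROR" : "other",
               f.dnr ? "do not retry" : "retryable");
        return 0;
    }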
00:27:46.866 [2024-12-11 10:05:56.372558] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90
00:27:46.866 [2024-12-11 10:05:56.372690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.866 [2024-12-11 10:05:56.372708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0
[... pattern continues unchanged on cid:10 through 10:05:56.661 ...]
DATA BLOCK TRANSPORT 0x0 00:27:47.129 [2024-12-11 10:05:56.665139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:47.129 [2024-12-11 10:05:56.669014] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:47.129 [2024-12-11 10:05:56.669070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.129 [2024-12-11 10:05:56.669088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:47.129 [2024-12-11 10:05:56.672976] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:47.129 [2024-12-11 10:05:56.673035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.129 [2024-12-11 10:05:56.673053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:47.129 [2024-12-11 10:05:56.676928] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:47.129 [2024-12-11 10:05:56.676984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.129 [2024-12-11 10:05:56.677002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:47.129 [2024-12-11 10:05:56.680849] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:47.129 [2024-12-11 10:05:56.680909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.129 [2024-12-11 10:05:56.680928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:47.129 [2024-12-11 10:05:56.684827] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:47.129 [2024-12-11 10:05:56.684880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.129 [2024-12-11 10:05:56.684898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:47.129 [2024-12-11 10:05:56.688785] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:47.129 [2024-12-11 10:05:56.688837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.129 [2024-12-11 10:05:56.688855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:47.129 [2024-12-11 10:05:56.692731] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:47.129 [2024-12-11 10:05:56.692797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 
nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.129 [2024-12-11 10:05:56.692815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:47.129 [2024-12-11 10:05:56.696775] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:47.129 [2024-12-11 10:05:56.696839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.129 [2024-12-11 10:05:56.696858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:47.389 [2024-12-11 10:05:56.701084] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:47.389 [2024-12-11 10:05:56.701166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-12-11 10:05:56.701184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:47.389 [2024-12-11 10:05:56.706077] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:47.389 [2024-12-11 10:05:56.706182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-12-11 10:05:56.706200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:47.389 [2024-12-11 10:05:56.711255] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:47.389 [2024-12-11 10:05:56.711347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-12-11 10:05:56.711365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:47.389 [2024-12-11 10:05:56.715697] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:47.389 [2024-12-11 10:05:56.715750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-12-11 10:05:56.715768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:47.389 [2024-12-11 10:05:56.720021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:47.389 [2024-12-11 10:05:56.720103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.389 [2024-12-11 10:05:56.720121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:47.389 [2024-12-11 10:05:56.724303] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaf310) with pdu=0x200016efef90 00:27:47.389 [2024-12-11 10:05:56.724356] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:47.389 [2024-12-11 10:05:56.724374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:27:47.389 6572.00 IOPS, 821.50 MiB/s
00:27:47.389 Latency(us)
00:27:47.389 [2024-12-11T09:05:56.964Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:47.389 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:27:47.389 nvme0n1 : 2.00 6569.42 821.18 0.00 0.00 2431.45 1786.64 7552.24
00:27:47.389 [2024-12-11T09:05:56.964Z] ===================================================================================================================
00:27:47.389 [2024-12-11T09:05:56.964Z] Total : 6569.42 821.18 0.00 0.00 2431.45 1786.64 7552.24
00:27:47.389 {
00:27:47.389 "results": [
00:27:47.389 {
00:27:47.389 "job": "nvme0n1",
00:27:47.389 "core_mask": "0x2",
00:27:47.389 "workload": "randwrite",
00:27:47.389 "status": "finished",
00:27:47.389 "queue_depth": 16,
00:27:47.389 "io_size": 131072,
00:27:47.389 "runtime": 2.003374,
00:27:47.389 "iops": 6569.417392858248,
00:27:47.389 "mibps": 821.177174107281,
00:27:47.389 "io_failed": 0,
00:27:47.389 "io_timeout": 0,
00:27:47.389 "avg_latency_us": 2431.454638053991,
00:27:47.389 "min_latency_us": 1786.6361904761904,
00:27:47.389 "max_latency_us": 7552.243809523809
00:27:47.389 }
00:27:47.389 ],
00:27:47.389 "core_count": 1
00:27:47.389 }
00:27:47.389 10:05:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:47.389 10:05:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:47.389 10:05:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:47.389 | .driver_specific
00:27:47.389 | .nvme_error
00:27:47.389 | .status_code
00:27:47.389 | .command_transient_transport_error'
00:27:47.389 10:05:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:47.389 10:05:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 424 > 0 ))
00:27:47.389 10:05:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 238473
00:27:47.389 10:05:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 238473 ']'
00:27:47.389 10:05:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 238473
00:27:47.389 10:05:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:27:47.389 10:05:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:47.389 10:05:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 238473
00:27:47.648 10:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:27:47.648 10:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:27:47.648 10:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 238473'
killing process with pid 238473
10:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 238473
00:27:47.648 Received shutdown signal, test time was about 2.000000 seconds
00:27:47.648
00:27:47.648 Latency(us)
00:27:47.648 [2024-12-11T09:05:57.223Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:47.648 [2024-12-11T09:05:57.223Z] ===================================================================================================================
00:27:47.648 [2024-12-11T09:05:57.223Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:47.648 10:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 238473
00:27:47.648 10:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 236830
00:27:47.648 10:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 236830 ']'
00:27:47.649 10:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 236830
00:27:47.649 10:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:27:47.649 10:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:47.649 10:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 236830
00:27:47.649 10:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:27:47.649 10:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:27:47.649 10:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 236830'
killing process with pid 236830
10:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 236830
00:27:47.908 10:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 236830
00:27:47.908
00:27:47.908 real 0m13.843s
00:27:47.908 user 0m26.492s
00:27:47.908 sys 0m4.569s
00:27:47.908 10:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:47.908 10:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:47.908 ************************************
00:27:47.908 END TEST nvmf_digest_error
00:27:47.908 ************************************
00:27:47.908 10:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:27:47.908 10:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:27:47.908 10:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup
00:27:47.908 10:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync
00:27:47.908 10:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:27:47.908 10:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e
00:27:47.908 10:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20}
00:27:47.908 10:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:27:47.908 rmmod nvme_tcp
00:27:47.908 rmmod nvme_fabrics
00:27:47.908 rmmod nvme_keyring
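For reference, the get_transient_errcount helper traced above reduces to one RPC call piped through jq. A minimal standalone sketch, using only paths already visible in this log:

# Count completions that ended as COMMAND TRANSIENT TRANSPORT ERROR; the check
# passes when the deliberately corrupted data digests were actually detected.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
(( errcount > 0 ))   # the run above counted 424 such errors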
00:27:47.908 10:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:48.167 10:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:27:48.167 10:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:27:48.167 10:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 236830 ']' 00:27:48.167 10:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 236830 00:27:48.167 10:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 236830 ']' 00:27:48.167 10:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 236830 00:27:48.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (236830) - No such process 00:27:48.167 10:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 236830 is not found' 00:27:48.167 Process with pid 236830 is not found 00:27:48.168 10:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:48.168 10:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:48.168 10:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:48.168 10:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:27:48.168 10:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:27:48.168 10:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:48.168 10:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:27:48.168 10:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:48.168 10:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:48.168 10:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:48.168 10:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:48.168 10:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:50.073 10:05:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:50.073 00:27:50.073 real 0m37.681s 00:27:50.073 user 0m56.117s 00:27:50.073 sys 0m14.307s 00:27:50.073 10:05:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:50.073 10:05:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:50.073 ************************************ 00:27:50.073 END TEST nvmf_digest 00:27:50.073 ************************************ 00:27:50.073 10:05:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:27:50.073 10:05:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:27:50.073 10:05:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:27:50.073 10:05:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:50.073 10:05:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:50.073 10:05:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:50.073 10:05:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.073 
************************************ 00:27:50.073 START TEST nvmf_bdevperf 00:27:50.073 ************************************ 00:27:50.073 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:50.333 * Looking for test storage... 00:27:50.333 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:50.333 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:50.333 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:27:50.333 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:50.333 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:50.333 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:50.333 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:50.333 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:50.333 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:27:50.333 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:27:50.333 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:27:50.333 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:27:50.333 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:27:50.333 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:27:50.333 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:27:50.333 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:50.333 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:27:50.333 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:27:50.333 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:50.333 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:50.333 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:27:50.333 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:27:50.333 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:50.333 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:27:50.333 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:27:50.333 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:27:50.333 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:27:50.333 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:50.333 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:27:50.333 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:27:50.333 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:50.333 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:50.333 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:27:50.333 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:50.333 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:50.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.333 --rc genhtml_branch_coverage=1 00:27:50.333 --rc genhtml_function_coverage=1 00:27:50.334 --rc genhtml_legend=1 00:27:50.334 --rc geninfo_all_blocks=1 00:27:50.334 --rc geninfo_unexecuted_blocks=1 00:27:50.334 00:27:50.334 ' 00:27:50.334 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:50.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.334 --rc genhtml_branch_coverage=1 00:27:50.334 --rc genhtml_function_coverage=1 00:27:50.334 --rc genhtml_legend=1 00:27:50.334 --rc geninfo_all_blocks=1 00:27:50.334 --rc geninfo_unexecuted_blocks=1 00:27:50.334 00:27:50.334 ' 00:27:50.334 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:50.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.334 --rc genhtml_branch_coverage=1 00:27:50.334 --rc genhtml_function_coverage=1 00:27:50.334 --rc genhtml_legend=1 00:27:50.334 --rc geninfo_all_blocks=1 00:27:50.334 --rc geninfo_unexecuted_blocks=1 00:27:50.334 00:27:50.334 ' 00:27:50.334 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:50.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.334 --rc genhtml_branch_coverage=1 00:27:50.334 --rc genhtml_function_coverage=1 00:27:50.334 --rc genhtml_legend=1 00:27:50.334 --rc geninfo_all_blocks=1 00:27:50.334 --rc geninfo_unexecuted_blocks=1 00:27:50.334 00:27:50.334 ' 00:27:50.334 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:50.334 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:27:50.334 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:50.334 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:50.334 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:50.334 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:50.334 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:50.334 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:50.334 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:50.334 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:50.334 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:50.334 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:50.334 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:27:50.334 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:27:50.334 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:50.334 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:50.334 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:50.334 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:50.334 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:50.334 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:27:50.334 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:50.334 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:50.334 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:50.334 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.334 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.334 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.334 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:27:50.334 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.334 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:27:50.334 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:50.334 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:50.334 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:50.334 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:50.334 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:50.334 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:50.334 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:50.334 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:50.334 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:50.334 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:50.334 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:50.334 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:50.334 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:27:50.334 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:50.334 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:50.334 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:50.334 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:50.334 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:50.334 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:50.334 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:50.334 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:50.334 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:50.334 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:50.334 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:27:50.334 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:56.902 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:56.902 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:27:56.902 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:56.902 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:56.902 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:56.902 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:56.902 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:56.902 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:27:56.902 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:56.902 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:27:56.902 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:27:56.902 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:27:56.902 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:27:56.902 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:27:56.902 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:27:56.902 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:56.902 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:56.902 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:56.902 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:56.902 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:56.902 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:56.902 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:56.902 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:56.902 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:56.902 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:56.902 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:56.903 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:56.903 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:56.903 Found net devices under 0000:af:00.0: cvl_0_0 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:56.903 Found net devices under 0000:af:00.1: cvl_0_1 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:56.903 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:57.162 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:57.162 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:57.162 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:57.162 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:57.162 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:57.162 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.340 ms 00:27:57.162 00:27:57.162 --- 10.0.0.2 ping statistics --- 00:27:57.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:57.162 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:27:57.162 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:57.162 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:57.162 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:27:57.162 00:27:57.162 --- 10.0.0.1 ping statistics --- 00:27:57.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:57.162 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:27:57.162 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:57.162 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:27:57.162 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:57.162 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:57.162 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:57.162 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:57.162 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:57.162 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:57.162 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:57.162 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:27:57.162 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:57.162 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:57.162 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:57.162 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:57.163 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=243072 00:27:57.163 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:57.163 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 243072 00:27:57.163 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 243072 ']' 00:27:57.163 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:57.163 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:57.163 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:57.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:57.163 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:57.163 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:57.163 [2024-12-11 10:06:06.635713] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
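Condensed, the target/initiator topology that the two ping checks above verify was built from the netns commands traced earlier; a sketch (cvl_0_0 and cvl_0_1 are the two E810 ports discovered above, and the trace additionally tags the iptables rule with an SPDK_NVMF comment):

# Target port moves into its own network namespace; the initiator stays in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP traffic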
00:27:57.163 [2024-12-11 10:06:06.635764] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:57.163 [2024-12-11 10:06:06.723568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:57.422 [2024-12-11 10:06:06.765409] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:57.422 [2024-12-11 10:06:06.765441] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:57.422 [2024-12-11 10:06:06.765448] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:57.422 [2024-12-11 10:06:06.765454] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:57.422 [2024-12-11 10:06:06.765459] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:57.422 [2024-12-11 10:06:06.766888] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:27:57.422 [2024-12-11 10:06:06.767014] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:27:57.422 [2024-12-11 10:06:06.767016] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:27:57.422 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:57.422 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:27:57.422 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:57.422 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:57.422 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:57.422 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:57.422 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:57.422 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.422 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:57.422 [2024-12-11 10:06:06.915722] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:57.422 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.422 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:57.422 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.422 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:57.422 Malloc0 00:27:57.422 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.422 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:57.422 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.422 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:57.422 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
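The namespace and listener calls traced just below complete the target bring-up. Stripped of the rpc_cmd wrapper (which talks to the nvmf_tgt inside the namespace over the default /var/tmp/spdk.sock), the whole configuration is this five-call sequence, sketched with the flags exactly as traced:

rpc.py nvmf_create_transport -t tcp -o -u 8192       # TCP transport, options as traced above
rpc.py bdev_malloc_create 64 512 -b Malloc0          # 64 MiB RAM bdev, 512 B blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420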
00:27:57.422 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:57.422 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.422 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:57.422 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.422 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:57.422 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.422 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:57.422 [2024-12-11 10:06:06.984892] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:57.422 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.422 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:27:57.422 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:27:57.422 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:27:57.422 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:27:57.422 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:57.422 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:57.422 { 00:27:57.422 "params": { 00:27:57.422 "name": "Nvme$subsystem", 00:27:57.422 "trtype": "$TEST_TRANSPORT", 00:27:57.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:57.422 "adrfam": "ipv4", 00:27:57.422 "trsvcid": "$NVMF_PORT", 00:27:57.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:57.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:57.422 "hdgst": ${hdgst:-false}, 00:27:57.422 "ddgst": ${ddgst:-false} 00:27:57.422 }, 00:27:57.422 "method": "bdev_nvme_attach_controller" 00:27:57.422 } 00:27:57.422 EOF 00:27:57.422 )") 00:27:57.422 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:27:57.682 10:06:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:27:57.682 10:06:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:27:57.682 10:06:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:57.682 "params": { 00:27:57.682 "name": "Nvme1", 00:27:57.682 "trtype": "tcp", 00:27:57.682 "traddr": "10.0.0.2", 00:27:57.682 "adrfam": "ipv4", 00:27:57.682 "trsvcid": "4420", 00:27:57.682 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:57.682 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:57.682 "hdgst": false, 00:27:57.682 "ddgst": false 00:27:57.682 }, 00:27:57.682 "method": "bdev_nvme_attach_controller" 00:27:57.682 }' 00:27:57.682 [2024-12-11 10:06:07.034854] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
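The JSON fragment printed by gen_nvmf_target_json above is what makes bdevperf attach the remote namespace as bdev Nvme1n1. A rough standalone equivalent of the 1-second run follows; this is a sketch only: the harness feeds the config over /dev/fd/62, and the fragment is assumed to sit inside the usual SPDK subsystems/bdev/config envelope, which the trace does not print.

# Hypothetical re-run with the generated config written to a file instead of an fd:
cat > /tmp/nvme1.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [ {
    "method": "bdev_nvme_attach_controller",
    "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false, "ddgst": false } } ] } ] }
EOF
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json /tmp/nvme1.json -q 128 -o 4096 -w verify -t 1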
00:27:57.682 [2024-12-11 10:06:07.034901] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid243100 ]
00:27:57.682 [2024-12-11 10:06:07.117805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:57.682 [2024-12-11 10:06:07.157873] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:27:57.941 Running I/O for 1 seconds...
00:27:58.877 11242.00 IOPS, 43.91 MiB/s
00:27:58.877 Latency(us)
00:27:58.877 [2024-12-11T09:06:08.452Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:58.877 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:27:58.877 Verification LBA range: start 0x0 length 0x4000
00:27:58.877 Nvme1n1 : 1.00 11325.65 44.24 0.00 0.00 11257.57 2418.59 12857.54
00:27:58.877 [2024-12-11T09:06:08.452Z] ===================================================================================================================
00:27:58.877 [2024-12-11T09:06:08.452Z] Total : 11325.65 44.24 0.00 0.00 11257.57 2418.59 12857.54
00:27:59.136 10:06:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=243487
00:27:59.136 10:06:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:27:59.136 10:06:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:27:59.136 10:06:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:27:59.136 10:06:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=()
00:27:59.136 10:06:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config
00:27:59.136 10:06:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:27:59.136 10:06:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:27:59.136 {
00:27:59.136 "params": {
00:27:59.136 "name": "Nvme$subsystem",
00:27:59.136 "trtype": "$TEST_TRANSPORT",
00:27:59.136 "traddr": "$NVMF_FIRST_TARGET_IP",
00:27:59.136 "adrfam": "ipv4",
00:27:59.136 "trsvcid": "$NVMF_PORT",
00:27:59.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:27:59.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:27:59.136 "hdgst": ${hdgst:-false},
00:27:59.136 "ddgst": ${ddgst:-false}
00:27:59.136 },
00:27:59.136 "method": "bdev_nvme_attach_controller"
00:27:59.136 }
00:27:59.136 EOF
00:27:59.136 )")
00:27:59.136 10:06:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat
00:27:59.136 10:06:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq .
00:27:59.136 10:06:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:27:59.136 10:06:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:59.136 "params": { 00:27:59.136 "name": "Nvme1", 00:27:59.136 "trtype": "tcp", 00:27:59.136 "traddr": "10.0.0.2", 00:27:59.136 "adrfam": "ipv4", 00:27:59.136 "trsvcid": "4420", 00:27:59.136 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:59.136 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:59.136 "hdgst": false, 00:27:59.136 "ddgst": false 00:27:59.136 }, 00:27:59.136 "method": "bdev_nvme_attach_controller" 00:27:59.136 }' 00:27:59.136 [2024-12-11 10:06:08.527255] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:27:59.136 [2024-12-11 10:06:08.527303] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid243487 ] 00:27:59.136 [2024-12-11 10:06:08.605744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:59.136 [2024-12-11 10:06:08.645233] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:27:59.395 Running I/O for 15 seconds... 00:28:01.707 11345.00 IOPS, 44.32 MiB/s [2024-12-11T09:06:11.543Z] 11434.00 IOPS, 44.66 MiB/s [2024-12-11T09:06:11.543Z] 10:06:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 243072 00:28:01.968 10:06:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:28:01.968 [2024-12-11 10:06:11.494322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:103128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.968 [2024-12-11 10:06:11.494364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.968 [2024-12-11 10:06:11.494384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:103264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.968 [2024-12-11 10:06:11.494393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.968 [2024-12-11 10:06:11.494402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:103272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.968 [2024-12-11 10:06:11.494411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.968 [2024-12-11 10:06:11.494419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:103280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.968 [2024-12-11 10:06:11.494425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.968 [2024-12-11 10:06:11.494434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:103288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.968 [2024-12-11 10:06:11.494440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.968 [2024-12-11 10:06:11.494456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:103296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.968 [2024-12-11 
10:06:11.494463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.968 [2024-12-11 10:06:11.494473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.968 [2024-12-11 10:06:11.494481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.968 [2024-12-11 10:06:11.494489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:103312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.968 [2024-12-11 10:06:11.494498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.968 [2024-12-11 10:06:11.494506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:103320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.969 [2024-12-11 10:06:11.494514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.969 [2024-12-11 10:06:11.494525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:103328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.969 [2024-12-11 10:06:11.494532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.969 [2024-12-11 10:06:11.494541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:103336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.969 [2024-12-11 10:06:11.494550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.969 [2024-12-11 10:06:11.494559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:103344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.969 [2024-12-11 10:06:11.494568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.969 [2024-12-11 10:06:11.494577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:103352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.969 [2024-12-11 10:06:11.494586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.969 [2024-12-11 10:06:11.494595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:103360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.969 [2024-12-11 10:06:11.494604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.969 [2024-12-11 10:06:11.494613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:103368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.969 [2024-12-11 10:06:11.494621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.969 [2024-12-11 10:06:11.494631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:103376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.969 [2024-12-11 10:06:11.494640] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.969 [2024-12-11 10:06:11.494652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:103384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.969 [2024-12-11 10:06:11.494661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.969 [2024-12-11 10:06:11.494670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:103392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.969 [2024-12-11 10:06:11.494680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.969 [2024-12-11 10:06:11.494688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:103400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.969 [2024-12-11 10:06:11.494695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.969 [2024-12-11 10:06:11.494703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:103408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.969 [2024-12-11 10:06:11.494709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.969 [2024-12-11 10:06:11.494717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:103416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.969 [2024-12-11 10:06:11.494724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.969 [2024-12-11 10:06:11.494732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:103424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.969 [2024-12-11 10:06:11.494738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.969 [2024-12-11 10:06:11.494746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:103432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.969 [2024-12-11 10:06:11.494753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.969 [2024-12-11 10:06:11.494762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:103440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.969 [2024-12-11 10:06:11.494769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.969 [2024-12-11 10:06:11.494777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:103448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.969 [2024-12-11 10:06:11.494783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.969 [2024-12-11 10:06:11.494791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:103456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.969 [2024-12-11 10:06:11.494798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.969 [2024-12-11 10:06:11.494806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:103464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.969 [2024-12-11 10:06:11.494814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.969 [2024-12-11 10:06:11.494822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:103472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.969 [2024-12-11 10:06:11.494828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.969 [2024-12-11 10:06:11.494836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:103480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.969 [2024-12-11 10:06:11.494842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.969 [2024-12-11 10:06:11.494850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:103488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.969 [2024-12-11 10:06:11.494857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.969 [2024-12-11 10:06:11.494864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:103496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.969 [2024-12-11 10:06:11.494873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.969 [2024-12-11 10:06:11.494881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:103504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.969 [2024-12-11 10:06:11.494887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.969 [2024-12-11 10:06:11.494896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:103512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.969 [2024-12-11 10:06:11.494902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.969 [2024-12-11 10:06:11.494911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.969 [2024-12-11 10:06:11.494918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.969 [2024-12-11 10:06:11.494926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.969 [2024-12-11 10:06:11.494933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.969 [2024-12-11 10:06:11.494941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:103536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.969 [2024-12-11 10:06:11.494948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.969 [2024-12-11 10:06:11.494956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:103544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.969 [2024-12-11 10:06:11.494963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.969 [2024-12-11 10:06:11.494971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:103552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.969 [2024-12-11 10:06:11.494978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.969 [2024-12-11 10:06:11.494986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:103560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.969 [2024-12-11 10:06:11.494993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.969 [2024-12-11 10:06:11.495001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.969 [2024-12-11 10:06:11.495007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.969 [2024-12-11 10:06:11.495015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:103576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.969 [2024-12-11 10:06:11.495022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.969 [2024-12-11 10:06:11.495030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:103584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.969 [2024-12-11 10:06:11.495036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.969 [2024-12-11 10:06:11.495044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:103592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.969 [2024-12-11 10:06:11.495050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.969 [2024-12-11 10:06:11.495060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:103600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.969 [2024-12-11 10:06:11.495067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.969 [2024-12-11 10:06:11.495075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:103608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.969 [2024-12-11 10:06:11.495082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.969 [2024-12-11 10:06:11.495090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:103616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.969 [2024-12-11 10:06:11.495096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:01.969 [2024-12-11 10:06:11.495104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.969 [2024-12-11 10:06:11.495110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.969 [2024-12-11 10:06:11.495119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:103632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.970 [2024-12-11 10:06:11.495125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.970 [2024-12-11 10:06:11.495133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:103640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.970 [2024-12-11 10:06:11.495139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.970 [2024-12-11 10:06:11.495147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:103648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.970 [2024-12-11 10:06:11.495154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.970 [2024-12-11 10:06:11.495162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:103656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.970 [2024-12-11 10:06:11.495169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.970 [2024-12-11 10:06:11.495177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:103664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.970 [2024-12-11 10:06:11.495183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.970 [2024-12-11 10:06:11.495191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:103672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.970 [2024-12-11 10:06:11.495197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.970 [2024-12-11 10:06:11.495205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:103680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.970 [2024-12-11 10:06:11.495212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.970 [2024-12-11 10:06:11.495324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:103688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.970 [2024-12-11 10:06:11.495332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.970 [2024-12-11 10:06:11.495340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:103696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.970 [2024-12-11 10:06:11.495349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.970 [2024-12-11 10:06:11.495357] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:103704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.970 [2024-12-11 10:06:11.495363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.970 [2024-12-11 10:06:11.495371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:103712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.970 [2024-12-11 10:06:11.495378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.970 [2024-12-11 10:06:11.495386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:103720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.970 [2024-12-11 10:06:11.495394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.970 [2024-12-11 10:06:11.495402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:103728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.970 [2024-12-11 10:06:11.495409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.970 [2024-12-11 10:06:11.495417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:103736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.970 [2024-12-11 10:06:11.495424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.970 [2024-12-11 10:06:11.495432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:103744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.970 [2024-12-11 10:06:11.495438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.970 [2024-12-11 10:06:11.495446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:103752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.970 [2024-12-11 10:06:11.495453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.970 [2024-12-11 10:06:11.495461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:103760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.970 [2024-12-11 10:06:11.495467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.970 [2024-12-11 10:06:11.495475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:103768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.970 [2024-12-11 10:06:11.495482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.970 [2024-12-11 10:06:11.495490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:103776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.970 [2024-12-11 10:06:11.495497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.970 [2024-12-11 10:06:11.495504] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:103784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.970 [2024-12-11 10:06:11.495511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.970 [2024-12-11 10:06:11.495519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:103792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.970 [2024-12-11 10:06:11.495525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.970 [2024-12-11 10:06:11.495537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:103800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.970 [2024-12-11 10:06:11.495544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.970 [2024-12-11 10:06:11.495552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:103808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.970 [2024-12-11 10:06:11.495558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.970 [2024-12-11 10:06:11.495567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:103816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.970 [2024-12-11 10:06:11.495573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.970 [2024-12-11 10:06:11.495581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:103824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.970 [2024-12-11 10:06:11.495589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.970 [2024-12-11 10:06:11.495597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:103832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.970 [2024-12-11 10:06:11.495604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.970 [2024-12-11 10:06:11.495611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:103840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.970 [2024-12-11 10:06:11.495618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.970 [2024-12-11 10:06:11.495625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:103848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.970 [2024-12-11 10:06:11.495634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.970 [2024-12-11 10:06:11.495643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:103856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.970 [2024-12-11 10:06:11.495649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.970 [2024-12-11 10:06:11.495657] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:103864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.970 [2024-12-11 10:06:11.495664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.970 [2024-12-11 10:06:11.495672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:103872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.970 [2024-12-11 10:06:11.495678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.970 [2024-12-11 10:06:11.495686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:103880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.970 [2024-12-11 10:06:11.495692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.970 [2024-12-11 10:06:11.495700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:103888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.970 [2024-12-11 10:06:11.495706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.970 [2024-12-11 10:06:11.495714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:103896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.970 [2024-12-11 10:06:11.495722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.970 [2024-12-11 10:06:11.495731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:103904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.970 [2024-12-11 10:06:11.495737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.970 [2024-12-11 10:06:11.495745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:103912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.970 [2024-12-11 10:06:11.495752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.970 [2024-12-11 10:06:11.495760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:103920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.970 [2024-12-11 10:06:11.495766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.970 [2024-12-11 10:06:11.495775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:103928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.970 [2024-12-11 10:06:11.495781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.970 [2024-12-11 10:06:11.495789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:103936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.970 [2024-12-11 10:06:11.495795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.971 [2024-12-11 10:06:11.495803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 
lba:103944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.971 [2024-12-11 10:06:11.495810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.971 [2024-12-11 10:06:11.495818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:103952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.971 [2024-12-11 10:06:11.495824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.971 [2024-12-11 10:06:11.495832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:103960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.971 [2024-12-11 10:06:11.495838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.971 [2024-12-11 10:06:11.495846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:103968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.971 [2024-12-11 10:06:11.495852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.971 [2024-12-11 10:06:11.495861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:103136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.971 [2024-12-11 10:06:11.495868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.971 [2024-12-11 10:06:11.495876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:103976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.971 [2024-12-11 10:06:11.495882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.971 [2024-12-11 10:06:11.495890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.971 [2024-12-11 10:06:11.495897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.971 [2024-12-11 10:06:11.495904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:103992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.971 [2024-12-11 10:06:11.495912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.971 [2024-12-11 10:06:11.495920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:104000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.971 [2024-12-11 10:06:11.495927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.971 [2024-12-11 10:06:11.495935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:104008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.971 [2024-12-11 10:06:11.495941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.971 [2024-12-11 10:06:11.495949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:104016 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:01.971 [2024-12-11 10:06:11.495956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.971 [2024-12-11 10:06:11.495966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:104024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.971 [2024-12-11 10:06:11.495972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.971 [2024-12-11 10:06:11.495981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:104032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.971 [2024-12-11 10:06:11.495987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.971 [2024-12-11 10:06:11.495995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:104040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.971 [2024-12-11 10:06:11.496001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.971 [2024-12-11 10:06:11.496009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:104048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.971 [2024-12-11 10:06:11.496016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.971 [2024-12-11 10:06:11.496023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:104056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.971 [2024-12-11 10:06:11.496029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.971 [2024-12-11 10:06:11.496037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:104064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.971 [2024-12-11 10:06:11.496044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.971 [2024-12-11 10:06:11.496052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:104072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.971 [2024-12-11 10:06:11.496058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.971 [2024-12-11 10:06:11.496066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:104080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.971 [2024-12-11 10:06:11.496073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.971 [2024-12-11 10:06:11.496081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.971 [2024-12-11 10:06:11.496087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.971 [2024-12-11 10:06:11.496096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:104096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.971 [2024-12-11 
10:06:11.496103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.971 [2024-12-11 10:06:11.496110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:104104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.971 [2024-12-11 10:06:11.496117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.971 [2024-12-11 10:06:11.496125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:104112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.971 [2024-12-11 10:06:11.496131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.971 [2024-12-11 10:06:11.496139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:104120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.971 [2024-12-11 10:06:11.496145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.971 [2024-12-11 10:06:11.496153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.971 [2024-12-11 10:06:11.496159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.971 [2024-12-11 10:06:11.496167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:104136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.971 [2024-12-11 10:06:11.496174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.971 [2024-12-11 10:06:11.496181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:104144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:01.971 [2024-12-11 10:06:11.496188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.971 [2024-12-11 10:06:11.496197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:103144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.971 [2024-12-11 10:06:11.496204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.971 [2024-12-11 10:06:11.496212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:103152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.971 [2024-12-11 10:06:11.496223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.971 [2024-12-11 10:06:11.496231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:103160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.971 [2024-12-11 10:06:11.496238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.971 [2024-12-11 10:06:11.496246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:103168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.971 [2024-12-11 10:06:11.496253] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.971 [2024-12-11 10:06:11.496261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:103176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.971 [2024-12-11 10:06:11.496267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.971 [2024-12-11 10:06:11.496276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:103184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.971 [2024-12-11 10:06:11.496284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.971 [2024-12-11 10:06:11.496292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:103192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.971 [2024-12-11 10:06:11.496299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.971 [2024-12-11 10:06:11.496307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:103200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.971 [2024-12-11 10:06:11.496313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.971 [2024-12-11 10:06:11.496321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:103208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.971 [2024-12-11 10:06:11.496328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.971 [2024-12-11 10:06:11.496335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:103216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.971 [2024-12-11 10:06:11.496342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.971 [2024-12-11 10:06:11.496350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:103224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.971 [2024-12-11 10:06:11.496356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.971 [2024-12-11 10:06:11.496364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:103232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.971 [2024-12-11 10:06:11.496371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.971 [2024-12-11 10:06:11.496379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:103240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.971 [2024-12-11 10:06:11.496386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.972 [2024-12-11 10:06:11.496394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:103248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.972 [2024-12-11 10:06:11.496400] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.972 [2024-12-11 10:06:11.496408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17358d0 is same with the state(6) to be set 00:28:01.972 [2024-12-11 10:06:11.496416] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:01.972 [2024-12-11 10:06:11.496422] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:01.972 [2024-12-11 10:06:11.496428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103256 len:8 PRP1 0x0 PRP2 0x0 00:28:01.972 [2024-12-11 10:06:11.496436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.972 [2024-12-11 10:06:11.499227] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:01.972 [2024-12-11 10:06:11.499280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:01.972 [2024-12-11 10:06:11.499765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.972 [2024-12-11 10:06:11.499781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:01.972 [2024-12-11 10:06:11.499794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:01.972 [2024-12-11 10:06:11.499972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:01.972 [2024-12-11 10:06:11.500145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:01.972 [2024-12-11 10:06:11.500153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:01.972 [2024-12-11 10:06:11.500161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:01.972 [2024-12-11 10:06:11.500169] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
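(Editor's note: errno 111 in these `posix_sock_create` failures is ECONNREFUSED. The `kill -9 243072` earlier in the run took the NVMe-oF target process down, so the in-flight I/O above was aborted with "SQ DELETION", and every reconnect the host driver attempts against 10.0.0.2:4420 is refused, repeating the disconnect / reconnect / "Resetting controller failed" cycle below. A hypothetical bash probe, not part of the test, that reproduces the same refusal:)

```bash
# With the target SIGKILLed, nothing listens on the port, so connect()
# fails with ECONNREFUSED (errno 111), matching the log above.
if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
    echo "connect() to 10.0.0.2:4420 refused -- target is down"
fi
```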
00:28:01.972 [2024-12-11 10:06:11.512553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:01.972 [2024-12-11 10:06:11.512937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.972 [2024-12-11 10:06:11.512956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:01.972 [2024-12-11 10:06:11.512964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:01.972 [2024-12-11 10:06:11.513137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:01.972 [2024-12-11 10:06:11.513318] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:01.972 [2024-12-11 10:06:11.513327] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:01.972 [2024-12-11 10:06:11.513334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:01.972 [2024-12-11 10:06:11.513341] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:01.972 [2024-12-11 10:06:11.525486] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:01.972 [2024-12-11 10:06:11.525859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.972 [2024-12-11 10:06:11.525876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:01.972 [2024-12-11 10:06:11.525884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:01.972 [2024-12-11 10:06:11.526052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:01.972 [2024-12-11 10:06:11.526227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:01.972 [2024-12-11 10:06:11.526236] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:01.972 [2024-12-11 10:06:11.526243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:01.972 [2024-12-11 10:06:11.526250] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:01.972 [2024-12-11 10:06:11.538546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:01.972 [2024-12-11 10:06:11.539001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.972 [2024-12-11 10:06:11.539018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:01.972 [2024-12-11 10:06:11.539026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:01.972 [2024-12-11 10:06:11.539200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:01.972 [2024-12-11 10:06:11.539379] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:01.972 [2024-12-11 10:06:11.539391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:01.972 [2024-12-11 10:06:11.539398] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:01.972 [2024-12-11 10:06:11.539405] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:02.232 [2024-12-11 10:06:11.551540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.232 [2024-12-11 10:06:11.551931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.232 [2024-12-11 10:06:11.551948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:02.232 [2024-12-11 10:06:11.551955] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:02.232 [2024-12-11 10:06:11.552123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:02.232 [2024-12-11 10:06:11.552301] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.232 [2024-12-11 10:06:11.552310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.232 [2024-12-11 10:06:11.552316] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.232 [2024-12-11 10:06:11.552322] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.232 [2024-12-11 10:06:11.564505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.232 [2024-12-11 10:06:11.564866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.232 [2024-12-11 10:06:11.564912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:02.232 [2024-12-11 10:06:11.564936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:02.232 [2024-12-11 10:06:11.565423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:02.232 [2024-12-11 10:06:11.565593] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.232 [2024-12-11 10:06:11.565602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.232 [2024-12-11 10:06:11.565608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.232 [2024-12-11 10:06:11.565614] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:02.232 [2024-12-11 10:06:11.577361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.232 [2024-12-11 10:06:11.577722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.232 [2024-12-11 10:06:11.577739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:02.232 [2024-12-11 10:06:11.577746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:02.232 [2024-12-11 10:06:11.577914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:02.232 [2024-12-11 10:06:11.578086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.232 [2024-12-11 10:06:11.578094] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.232 [2024-12-11 10:06:11.578101] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.232 [2024-12-11 10:06:11.578111] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.232 [2024-12-11 10:06:11.590391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.232 [2024-12-11 10:06:11.590805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.232 [2024-12-11 10:06:11.590823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:02.232 [2024-12-11 10:06:11.590830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:02.232 [2024-12-11 10:06:11.591003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:02.232 [2024-12-11 10:06:11.591210] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.232 [2024-12-11 10:06:11.591224] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.232 [2024-12-11 10:06:11.591232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.232 [2024-12-11 10:06:11.591238] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:02.232 [2024-12-11 10:06:11.603280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.232 [2024-12-11 10:06:11.603639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.232 [2024-12-11 10:06:11.603656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:02.232 [2024-12-11 10:06:11.603663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:02.232 [2024-12-11 10:06:11.603830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:02.232 [2024-12-11 10:06:11.603998] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.232 [2024-12-11 10:06:11.604006] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.232 [2024-12-11 10:06:11.604012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.232 [2024-12-11 10:06:11.604018] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.233 [2024-12-11 10:06:11.616095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:02.233 [2024-12-11 10:06:11.616461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.233 [2024-12-11 10:06:11.616478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420
00:28:02.233 [2024-12-11 10:06:11.616485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set
00:28:02.233 [2024-12-11 10:06:11.616653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor
00:28:02.233 [2024-12-11 10:06:11.616820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:02.233 [2024-12-11 10:06:11.616828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:02.233 [2024-12-11 10:06:11.616834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:02.233 [2024-12-11 10:06:11.616840] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:02.233 [2024-12-11 10:06:11.628889] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:02.233 [2024-12-11 10:06:11.629310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.233 [2024-12-11 10:06:11.629327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420
00:28:02.233 [2024-12-11 10:06:11.629334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set
00:28:02.233 [2024-12-11 10:06:11.629502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor
00:28:02.233 [2024-12-11 10:06:11.629670] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:02.233 [2024-12-11 10:06:11.629679] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:02.233 [2024-12-11 10:06:11.629685] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:02.233 [2024-12-11 10:06:11.629691] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:02.233 [2024-12-11 10:06:11.642014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:02.233 [2024-12-11 10:06:11.642383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.233 [2024-12-11 10:06:11.642401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420
00:28:02.233 [2024-12-11 10:06:11.642408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set
00:28:02.233 [2024-12-11 10:06:11.642588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor
00:28:02.233 [2024-12-11 10:06:11.642758] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:02.233 [2024-12-11 10:06:11.642766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:02.233 [2024-12-11 10:06:11.642772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:02.233 [2024-12-11 10:06:11.642778] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:02.233 [2024-12-11 10:06:11.654891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:02.233 [2024-12-11 10:06:11.655305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.233 [2024-12-11 10:06:11.655323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420
00:28:02.233 [2024-12-11 10:06:11.655330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set
00:28:02.233 [2024-12-11 10:06:11.655510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor
00:28:02.233 [2024-12-11 10:06:11.655678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:02.233 [2024-12-11 10:06:11.655686] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:02.233 [2024-12-11 10:06:11.655692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:02.233 [2024-12-11 10:06:11.655699] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:02.233 [2024-12-11 10:06:11.667713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:02.233 [2024-12-11 10:06:11.668090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.233 [2024-12-11 10:06:11.668107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420
00:28:02.233 [2024-12-11 10:06:11.668114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set
00:28:02.233 [2024-12-11 10:06:11.668290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor
00:28:02.233 [2024-12-11 10:06:11.668457] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:02.233 [2024-12-11 10:06:11.668465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:02.233 [2024-12-11 10:06:11.668471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:02.233 [2024-12-11 10:06:11.668478] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:02.233 [2024-12-11 10:06:11.680472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:02.233 [2024-12-11 10:06:11.680802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.233 [2024-12-11 10:06:11.680818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420
00:28:02.233 [2024-12-11 10:06:11.680825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set
00:28:02.233 [2024-12-11 10:06:11.680993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor
00:28:02.233 [2024-12-11 10:06:11.681160] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:02.233 [2024-12-11 10:06:11.681168] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:02.233 [2024-12-11 10:06:11.681174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:02.233 [2024-12-11 10:06:11.681181] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:02.233 [2024-12-11 10:06:11.693331] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:02.233 [2024-12-11 10:06:11.693754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.233 [2024-12-11 10:06:11.693771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420
00:28:02.233 [2024-12-11 10:06:11.693778] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set
00:28:02.233 [2024-12-11 10:06:11.693945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor
00:28:02.233 [2024-12-11 10:06:11.694113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:02.233 [2024-12-11 10:06:11.694121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:02.233 [2024-12-11 10:06:11.694127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:02.233 [2024-12-11 10:06:11.694133] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:02.233 [2024-12-11 10:06:11.706130] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:02.233 [2024-12-11 10:06:11.706483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.233 [2024-12-11 10:06:11.706499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420
00:28:02.233 [2024-12-11 10:06:11.706506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set
00:28:02.233 [2024-12-11 10:06:11.706674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor
00:28:02.233 [2024-12-11 10:06:11.706841] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:02.233 [2024-12-11 10:06:11.706852] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:02.233 [2024-12-11 10:06:11.706858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:02.233 [2024-12-11 10:06:11.706864] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:02.233 [2024-12-11 10:06:11.718962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:02.233 [2024-12-11 10:06:11.719379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.233 [2024-12-11 10:06:11.719395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420
00:28:02.233 [2024-12-11 10:06:11.719403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set
00:28:02.233 [2024-12-11 10:06:11.719570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor
00:28:02.233 [2024-12-11 10:06:11.719743] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:02.233 [2024-12-11 10:06:11.719751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:02.233 [2024-12-11 10:06:11.719757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:02.233 [2024-12-11 10:06:11.719763] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:02.233 [2024-12-11 10:06:11.731871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:02.233 [2024-12-11 10:06:11.732293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.233 [2024-12-11 10:06:11.732310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420
00:28:02.233 [2024-12-11 10:06:11.732317] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set
00:28:02.233 [2024-12-11 10:06:11.732485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor
00:28:02.233 [2024-12-11 10:06:11.732652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:02.233 [2024-12-11 10:06:11.732660] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:02.234 [2024-12-11 10:06:11.732666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:02.234 [2024-12-11 10:06:11.732672] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:02.234 [2024-12-11 10:06:11.744602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:02.234 [2024-12-11 10:06:11.745040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.234 [2024-12-11 10:06:11.745057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420
00:28:02.234 [2024-12-11 10:06:11.745064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set
00:28:02.234 [2024-12-11 10:06:11.745243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor
00:28:02.234 [2024-12-11 10:06:11.745416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:02.234 [2024-12-11 10:06:11.745424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:02.234 [2024-12-11 10:06:11.745431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:02.234 [2024-12-11 10:06:11.745440] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:02.234 [2024-12-11 10:06:11.757705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:02.234 [2024-12-11 10:06:11.758111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.234 [2024-12-11 10:06:11.758127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420
00:28:02.234 [2024-12-11 10:06:11.758135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set
00:28:02.234 [2024-12-11 10:06:11.758312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor
00:28:02.234 [2024-12-11 10:06:11.758486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:02.234 [2024-12-11 10:06:11.758495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:02.234 [2024-12-11 10:06:11.758501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:02.234 [2024-12-11 10:06:11.758508] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:02.234 [2024-12-11 10:06:11.770714] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:02.234 [2024-12-11 10:06:11.771139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.234 [2024-12-11 10:06:11.771155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420
00:28:02.234 [2024-12-11 10:06:11.771162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set
00:28:02.234 [2024-12-11 10:06:11.771341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor
00:28:02.234 [2024-12-11 10:06:11.771514] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:02.234 [2024-12-11 10:06:11.771522] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:02.234 [2024-12-11 10:06:11.771529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:02.234 [2024-12-11 10:06:11.771535] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:02.234 [2024-12-11 10:06:11.783730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:02.234 [2024-12-11 10:06:11.784162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.234 [2024-12-11 10:06:11.784178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420
00:28:02.234 [2024-12-11 10:06:11.784185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set
00:28:02.234 [2024-12-11 10:06:11.784363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor
00:28:02.234 [2024-12-11 10:06:11.784536] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:02.234 [2024-12-11 10:06:11.784544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:02.234 [2024-12-11 10:06:11.784550] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:02.234 [2024-12-11 10:06:11.784557] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:02.234 [2024-12-11 10:06:11.796582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:02.234 [2024-12-11 10:06:11.796903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.234 [2024-12-11 10:06:11.796918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420
00:28:02.234 [2024-12-11 10:06:11.796925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set
00:28:02.234 [2024-12-11 10:06:11.797084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor
00:28:02.234 [2024-12-11 10:06:11.797265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:02.234 [2024-12-11 10:06:11.797273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:02.234 [2024-12-11 10:06:11.797280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:02.234 [2024-12-11 10:06:11.797286] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:02.494 [2024-12-11 10:06:11.809468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:02.494 [2024-12-11 10:06:11.809890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.494 [2024-12-11 10:06:11.809908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420
00:28:02.494 [2024-12-11 10:06:11.809916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set
00:28:02.494 [2024-12-11 10:06:11.810085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor
00:28:02.494 [2024-12-11 10:06:11.810280] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:02.494 [2024-12-11 10:06:11.810290] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:02.494 [2024-12-11 10:06:11.810297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:02.494 [2024-12-11 10:06:11.810307] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:02.494 [2024-12-11 10:06:11.822283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:02.494 [2024-12-11 10:06:11.822682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.494 [2024-12-11 10:06:11.822698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420
00:28:02.494 [2024-12-11 10:06:11.822705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set
00:28:02.494 [2024-12-11 10:06:11.822864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor
00:28:02.494 [2024-12-11 10:06:11.823023] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:02.494 [2024-12-11 10:06:11.823030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:02.494 [2024-12-11 10:06:11.823036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:02.494 [2024-12-11 10:06:11.823043] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:02.494 [2024-12-11 10:06:11.835087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:02.494 [2024-12-11 10:06:11.835503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.494 [2024-12-11 10:06:11.835520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420
00:28:02.494 [2024-12-11 10:06:11.835527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set
00:28:02.494 [2024-12-11 10:06:11.835701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor
00:28:02.494 [2024-12-11 10:06:11.835870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:02.494 [2024-12-11 10:06:11.835878] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:02.494 [2024-12-11 10:06:11.835884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:02.494 [2024-12-11 10:06:11.835890] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:02.494 [2024-12-11 10:06:11.847831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:02.494 [2024-12-11 10:06:11.848226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.494 [2024-12-11 10:06:11.848243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420
00:28:02.495 [2024-12-11 10:06:11.848249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set
00:28:02.495 [2024-12-11 10:06:11.848408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor
00:28:02.495 [2024-12-11 10:06:11.848567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:02.495 [2024-12-11 10:06:11.848575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:02.495 [2024-12-11 10:06:11.848580] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:02.495 [2024-12-11 10:06:11.848586] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:02.495 [2024-12-11 10:06:11.860577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:02.495 [2024-12-11 10:06:11.860992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.495 [2024-12-11 10:06:11.861008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420
00:28:02.495 [2024-12-11 10:06:11.861015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set
00:28:02.495 [2024-12-11 10:06:11.861184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor
00:28:02.495 [2024-12-11 10:06:11.861361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:02.495 [2024-12-11 10:06:11.861370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:02.495 [2024-12-11 10:06:11.861376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:02.495 [2024-12-11 10:06:11.861382] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:02.495 [2024-12-11 10:06:11.873415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:02.495 [2024-12-11 10:06:11.873814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.495 [2024-12-11 10:06:11.873859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420
00:28:02.495 [2024-12-11 10:06:11.873883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set
00:28:02.495 [2024-12-11 10:06:11.874483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor
00:28:02.495 [2024-12-11 10:06:11.875040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:02.495 [2024-12-11 10:06:11.875050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:02.495 [2024-12-11 10:06:11.875057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:02.495 [2024-12-11 10:06:11.875063] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:02.495 [2024-12-11 10:06:11.886276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:02.495 [2024-12-11 10:06:11.886694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.495 [2024-12-11 10:06:11.886710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420
00:28:02.495 [2024-12-11 10:06:11.886717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set
00:28:02.495 [2024-12-11 10:06:11.886885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor
00:28:02.495 [2024-12-11 10:06:11.887053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:02.495 [2024-12-11 10:06:11.887061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:02.495 [2024-12-11 10:06:11.887067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:02.495 [2024-12-11 10:06:11.887074] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:02.495 [2024-12-11 10:06:11.899004] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:02.495 [2024-12-11 10:06:11.899425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.495 [2024-12-11 10:06:11.899441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420
00:28:02.495 [2024-12-11 10:06:11.899448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set
00:28:02.495 [2024-12-11 10:06:11.899616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor
00:28:02.495 [2024-12-11 10:06:11.899784] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:02.495 [2024-12-11 10:06:11.899792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:02.495 [2024-12-11 10:06:11.899798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:02.495 [2024-12-11 10:06:11.899805] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:02.495 [2024-12-11 10:06:11.911840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:02.495 [2024-12-11 10:06:11.912265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.495 [2024-12-11 10:06:11.912280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420
00:28:02.495 [2024-12-11 10:06:11.912287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set
00:28:02.495 [2024-12-11 10:06:11.912446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor
00:28:02.495 [2024-12-11 10:06:11.912605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:02.495 [2024-12-11 10:06:11.912613] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:02.495 [2024-12-11 10:06:11.912618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:02.495 [2024-12-11 10:06:11.912627] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:02.495 [2024-12-11 10:06:11.924671] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:02.495 [2024-12-11 10:06:11.925106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.495 [2024-12-11 10:06:11.925150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420
00:28:02.495 [2024-12-11 10:06:11.925174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set
00:28:02.495 [2024-12-11 10:06:11.925772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor
00:28:02.495 [2024-12-11 10:06:11.926209] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:02.495 [2024-12-11 10:06:11.926221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:02.495 [2024-12-11 10:06:11.926227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:02.495 [2024-12-11 10:06:11.926234] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:02.495 9758.33 IOPS, 38.12 MiB/s [2024-12-11T09:06:12.070Z] [2024-12-11 10:06:11.938717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:02.495 [2024-12-11 10:06:11.939060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.495 [2024-12-11 10:06:11.939076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420
00:28:02.495 [2024-12-11 10:06:11.939083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set
00:28:02.495 [2024-12-11 10:06:11.939257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor
00:28:02.495 [2024-12-11 10:06:11.939425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:02.495 [2024-12-11 10:06:11.939433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:02.495 [2024-12-11 10:06:11.939439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:02.495 [2024-12-11 10:06:11.939445] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:02.495 [2024-12-11 10:06:11.951494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:02.495 [2024-12-11 10:06:11.951831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.495 [2024-12-11 10:06:11.951847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420
00:28:02.495 [2024-12-11 10:06:11.951854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set
00:28:02.495 [2024-12-11 10:06:11.952022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor
00:28:02.495 [2024-12-11 10:06:11.952190] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:02.495 [2024-12-11 10:06:11.952198] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:02.495 [2024-12-11 10:06:11.952204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:02.495 [2024-12-11 10:06:11.952210] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:02.495 [2024-12-11 10:06:11.964386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:02.495 [2024-12-11 10:06:11.964810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.495 [2024-12-11 10:06:11.964825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420
00:28:02.495 [2024-12-11 10:06:11.964832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set
00:28:02.496 [2024-12-11 10:06:11.965000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor
00:28:02.496 [2024-12-11 10:06:11.965167] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:02.496 [2024-12-11 10:06:11.965175] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:02.496 [2024-12-11 10:06:11.965181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:02.496 [2024-12-11 10:06:11.965188] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:02.496 [2024-12-11 10:06:11.977206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:02.496 [2024-12-11 10:06:11.977539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.496 [2024-12-11 10:06:11.977555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420
00:28:02.496 [2024-12-11 10:06:11.977562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set
00:28:02.496 [2024-12-11 10:06:11.977729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor
00:28:02.496 [2024-12-11 10:06:11.977896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:02.496 [2024-12-11 10:06:11.977904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:02.496 [2024-12-11 10:06:11.977911] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:02.496 [2024-12-11 10:06:11.977917] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:02.496 [2024-12-11 10:06:11.990014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:02.496 [2024-12-11 10:06:11.990415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.496 [2024-12-11 10:06:11.990432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420
00:28:02.496 [2024-12-11 10:06:11.990439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set
00:28:02.496 [2024-12-11 10:06:11.990607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor
00:28:02.496 [2024-12-11 10:06:11.990775] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:02.496 [2024-12-11 10:06:11.990784] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:02.496 [2024-12-11 10:06:11.990790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:02.496 [2024-12-11 10:06:11.990796] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:02.496 [2024-12-11 10:06:12.002758] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:02.496 [2024-12-11 10:06:12.003197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.496 [2024-12-11 10:06:12.003213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420
00:28:02.496 [2024-12-11 10:06:12.003231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set
00:28:02.496 [2024-12-11 10:06:12.003420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor
00:28:02.496 [2024-12-11 10:06:12.003592] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:02.496 [2024-12-11 10:06:12.003600] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:02.496 [2024-12-11 10:06:12.003607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:02.496 [2024-12-11 10:06:12.003614] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:02.496 [2024-12-11 10:06:12.015737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:02.496 [2024-12-11 10:06:12.016161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.496 [2024-12-11 10:06:12.016178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420
00:28:02.496 [2024-12-11 10:06:12.016186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set
00:28:02.496 [2024-12-11 10:06:12.016365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor
00:28:02.496 [2024-12-11 10:06:12.016539] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:02.496 [2024-12-11 10:06:12.016547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:02.496 [2024-12-11 10:06:12.016554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:02.496 [2024-12-11 10:06:12.016560] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:02.496 [2024-12-11 10:06:12.028748] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:02.496 [2024-12-11 10:06:12.029157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.496 [2024-12-11 10:06:12.029200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420
00:28:02.496 [2024-12-11 10:06:12.029238] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set
00:28:02.496 [2024-12-11 10:06:12.029821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor
00:28:02.496 [2024-12-11 10:06:12.030353] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:02.496 [2024-12-11 10:06:12.030361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:02.496 [2024-12-11 10:06:12.030368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:02.496 [2024-12-11 10:06:12.030374] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:02.496 [2024-12-11 10:06:12.041669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:02.496 [2024-12-11 10:06:12.042026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.496 [2024-12-11 10:06:12.042042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420
00:28:02.496 [2024-12-11 10:06:12.042049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set
00:28:02.496 [2024-12-11 10:06:12.042223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor
00:28:02.496 [2024-12-11 10:06:12.042395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:02.496 [2024-12-11 10:06:12.042403] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:02.496 [2024-12-11 10:06:12.042409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:02.496 [2024-12-11 10:06:12.042415] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:02.496 [2024-12-11 10:06:12.054506] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:02.496 [2024-12-11 10:06:12.054950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.496 [2024-12-11 10:06:12.054993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420
00:28:02.496 [2024-12-11 10:06:12.055016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set
00:28:02.496 [2024-12-11 10:06:12.055494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor
00:28:02.496 [2024-12-11 10:06:12.055668] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:02.496 [2024-12-11 10:06:12.055677] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:02.496 [2024-12-11 10:06:12.055683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:02.496 [2024-12-11 10:06:12.055690] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:02.756 [2024-12-11 10:06:12.067655] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:02.756 [2024-12-11 10:06:12.068108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.756 [2024-12-11 10:06:12.068155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420
00:28:02.756 [2024-12-11 10:06:12.068180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set
00:28:02.756 [2024-12-11 10:06:12.068632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor
00:28:02.756 [2024-12-11 10:06:12.068807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:02.757 [2024-12-11 10:06:12.068815] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:02.757 [2024-12-11 10:06:12.068822] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:02.757 [2024-12-11 10:06:12.068828] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:02.757 [2024-12-11 10:06:12.080433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:02.757 [2024-12-11 10:06:12.080853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.757 [2024-12-11 10:06:12.080883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420
00:28:02.757 [2024-12-11 10:06:12.080907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set
00:28:02.757 [2024-12-11 10:06:12.081482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor
00:28:02.757 [2024-12-11 10:06:12.081651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:02.757 [2024-12-11 10:06:12.081659] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:02.757 [2024-12-11 10:06:12.081666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:02.757 [2024-12-11 10:06:12.081676] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:02.757 [2024-12-11 10:06:12.093254] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:02.757 [2024-12-11 10:06:12.093691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.757 [2024-12-11 10:06:12.093707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420
00:28:02.757 [2024-12-11 10:06:12.093714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set
00:28:02.757 [2024-12-11 10:06:12.093882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor
00:28:02.757 [2024-12-11 10:06:12.094053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:02.757 [2024-12-11 10:06:12.094061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:02.757 [2024-12-11 10:06:12.094067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:02.757 [2024-12-11 10:06:12.094073] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:02.757 [2024-12-11 10:06:12.106198] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:02.757 [2024-12-11 10:06:12.106649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.757 [2024-12-11 10:06:12.106665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420
00:28:02.757 [2024-12-11 10:06:12.106672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set
00:28:02.757 [2024-12-11 10:06:12.106831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor
00:28:02.757 [2024-12-11 10:06:12.106989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:02.757 [2024-12-11 10:06:12.106997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:02.757 [2024-12-11 10:06:12.107003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:02.757 [2024-12-11 10:06:12.107008] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:02.757 [2024-12-11 10:06:12.118929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:02.757 [2024-12-11 10:06:12.119369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.757 [2024-12-11 10:06:12.119415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420
00:28:02.757 [2024-12-11 10:06:12.119439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set
00:28:02.757 [2024-12-11 10:06:12.119838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor
00:28:02.757 [2024-12-11 10:06:12.120007] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:02.757 [2024-12-11 10:06:12.120015] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:02.757 [2024-12-11 10:06:12.120021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:02.757 [2024-12-11 10:06:12.120027] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:02.757 [2024-12-11 10:06:12.131757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:02.757 [2024-12-11 10:06:12.132192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.757 [2024-12-11 10:06:12.132208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420
00:28:02.757 [2024-12-11 10:06:12.132215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set
00:28:02.757 [2024-12-11 10:06:12.132403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor
00:28:02.757 [2024-12-11 10:06:12.132570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:02.757 [2024-12-11 10:06:12.132578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:02.757 [2024-12-11 10:06:12.132585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:02.757 [2024-12-11 10:06:12.132591] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:02.757 [2024-12-11 10:06:12.144533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:02.757 [2024-12-11 10:06:12.144973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.757 [2024-12-11 10:06:12.144990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420
00:28:02.757 [2024-12-11 10:06:12.144997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set
00:28:02.757 [2024-12-11 10:06:12.145164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor
00:28:02.757 [2024-12-11 10:06:12.145343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:02.757 [2024-12-11 10:06:12.145352] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:02.757 [2024-12-11 10:06:12.145358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:02.757 [2024-12-11 10:06:12.145364] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:02.757 [2024-12-11 10:06:12.157341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.757 [2024-12-11 10:06:12.157762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.757 [2024-12-11 10:06:12.157777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:02.757 [2024-12-11 10:06:12.157785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:02.757 [2024-12-11 10:06:12.157952] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:02.757 [2024-12-11 10:06:12.158120] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.757 [2024-12-11 10:06:12.158128] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.757 [2024-12-11 10:06:12.158134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.757 [2024-12-11 10:06:12.158140] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:02.757 [2024-12-11 10:06:12.170159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.757 [2024-12-11 10:06:12.170560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.757 [2024-12-11 10:06:12.170618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:02.757 [2024-12-11 10:06:12.170649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:02.757 [2024-12-11 10:06:12.171248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:02.757 [2024-12-11 10:06:12.171743] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.757 [2024-12-11 10:06:12.171752] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.757 [2024-12-11 10:06:12.171758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.757 [2024-12-11 10:06:12.171764] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.757 [2024-12-11 10:06:12.182950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.757 [2024-12-11 10:06:12.183378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.757 [2024-12-11 10:06:12.183423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:02.757 [2024-12-11 10:06:12.183447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:02.757 [2024-12-11 10:06:12.183943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:02.757 [2024-12-11 10:06:12.184111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.757 [2024-12-11 10:06:12.184119] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.757 [2024-12-11 10:06:12.184125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.757 [2024-12-11 10:06:12.184132] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:02.757 [2024-12-11 10:06:12.195831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.757 [2024-12-11 10:06:12.196275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.757 [2024-12-11 10:06:12.196320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:02.757 [2024-12-11 10:06:12.196344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:02.757 [2024-12-11 10:06:12.196820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:02.757 [2024-12-11 10:06:12.196988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.758 [2024-12-11 10:06:12.196996] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.758 [2024-12-11 10:06:12.197003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.758 [2024-12-11 10:06:12.197010] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.758 [2024-12-11 10:06:12.208773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.758 [2024-12-11 10:06:12.209187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.758 [2024-12-11 10:06:12.209204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:02.758 [2024-12-11 10:06:12.209211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:02.758 [2024-12-11 10:06:12.209406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:02.758 [2024-12-11 10:06:12.209583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.758 [2024-12-11 10:06:12.209591] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.758 [2024-12-11 10:06:12.209597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.758 [2024-12-11 10:06:12.209603] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:02.758 [2024-12-11 10:06:12.221552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.758 [2024-12-11 10:06:12.221997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.758 [2024-12-11 10:06:12.222014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:02.758 [2024-12-11 10:06:12.222021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:02.758 [2024-12-11 10:06:12.222189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:02.758 [2024-12-11 10:06:12.222380] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.758 [2024-12-11 10:06:12.222389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.758 [2024-12-11 10:06:12.222396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.758 [2024-12-11 10:06:12.222402] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.758 [2024-12-11 10:06:12.234435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.758 [2024-12-11 10:06:12.234862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.758 [2024-12-11 10:06:12.234907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:02.758 [2024-12-11 10:06:12.234931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:02.758 [2024-12-11 10:06:12.235527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:02.758 [2024-12-11 10:06:12.235938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.758 [2024-12-11 10:06:12.235946] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.758 [2024-12-11 10:06:12.235952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.758 [2024-12-11 10:06:12.235958] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:02.758 [2024-12-11 10:06:12.247415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.758 [2024-12-11 10:06:12.247833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.758 [2024-12-11 10:06:12.247850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:02.758 [2024-12-11 10:06:12.247858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:02.758 [2024-12-11 10:06:12.248026] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:02.758 [2024-12-11 10:06:12.248196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.758 [2024-12-11 10:06:12.248206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.758 [2024-12-11 10:06:12.248214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.758 [2024-12-11 10:06:12.248229] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.758 [2024-12-11 10:06:12.260311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.758 [2024-12-11 10:06:12.260710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.758 [2024-12-11 10:06:12.260727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:02.758 [2024-12-11 10:06:12.260735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:02.758 [2024-12-11 10:06:12.260908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:02.758 [2024-12-11 10:06:12.261080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.758 [2024-12-11 10:06:12.261089] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.758 [2024-12-11 10:06:12.261096] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.758 [2024-12-11 10:06:12.261103] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:02.758 [2024-12-11 10:06:12.273323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.758 [2024-12-11 10:06:12.273685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.758 [2024-12-11 10:06:12.273702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:02.758 [2024-12-11 10:06:12.273709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:02.758 [2024-12-11 10:06:12.273882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:02.758 [2024-12-11 10:06:12.274055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.758 [2024-12-11 10:06:12.274063] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.758 [2024-12-11 10:06:12.274069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.758 [2024-12-11 10:06:12.274076] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.758 [2024-12-11 10:06:12.286260] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.758 [2024-12-11 10:06:12.286622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.758 [2024-12-11 10:06:12.286666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:02.758 [2024-12-11 10:06:12.286690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:02.758 [2024-12-11 10:06:12.287148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:02.758 [2024-12-11 10:06:12.287329] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.758 [2024-12-11 10:06:12.287338] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.758 [2024-12-11 10:06:12.287344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.758 [2024-12-11 10:06:12.287350] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:02.758 [2024-12-11 10:06:12.299094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.758 [2024-12-11 10:06:12.299553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.758 [2024-12-11 10:06:12.299598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:02.758 [2024-12-11 10:06:12.299621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:02.758 [2024-12-11 10:06:12.300003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:02.758 [2024-12-11 10:06:12.300172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.758 [2024-12-11 10:06:12.300180] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.758 [2024-12-11 10:06:12.300186] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.758 [2024-12-11 10:06:12.300192] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:02.758 [2024-12-11 10:06:12.311826] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.758 [2024-12-11 10:06:12.312239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.758 [2024-12-11 10:06:12.312255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:02.758 [2024-12-11 10:06:12.312262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:02.758 [2024-12-11 10:06:12.312420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:02.758 [2024-12-11 10:06:12.312578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.758 [2024-12-11 10:06:12.312586] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.758 [2024-12-11 10:06:12.312592] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.758 [2024-12-11 10:06:12.312598] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:02.758 [2024-12-11 10:06:12.324620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:02.758 [2024-12-11 10:06:12.325061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.758 [2024-12-11 10:06:12.325080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:02.759 [2024-12-11 10:06:12.325087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:02.759 [2024-12-11 10:06:12.325282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:02.759 [2024-12-11 10:06:12.325457] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:02.759 [2024-12-11 10:06:12.325466] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:02.759 [2024-12-11 10:06:12.325473] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:02.759 [2024-12-11 10:06:12.325479] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.019 [2024-12-11 10:06:12.337647] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.019 [2024-12-11 10:06:12.338081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.019 [2024-12-11 10:06:12.338097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.019 [2024-12-11 10:06:12.338108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.019 [2024-12-11 10:06:12.338291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.019 [2024-12-11 10:06:12.338460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.019 [2024-12-11 10:06:12.338468] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.019 [2024-12-11 10:06:12.338474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.019 [2024-12-11 10:06:12.338480] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:03.019 [2024-12-11 10:06:12.350409] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.019 [2024-12-11 10:06:12.350845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.019 [2024-12-11 10:06:12.350890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.019 [2024-12-11 10:06:12.350914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.019 [2024-12-11 10:06:12.351512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.019 [2024-12-11 10:06:12.352099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.019 [2024-12-11 10:06:12.352125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.019 [2024-12-11 10:06:12.352149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.019 [2024-12-11 10:06:12.352156] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
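The "(9): Bad file descriptor" entry in each cycle is a knock-on effect of the failed connect: nvme_tcp_qpair_process_completions() tries to flush a qpair whose socket was never established, and by that point the fd has already been torn down. Errno 9 (EBADF) is simply what any I/O on a closed descriptor returns, as the minimal example below (unrelated to SPDK internals) shows.

/* Minimal illustration of errno 9 (EBADF), the "Bad file descriptor"
 * seen in the flush step above: any I/O on a closed fd fails this way.
 * Not SPDK code. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) < 0) {
        perror("pipe");
        return 1;
    }
    close(fds[1]);                 /* tear the fd down first */

    if (write(fds[1], "x", 1) < 0) {
        /* prints: write failed, errno = 9 (Bad file descriptor) */
        printf("write failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    return 0;
}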
00:28:03.019 [2024-12-11 10:06:12.363142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.019 [2024-12-11 10:06:12.363594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.019 [2024-12-11 10:06:12.363639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.019 [2024-12-11 10:06:12.363663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.019 [2024-12-11 10:06:12.364193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.019 [2024-12-11 10:06:12.364367] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.019 [2024-12-11 10:06:12.364376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.019 [2024-12-11 10:06:12.364382] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.019 [2024-12-11 10:06:12.364388] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:03.019 [2024-12-11 10:06:12.375920] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.019 [2024-12-11 10:06:12.376205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.019 [2024-12-11 10:06:12.376226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.019 [2024-12-11 10:06:12.376233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.019 [2024-12-11 10:06:12.376414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.019 [2024-12-11 10:06:12.376587] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.019 [2024-12-11 10:06:12.376598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.019 [2024-12-11 10:06:12.376604] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.019 [2024-12-11 10:06:12.376610] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.019 [2024-12-11 10:06:12.388642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.019 [2024-12-11 10:06:12.389074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.019 [2024-12-11 10:06:12.389118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.019 [2024-12-11 10:06:12.389142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.019 [2024-12-11 10:06:12.389722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.019 [2024-12-11 10:06:12.390112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.019 [2024-12-11 10:06:12.390129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.019 [2024-12-11 10:06:12.390144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.019 [2024-12-11 10:06:12.390157] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:03.019 [2024-12-11 10:06:12.403369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.019 [2024-12-11 10:06:12.403896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.019 [2024-12-11 10:06:12.403940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.019 [2024-12-11 10:06:12.403964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.019 [2024-12-11 10:06:12.404465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.019 [2024-12-11 10:06:12.404721] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.019 [2024-12-11 10:06:12.404732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.019 [2024-12-11 10:06:12.404741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.019 [2024-12-11 10:06:12.404750] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.019 [2024-12-11 10:06:12.416371] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.019 [2024-12-11 10:06:12.416799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.019 [2024-12-11 10:06:12.416835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.019 [2024-12-11 10:06:12.416860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.019 [2024-12-11 10:06:12.417393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.019 [2024-12-11 10:06:12.417562] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.019 [2024-12-11 10:06:12.417570] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.019 [2024-12-11 10:06:12.417577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.019 [2024-12-11 10:06:12.417586] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:03.019 [2024-12-11 10:06:12.429223] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.019 [2024-12-11 10:06:12.429575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.019 [2024-12-11 10:06:12.429592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.019 [2024-12-11 10:06:12.429599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.019 [2024-12-11 10:06:12.429767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.019 [2024-12-11 10:06:12.429936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.019 [2024-12-11 10:06:12.429945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.019 [2024-12-11 10:06:12.429950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.019 [2024-12-11 10:06:12.429957] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.019 [2024-12-11 10:06:12.442059] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.019 [2024-12-11 10:06:12.442470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.019 [2024-12-11 10:06:12.442487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.019 [2024-12-11 10:06:12.442495] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.019 [2024-12-11 10:06:12.442663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.020 [2024-12-11 10:06:12.442830] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.020 [2024-12-11 10:06:12.442838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.020 [2024-12-11 10:06:12.442844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.020 [2024-12-11 10:06:12.442851] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:03.020 [2024-12-11 10:06:12.454899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.020 [2024-12-11 10:06:12.455317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.020 [2024-12-11 10:06:12.455334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.020 [2024-12-11 10:06:12.455342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.020 [2024-12-11 10:06:12.455510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.020 [2024-12-11 10:06:12.455677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.020 [2024-12-11 10:06:12.455685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.020 [2024-12-11 10:06:12.455691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.020 [2024-12-11 10:06:12.455697] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.020 [2024-12-11 10:06:12.467667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.020 [2024-12-11 10:06:12.468091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.020 [2024-12-11 10:06:12.468106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.020 [2024-12-11 10:06:12.468113] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.020 [2024-12-11 10:06:12.468295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.020 [2024-12-11 10:06:12.468463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.020 [2024-12-11 10:06:12.468471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.020 [2024-12-11 10:06:12.468477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.020 [2024-12-11 10:06:12.468483] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:03.020 [2024-12-11 10:06:12.480465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.020 [2024-12-11 10:06:12.480804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.020 [2024-12-11 10:06:12.480820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.020 [2024-12-11 10:06:12.480827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.020 [2024-12-11 10:06:12.480986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.020 [2024-12-11 10:06:12.481145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.020 [2024-12-11 10:06:12.481152] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.020 [2024-12-11 10:06:12.481158] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.020 [2024-12-11 10:06:12.481164] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.020 [2024-12-11 10:06:12.493295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.020 [2024-12-11 10:06:12.493729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.020 [2024-12-11 10:06:12.493774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.020 [2024-12-11 10:06:12.493797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.020 [2024-12-11 10:06:12.494284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.020 [2024-12-11 10:06:12.494458] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.020 [2024-12-11 10:06:12.494466] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.020 [2024-12-11 10:06:12.494473] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.020 [2024-12-11 10:06:12.494479] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:03.020 [2024-12-11 10:06:12.506133] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.020 [2024-12-11 10:06:12.506566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.020 [2024-12-11 10:06:12.506583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.020 [2024-12-11 10:06:12.506594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.020 [2024-12-11 10:06:12.506762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.020 [2024-12-11 10:06:12.506931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.020 [2024-12-11 10:06:12.506939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.020 [2024-12-11 10:06:12.506945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.020 [2024-12-11 10:06:12.506951] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.020 [2024-12-11 10:06:12.518998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.020 [2024-12-11 10:06:12.519426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.020 [2024-12-11 10:06:12.519443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.020 [2024-12-11 10:06:12.519451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.020 [2024-12-11 10:06:12.519619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.020 [2024-12-11 10:06:12.519787] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.020 [2024-12-11 10:06:12.519796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.020 [2024-12-11 10:06:12.519802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.020 [2024-12-11 10:06:12.519808] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:03.020 [2024-12-11 10:06:12.532123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.020 [2024-12-11 10:06:12.532473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.020 [2024-12-11 10:06:12.532490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.020 [2024-12-11 10:06:12.532497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.020 [2024-12-11 10:06:12.532669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.020 [2024-12-11 10:06:12.532841] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.020 [2024-12-11 10:06:12.532850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.020 [2024-12-11 10:06:12.532857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.020 [2024-12-11 10:06:12.532863] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.020 [2024-12-11 10:06:12.544979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.020 [2024-12-11 10:06:12.545411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.020 [2024-12-11 10:06:12.545456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.020 [2024-12-11 10:06:12.545479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.020 [2024-12-11 10:06:12.546062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.020 [2024-12-11 10:06:12.546595] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.020 [2024-12-11 10:06:12.546606] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.020 [2024-12-11 10:06:12.546613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.020 [2024-12-11 10:06:12.546619] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:03.020 [2024-12-11 10:06:12.557768] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.020 [2024-12-11 10:06:12.558192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.020 [2024-12-11 10:06:12.558248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.020 [2024-12-11 10:06:12.558273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.020 [2024-12-11 10:06:12.558705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.020 [2024-12-11 10:06:12.558874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.020 [2024-12-11 10:06:12.558882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.020 [2024-12-11 10:06:12.558888] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.020 [2024-12-11 10:06:12.558894] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.020 [2024-12-11 10:06:12.570550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.020 [2024-12-11 10:06:12.570964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.020 [2024-12-11 10:06:12.570980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.020 [2024-12-11 10:06:12.570987] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.020 [2024-12-11 10:06:12.571145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.020 [2024-12-11 10:06:12.571328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.021 [2024-12-11 10:06:12.571337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.021 [2024-12-11 10:06:12.571343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.021 [2024-12-11 10:06:12.571350] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:03.021 [2024-12-11 10:06:12.583283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.021 [2024-12-11 10:06:12.583678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.021 [2024-12-11 10:06:12.583694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.021 [2024-12-11 10:06:12.583700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.021 [2024-12-11 10:06:12.583859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.021 [2024-12-11 10:06:12.584018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.021 [2024-12-11 10:06:12.584025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.021 [2024-12-11 10:06:12.584031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.021 [2024-12-11 10:06:12.584040] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.280 [2024-12-11 10:06:12.596114] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.280 [2024-12-11 10:06:12.596479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.280 [2024-12-11 10:06:12.596498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.281 [2024-12-11 10:06:12.596506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.281 [2024-12-11 10:06:12.596680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.281 [2024-12-11 10:06:12.596857] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.281 [2024-12-11 10:06:12.596867] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.281 [2024-12-11 10:06:12.596875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.281 [2024-12-11 10:06:12.596885] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:03.281 [2024-12-11 10:06:12.608895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.281 [2024-12-11 10:06:12.609328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.281 [2024-12-11 10:06:12.609375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.281 [2024-12-11 10:06:12.609399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.281 [2024-12-11 10:06:12.609901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.281 [2024-12-11 10:06:12.610060] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.281 [2024-12-11 10:06:12.610068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.281 [2024-12-11 10:06:12.610074] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.281 [2024-12-11 10:06:12.610079] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.281 [2024-12-11 10:06:12.621690] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.281 [2024-12-11 10:06:12.622104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.281 [2024-12-11 10:06:12.622120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.281 [2024-12-11 10:06:12.622126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.281 [2024-12-11 10:06:12.622309] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.281 [2024-12-11 10:06:12.622477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.281 [2024-12-11 10:06:12.622485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.281 [2024-12-11 10:06:12.622491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.281 [2024-12-11 10:06:12.622497] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:03.281 [2024-12-11 10:06:12.634552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.281 [2024-12-11 10:06:12.634956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.281 [2024-12-11 10:06:12.634971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.281 [2024-12-11 10:06:12.634978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.281 [2024-12-11 10:06:12.635142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.281 [2024-12-11 10:06:12.635328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.281 [2024-12-11 10:06:12.635336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.281 [2024-12-11 10:06:12.635342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.281 [2024-12-11 10:06:12.635349] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.281 [2024-12-11 10:06:12.647435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.281 [2024-12-11 10:06:12.647868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.281 [2024-12-11 10:06:12.647884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.281 [2024-12-11 10:06:12.647891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.281 [2024-12-11 10:06:12.648048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.281 [2024-12-11 10:06:12.648207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.281 [2024-12-11 10:06:12.648215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.281 [2024-12-11 10:06:12.648225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.281 [2024-12-11 10:06:12.648231] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:03.281 [2024-12-11 10:06:12.660407] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.281 [2024-12-11 10:06:12.660855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.281 [2024-12-11 10:06:12.660903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.281 [2024-12-11 10:06:12.660927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.281 [2024-12-11 10:06:12.661522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.281 [2024-12-11 10:06:12.662005] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.281 [2024-12-11 10:06:12.662013] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.281 [2024-12-11 10:06:12.662020] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.281 [2024-12-11 10:06:12.662026] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.281 [2024-12-11 10:06:12.675804] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.281 [2024-12-11 10:06:12.676315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.281 [2024-12-11 10:06:12.676337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.281 [2024-12-11 10:06:12.676351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.281 [2024-12-11 10:06:12.676604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.281 [2024-12-11 10:06:12.676859] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.281 [2024-12-11 10:06:12.676870] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.281 [2024-12-11 10:06:12.676879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.281 [2024-12-11 10:06:12.676888] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:03.281 [2024-12-11 10:06:12.688753] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.281 [2024-12-11 10:06:12.689115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.281 [2024-12-11 10:06:12.689159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.281 [2024-12-11 10:06:12.689182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.281 [2024-12-11 10:06:12.689757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.281 [2024-12-11 10:06:12.689926] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.281 [2024-12-11 10:06:12.689935] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.281 [2024-12-11 10:06:12.689942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.281 [2024-12-11 10:06:12.689948] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.281 [2024-12-11 10:06:12.701622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.281 [2024-12-11 10:06:12.702018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.281 [2024-12-11 10:06:12.702034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.281 [2024-12-11 10:06:12.702041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.281 [2024-12-11 10:06:12.702209] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.281 [2024-12-11 10:06:12.702382] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.281 [2024-12-11 10:06:12.702391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.281 [2024-12-11 10:06:12.702398] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.281 [2024-12-11 10:06:12.702404] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:03.281 [2024-12-11 10:06:12.714408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.281 [2024-12-11 10:06:12.714758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.281 [2024-12-11 10:06:12.714802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.281 [2024-12-11 10:06:12.714825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.281 [2024-12-11 10:06:12.715420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.281 [2024-12-11 10:06:12.716000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.281 [2024-12-11 10:06:12.716013] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.281 [2024-12-11 10:06:12.716022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.281 [2024-12-11 10:06:12.716029] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.281 [2024-12-11 10:06:12.727372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.281 [2024-12-11 10:06:12.727733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.281 [2024-12-11 10:06:12.727777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.281 [2024-12-11 10:06:12.727801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.282 [2024-12-11 10:06:12.728401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.282 [2024-12-11 10:06:12.728991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.282 [2024-12-11 10:06:12.729016] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.282 [2024-12-11 10:06:12.729038] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.282 [2024-12-11 10:06:12.729065] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:03.282 [2024-12-11 10:06:12.740238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.282 [2024-12-11 10:06:12.740567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.282 [2024-12-11 10:06:12.740583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.282 [2024-12-11 10:06:12.740590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.282 [2024-12-11 10:06:12.740758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.282 [2024-12-11 10:06:12.740926] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.282 [2024-12-11 10:06:12.740935] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.282 [2024-12-11 10:06:12.740941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.282 [2024-12-11 10:06:12.740947] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.282 [2024-12-11 10:06:12.753176] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.282 [2024-12-11 10:06:12.753462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.282 [2024-12-11 10:06:12.753481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.282 [2024-12-11 10:06:12.753487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.282 [2024-12-11 10:06:12.753646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.282 [2024-12-11 10:06:12.753804] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.282 [2024-12-11 10:06:12.753811] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.282 [2024-12-11 10:06:12.753817] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.282 [2024-12-11 10:06:12.753826] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:03.282 [2024-12-11 10:06:12.766051] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.282 [2024-12-11 10:06:12.766358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.282 [2024-12-11 10:06:12.766375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.282 [2024-12-11 10:06:12.766383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.282 [2024-12-11 10:06:12.766551] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.282 [2024-12-11 10:06:12.766719] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.282 [2024-12-11 10:06:12.766726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.282 [2024-12-11 10:06:12.766733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.282 [2024-12-11 10:06:12.766739] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.282 [2024-12-11 10:06:12.778952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.282 [2024-12-11 10:06:12.779248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.282 [2024-12-11 10:06:12.779265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.282 [2024-12-11 10:06:12.779272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.282 [2024-12-11 10:06:12.779462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.282 [2024-12-11 10:06:12.779636] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.282 [2024-12-11 10:06:12.779645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.282 [2024-12-11 10:06:12.779651] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.282 [2024-12-11 10:06:12.779658] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:03.282 [2024-12-11 10:06:12.792089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.282 [2024-12-11 10:06:12.792530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.282 [2024-12-11 10:06:12.792547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.282 [2024-12-11 10:06:12.792555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.282 [2024-12-11 10:06:12.792728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.282 [2024-12-11 10:06:12.792902] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.282 [2024-12-11 10:06:12.792911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.282 [2024-12-11 10:06:12.792917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.282 [2024-12-11 10:06:12.792924] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.282 [2024-12-11 10:06:12.805048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.282 [2024-12-11 10:06:12.805383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.282 [2024-12-11 10:06:12.805400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.282 [2024-12-11 10:06:12.805407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.282 [2024-12-11 10:06:12.805575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.282 [2024-12-11 10:06:12.805742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.282 [2024-12-11 10:06:12.805750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.282 [2024-12-11 10:06:12.805756] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.282 [2024-12-11 10:06:12.805762] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:03.282 [2024-12-11 10:06:12.817953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.282 [2024-12-11 10:06:12.818247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.282 [2024-12-11 10:06:12.818264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.282 [2024-12-11 10:06:12.818271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.282 [2024-12-11 10:06:12.818447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.282 [2024-12-11 10:06:12.818606] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.282 [2024-12-11 10:06:12.818614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.282 [2024-12-11 10:06:12.818620] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.282 [2024-12-11 10:06:12.818626] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.282 [2024-12-11 10:06:12.830930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.282 [2024-12-11 10:06:12.831351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.282 [2024-12-11 10:06:12.831368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.282 [2024-12-11 10:06:12.831376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.282 [2024-12-11 10:06:12.831566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.282 [2024-12-11 10:06:12.831735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.282 [2024-12-11 10:06:12.831743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.282 [2024-12-11 10:06:12.831749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.282 [2024-12-11 10:06:12.831755] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:03.282 [2024-12-11 10:06:12.843849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.282 [2024-12-11 10:06:12.844200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.282 [2024-12-11 10:06:12.844220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.282 [2024-12-11 10:06:12.844227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.282 [2024-12-11 10:06:12.844398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.282 [2024-12-11 10:06:12.844566] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.282 [2024-12-11 10:06:12.844574] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.282 [2024-12-11 10:06:12.844580] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.282 [2024-12-11 10:06:12.844586] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.543 [2024-12-11 10:06:12.856900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.543 [2024-12-11 10:06:12.857256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.543 [2024-12-11 10:06:12.857274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.543 [2024-12-11 10:06:12.857282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.543 [2024-12-11 10:06:12.857459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.543 [2024-12-11 10:06:12.857630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.543 [2024-12-11 10:06:12.857638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.543 [2024-12-11 10:06:12.857644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.543 [2024-12-11 10:06:12.857650] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:03.543 [2024-12-11 10:06:12.869692] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.543 [2024-12-11 10:06:12.869970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.543 [2024-12-11 10:06:12.869986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.543 [2024-12-11 10:06:12.869994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.543 [2024-12-11 10:06:12.870162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.543 [2024-12-11 10:06:12.870337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.543 [2024-12-11 10:06:12.870346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.543 [2024-12-11 10:06:12.870353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.543 [2024-12-11 10:06:12.870359] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.543 [2024-12-11 10:06:12.882603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.543 [2024-12-11 10:06:12.882884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.543 [2024-12-11 10:06:12.882900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.543 [2024-12-11 10:06:12.882907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.543 [2024-12-11 10:06:12.883074] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.543 [2024-12-11 10:06:12.883251] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.543 [2024-12-11 10:06:12.883264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.543 [2024-12-11 10:06:12.883270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.543 [2024-12-11 10:06:12.883276] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:03.543 [2024-12-11 10:06:12.895590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.543 [2024-12-11 10:06:12.895888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.543 [2024-12-11 10:06:12.895904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.543 [2024-12-11 10:06:12.895911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.543 [2024-12-11 10:06:12.896079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.543 [2024-12-11 10:06:12.896253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.543 [2024-12-11 10:06:12.896262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.543 [2024-12-11 10:06:12.896268] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.543 [2024-12-11 10:06:12.896274] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.543 [2024-12-11 10:06:12.908379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.543 [2024-12-11 10:06:12.908731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.543 [2024-12-11 10:06:12.908747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.543 [2024-12-11 10:06:12.908754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.543 [2024-12-11 10:06:12.908921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.543 [2024-12-11 10:06:12.909089] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.543 [2024-12-11 10:06:12.909097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.543 [2024-12-11 10:06:12.909103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.543 [2024-12-11 10:06:12.909109] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:03.543 [2024-12-11 10:06:12.921315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.543 [2024-12-11 10:06:12.921676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.543 [2024-12-11 10:06:12.921720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.543 [2024-12-11 10:06:12.921743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.543 [2024-12-11 10:06:12.922264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.543 [2024-12-11 10:06:12.922433] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.543 [2024-12-11 10:06:12.922441] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.543 [2024-12-11 10:06:12.922447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.543 [2024-12-11 10:06:12.922456] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.543 [2024-12-11 10:06:12.934282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.543 [2024-12-11 10:06:12.934626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.543 [2024-12-11 10:06:12.934643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.543 [2024-12-11 10:06:12.934650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.543 [2024-12-11 10:06:12.934817] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.543 [2024-12-11 10:06:12.934984] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.543 [2024-12-11 10:06:12.934992] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.543 [2024-12-11 10:06:12.934998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.543 [2024-12-11 10:06:12.935004] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:03.543 7318.75 IOPS, 28.59 MiB/s [2024-12-11T09:06:13.118Z] [2024-12-11 10:06:12.947141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.543 [2024-12-11 10:06:12.947457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.543 [2024-12-11 10:06:12.947475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.543 [2024-12-11 10:06:12.947482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.543 [2024-12-11 10:06:12.947650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.543 [2024-12-11 10:06:12.947822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.543 [2024-12-11 10:06:12.947830] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.543 [2024-12-11 10:06:12.947836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.543 [2024-12-11 10:06:12.947843] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.543 [2024-12-11 10:06:12.959983] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.543 [2024-12-11 10:06:12.960328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.543 [2024-12-11 10:06:12.960345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.543 [2024-12-11 10:06:12.960352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.543 [2024-12-11 10:06:12.960521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.543 [2024-12-11 10:06:12.960689] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.543 [2024-12-11 10:06:12.960697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.543 [2024-12-11 10:06:12.960703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.543 [2024-12-11 10:06:12.960709] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:03.543 [2024-12-11 10:06:12.972908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.543 [2024-12-11 10:06:12.973280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.543 [2024-12-11 10:06:12.973297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.543 [2024-12-11 10:06:12.973304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.543 [2024-12-11 10:06:12.973486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.543 [2024-12-11 10:06:12.973655] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.544 [2024-12-11 10:06:12.973663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.544 [2024-12-11 10:06:12.973669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.544 [2024-12-11 10:06:12.973675] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.544 [2024-12-11 10:06:12.985802] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.544 [2024-12-11 10:06:12.986070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.544 [2024-12-11 10:06:12.986087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.544 [2024-12-11 10:06:12.986094] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.544 [2024-12-11 10:06:12.986267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.544 [2024-12-11 10:06:12.986435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.544 [2024-12-11 10:06:12.986444] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.544 [2024-12-11 10:06:12.986450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.544 [2024-12-11 10:06:12.986456] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:03.544 [2024-12-11 10:06:12.998700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.544 [2024-12-11 10:06:12.999026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.544 [2024-12-11 10:06:12.999041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.544 [2024-12-11 10:06:12.999048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.544 [2024-12-11 10:06:12.999223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.544 [2024-12-11 10:06:12.999392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.544 [2024-12-11 10:06:12.999399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.544 [2024-12-11 10:06:12.999405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.544 [2024-12-11 10:06:12.999411] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.544 [2024-12-11 10:06:13.011529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.544 [2024-12-11 10:06:13.011873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.544 [2024-12-11 10:06:13.011889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.544 [2024-12-11 10:06:13.011900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.544 [2024-12-11 10:06:13.012068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.544 [2024-12-11 10:06:13.012241] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.544 [2024-12-11 10:06:13.012249] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.544 [2024-12-11 10:06:13.012256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.544 [2024-12-11 10:06:13.012262] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:03.544 [2024-12-11 10:06:13.024410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.544 [2024-12-11 10:06:13.024782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.544 [2024-12-11 10:06:13.024798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.544 [2024-12-11 10:06:13.024806] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.544 [2024-12-11 10:06:13.024978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.544 [2024-12-11 10:06:13.025150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.544 [2024-12-11 10:06:13.025158] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.544 [2024-12-11 10:06:13.025165] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.544 [2024-12-11 10:06:13.025171] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.544 [2024-12-11 10:06:13.037149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.544 [2024-12-11 10:06:13.037453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.544 [2024-12-11 10:06:13.037469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.544 [2024-12-11 10:06:13.037477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.544 [2024-12-11 10:06:13.037649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.544 [2024-12-11 10:06:13.037821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.544 [2024-12-11 10:06:13.037829] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.544 [2024-12-11 10:06:13.037836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.544 [2024-12-11 10:06:13.037842] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:03.544 [2024-12-11 10:06:13.050270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.544 [2024-12-11 10:06:13.050561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.544 [2024-12-11 10:06:13.050578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.544 [2024-12-11 10:06:13.050586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.544 [2024-12-11 10:06:13.050758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.544 [2024-12-11 10:06:13.050935] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.544 [2024-12-11 10:06:13.050944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.544 [2024-12-11 10:06:13.050952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.544 [2024-12-11 10:06:13.050958] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.544 [2024-12-11 10:06:13.063156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.544 [2024-12-11 10:06:13.063471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.544 [2024-12-11 10:06:13.063489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.544 [2024-12-11 10:06:13.063496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.544 [2024-12-11 10:06:13.063664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.544 [2024-12-11 10:06:13.063832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.544 [2024-12-11 10:06:13.063840] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.544 [2024-12-11 10:06:13.063846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.544 [2024-12-11 10:06:13.063852] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:03.544 [2024-12-11 10:06:13.076046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.544 [2024-12-11 10:06:13.076400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.544 [2024-12-11 10:06:13.076416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.544 [2024-12-11 10:06:13.076424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.544 [2024-12-11 10:06:13.076592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.544 [2024-12-11 10:06:13.076759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.544 [2024-12-11 10:06:13.076767] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.544 [2024-12-11 10:06:13.076774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.544 [2024-12-11 10:06:13.076780] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.544 [2024-12-11 10:06:13.088905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.544 [2024-12-11 10:06:13.089308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.544 [2024-12-11 10:06:13.089325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.544 [2024-12-11 10:06:13.089332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.544 [2024-12-11 10:06:13.089499] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.544 [2024-12-11 10:06:13.089667] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.544 [2024-12-11 10:06:13.089675] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.544 [2024-12-11 10:06:13.089681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.544 [2024-12-11 10:06:13.089692] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:03.544 [2024-12-11 10:06:13.101763] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.544 [2024-12-11 10:06:13.102163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.544 [2024-12-11 10:06:13.102179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.544 [2024-12-11 10:06:13.102186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.544 [2024-12-11 10:06:13.102360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.544 [2024-12-11 10:06:13.102528] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.545 [2024-12-11 10:06:13.102536] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.545 [2024-12-11 10:06:13.102542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.545 [2024-12-11 10:06:13.102548] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.545 [2024-12-11 10:06:13.114902] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.545 [2024-12-11 10:06:13.115331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.545 [2024-12-11 10:06:13.115350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.545 [2024-12-11 10:06:13.115359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.545 [2024-12-11 10:06:13.115534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.545 [2024-12-11 10:06:13.115708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.545 [2024-12-11 10:06:13.115716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.545 [2024-12-11 10:06:13.115723] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.545 [2024-12-11 10:06:13.115729] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:03.804 [2024-12-11 10:06:13.127932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.804 [2024-12-11 10:06:13.128354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.804 [2024-12-11 10:06:13.128372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.804 [2024-12-11 10:06:13.128379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.805 [2024-12-11 10:06:13.128550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.805 [2024-12-11 10:06:13.128709] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.805 [2024-12-11 10:06:13.128716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.805 [2024-12-11 10:06:13.128722] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.805 [2024-12-11 10:06:13.128728] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.805 [2024-12-11 10:06:13.140873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.805 [2024-12-11 10:06:13.141299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.805 [2024-12-11 10:06:13.141316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.805 [2024-12-11 10:06:13.141324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.805 [2024-12-11 10:06:13.141492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.805 [2024-12-11 10:06:13.141661] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.805 [2024-12-11 10:06:13.141669] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.805 [2024-12-11 10:06:13.141675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.805 [2024-12-11 10:06:13.141681] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:03.805 [2024-12-11 10:06:13.153733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.805 [2024-12-11 10:06:13.154134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.805 [2024-12-11 10:06:13.154150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.805 [2024-12-11 10:06:13.154157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.805 [2024-12-11 10:06:13.154344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.805 [2024-12-11 10:06:13.154512] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.805 [2024-12-11 10:06:13.154521] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.805 [2024-12-11 10:06:13.154527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.805 [2024-12-11 10:06:13.154533] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.805 [2024-12-11 10:06:13.166550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.805 [2024-12-11 10:06:13.166949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.805 [2024-12-11 10:06:13.166966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.805 [2024-12-11 10:06:13.166973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.805 [2024-12-11 10:06:13.167141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.805 [2024-12-11 10:06:13.167332] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.805 [2024-12-11 10:06:13.167341] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.805 [2024-12-11 10:06:13.167348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.805 [2024-12-11 10:06:13.167354] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:03.805 [2024-12-11 10:06:13.179397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.805 [2024-12-11 10:06:13.179790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.805 [2024-12-11 10:06:13.179806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.805 [2024-12-11 10:06:13.179816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.805 [2024-12-11 10:06:13.179975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.805 [2024-12-11 10:06:13.180134] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.805 [2024-12-11 10:06:13.180141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.805 [2024-12-11 10:06:13.180147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.805 [2024-12-11 10:06:13.180153] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.805 [2024-12-11 10:06:13.192148] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.805 [2024-12-11 10:06:13.192557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.805 [2024-12-11 10:06:13.192574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.805 [2024-12-11 10:06:13.192581] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.805 [2024-12-11 10:06:13.192749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.805 [2024-12-11 10:06:13.192916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.805 [2024-12-11 10:06:13.192925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.805 [2024-12-11 10:06:13.192931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.805 [2024-12-11 10:06:13.192937] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:03.805 [2024-12-11 10:06:13.205014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.805 [2024-12-11 10:06:13.205448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.805 [2024-12-11 10:06:13.205464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:03.805 [2024-12-11 10:06:13.205471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:03.805 [2024-12-11 10:06:13.205639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:03.805 [2024-12-11 10:06:13.205807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.805 [2024-12-11 10:06:13.205815] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.805 [2024-12-11 10:06:13.205821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.805 [2024-12-11 10:06:13.205827] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:04.330 [2024-12-11 10:06:13.811799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:04.330 [2024-12-11 10:06:13.812224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.330 [2024-12-11 10:06:13.812241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:04.330 [2024-12-11 10:06:13.812249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:04.330 [2024-12-11 10:06:13.812422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:04.330 [2024-12-11 10:06:13.812595] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:04.330 [2024-12-11 10:06:13.812603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:04.330 [2024-12-11 10:06:13.812610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:04.330 [2024-12-11 10:06:13.812616] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:04.330 [2024-12-11 10:06:13.824877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:04.330 [2024-12-11 10:06:13.825287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.330 [2024-12-11 10:06:13.825304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:04.330 [2024-12-11 10:06:13.825312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:04.330 [2024-12-11 10:06:13.825486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:04.330 [2024-12-11 10:06:13.825658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:04.330 [2024-12-11 10:06:13.825666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:04.330 [2024-12-11 10:06:13.825673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:04.330 [2024-12-11 10:06:13.825679] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:04.330 [2024-12-11 10:06:13.837892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:04.330 [2024-12-11 10:06:13.838290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.330 [2024-12-11 10:06:13.838308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:04.330 [2024-12-11 10:06:13.838315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:04.330 [2024-12-11 10:06:13.838484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:04.330 [2024-12-11 10:06:13.838658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:04.330 [2024-12-11 10:06:13.838666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:04.330 [2024-12-11 10:06:13.838672] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:04.330 [2024-12-11 10:06:13.838678] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:04.330 [2024-12-11 10:06:13.850687] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:04.330 [2024-12-11 10:06:13.851100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.330 [2024-12-11 10:06:13.851116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:04.330 [2024-12-11 10:06:13.851125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:04.330 [2024-12-11 10:06:13.851309] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:04.330 [2024-12-11 10:06:13.851477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:04.330 [2024-12-11 10:06:13.851485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:04.330 [2024-12-11 10:06:13.851491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:04.330 [2024-12-11 10:06:13.851497] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:04.330 [2024-12-11 10:06:13.863461] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:04.330 [2024-12-11 10:06:13.863871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.330 [2024-12-11 10:06:13.863887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:04.330 [2024-12-11 10:06:13.863894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:04.330 [2024-12-11 10:06:13.864052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:04.330 [2024-12-11 10:06:13.864211] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:04.330 [2024-12-11 10:06:13.864225] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:04.330 [2024-12-11 10:06:13.864231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:04.330 [2024-12-11 10:06:13.864237] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:04.330 [2024-12-11 10:06:13.876231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:04.330 [2024-12-11 10:06:13.876627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.330 [2024-12-11 10:06:13.876643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:04.330 [2024-12-11 10:06:13.876650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:04.330 [2024-12-11 10:06:13.876808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:04.331 [2024-12-11 10:06:13.876966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:04.331 [2024-12-11 10:06:13.876974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:04.331 [2024-12-11 10:06:13.876979] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:04.331 [2024-12-11 10:06:13.876985] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:04.331 [2024-12-11 10:06:13.889084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:04.331 [2024-12-11 10:06:13.889480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.331 [2024-12-11 10:06:13.889496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:04.331 [2024-12-11 10:06:13.889504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:04.331 [2024-12-11 10:06:13.889672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:04.331 [2024-12-11 10:06:13.889839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:04.331 [2024-12-11 10:06:13.889850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:04.331 [2024-12-11 10:06:13.889857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:04.331 [2024-12-11 10:06:13.889863] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:04.591 [2024-12-11 10:06:13.902136] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:04.591 [2024-12-11 10:06:13.902557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.591 [2024-12-11 10:06:13.902576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:04.591 [2024-12-11 10:06:13.902584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:04.591 [2024-12-11 10:06:13.902757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:04.591 [2024-12-11 10:06:13.902930] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:04.591 [2024-12-11 10:06:13.902939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:04.591 [2024-12-11 10:06:13.902945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:04.591 [2024-12-11 10:06:13.902951] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:04.591 [2024-12-11 10:06:13.914874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:04.591 [2024-12-11 10:06:13.915294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.591 [2024-12-11 10:06:13.915312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:04.591 [2024-12-11 10:06:13.915319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:04.591 [2024-12-11 10:06:13.915488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:04.591 [2024-12-11 10:06:13.915656] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:04.591 [2024-12-11 10:06:13.915664] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:04.591 [2024-12-11 10:06:13.915670] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:04.591 [2024-12-11 10:06:13.915676] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:04.591 [2024-12-11 10:06:13.927696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:04.591 [2024-12-11 10:06:13.928109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.591 [2024-12-11 10:06:13.928152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:04.591 [2024-12-11 10:06:13.928175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:04.591 [2024-12-11 10:06:13.928773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:04.591 [2024-12-11 10:06:13.929181] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:04.591 [2024-12-11 10:06:13.929189] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:04.591 [2024-12-11 10:06:13.929195] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:04.591 [2024-12-11 10:06:13.929205] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:04.591 [2024-12-11 10:06:13.940487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:04.591 [2024-12-11 10:06:13.940905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.591 [2024-12-11 10:06:13.940921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:04.591 [2024-12-11 10:06:13.940928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:04.591 [2024-12-11 10:06:13.941097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:04.591 [2024-12-11 10:06:13.941271] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:04.591 [2024-12-11 10:06:13.941280] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:04.591 [2024-12-11 10:06:13.941286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:04.591 [2024-12-11 10:06:13.941292] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:04.591 5855.00 IOPS, 22.87 MiB/s [2024-12-11T09:06:14.166Z] [2024-12-11 10:06:13.953237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:04.591 [2024-12-11 10:06:13.953634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.591 [2024-12-11 10:06:13.953679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:04.591 [2024-12-11 10:06:13.953703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:04.591 [2024-12-11 10:06:13.954160] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:04.591 [2024-12-11 10:06:13.954347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:04.591 [2024-12-11 10:06:13.954356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:04.591 [2024-12-11 10:06:13.954362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:04.591 [2024-12-11 10:06:13.954368] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:04.591 [2024-12-11 10:06:13.966044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:04.591 [2024-12-11 10:06:13.966456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.591 [2024-12-11 10:06:13.966473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:04.591 [2024-12-11 10:06:13.966480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:04.591 [2024-12-11 10:06:13.966648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:04.591 [2024-12-11 10:06:13.966816] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:04.591 [2024-12-11 10:06:13.966824] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:04.591 [2024-12-11 10:06:13.966830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:04.591 [2024-12-11 10:06:13.966836] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:04.591 [2024-12-11 10:06:13.978926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:04.591 [2024-12-11 10:06:13.979349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.591 [2024-12-11 10:06:13.979366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:04.591 [2024-12-11 10:06:13.979373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:04.591 [2024-12-11 10:06:13.979540] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:04.591 [2024-12-11 10:06:13.979708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:04.591 [2024-12-11 10:06:13.979716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:04.591 [2024-12-11 10:06:13.979722] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:04.591 [2024-12-11 10:06:13.979728] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:04.591 [2024-12-11 10:06:13.991728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:04.591 [2024-12-11 10:06:13.992121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.591 [2024-12-11 10:06:13.992136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:04.591 [2024-12-11 10:06:13.992143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:04.591 [2024-12-11 10:06:13.992329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:04.591 [2024-12-11 10:06:13.992498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:04.591 [2024-12-11 10:06:13.992506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:04.591 [2024-12-11 10:06:13.992512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:04.591 [2024-12-11 10:06:13.992518] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:04.591 [2024-12-11 10:06:14.004514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:04.591 [2024-12-11 10:06:14.004938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.591 [2024-12-11 10:06:14.004982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:04.591 [2024-12-11 10:06:14.005005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:04.591 [2024-12-11 10:06:14.005603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:04.591 [2024-12-11 10:06:14.006189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:04.592 [2024-12-11 10:06:14.006214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:04.592 [2024-12-11 10:06:14.006253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:04.592 [2024-12-11 10:06:14.006267] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:04.592 [2024-12-11 10:06:14.019639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:04.592 [2024-12-11 10:06:14.020123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.592 [2024-12-11 10:06:14.020145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:04.592 [2024-12-11 10:06:14.020159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:04.592 [2024-12-11 10:06:14.020421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:04.592 [2024-12-11 10:06:14.020677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:04.592 [2024-12-11 10:06:14.020689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:04.592 [2024-12-11 10:06:14.020698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:04.592 [2024-12-11 10:06:14.020707] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:04.592 [2024-12-11 10:06:14.032560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:04.592 [2024-12-11 10:06:14.032961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.592 [2024-12-11 10:06:14.032978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:04.592 [2024-12-11 10:06:14.032985] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:04.592 [2024-12-11 10:06:14.033169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:04.592 [2024-12-11 10:06:14.033349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:04.592 [2024-12-11 10:06:14.033358] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:04.592 [2024-12-11 10:06:14.033364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:04.592 [2024-12-11 10:06:14.033370] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:04.592 [2024-12-11 10:06:14.045319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:04.592 [2024-12-11 10:06:14.045711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.592 [2024-12-11 10:06:14.045727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:04.592 [2024-12-11 10:06:14.045734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:04.592 [2024-12-11 10:06:14.045892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:04.592 [2024-12-11 10:06:14.046051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:04.592 [2024-12-11 10:06:14.046058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:04.592 [2024-12-11 10:06:14.046064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:04.592 [2024-12-11 10:06:14.046070] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:04.592 [2024-12-11 10:06:14.058200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:04.592 [2024-12-11 10:06:14.058592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.592 [2024-12-11 10:06:14.058609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:04.592 [2024-12-11 10:06:14.058615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:04.592 [2024-12-11 10:06:14.058774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:04.592 [2024-12-11 10:06:14.058936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:04.592 [2024-12-11 10:06:14.058944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:04.592 [2024-12-11 10:06:14.058949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:04.592 [2024-12-11 10:06:14.058955] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:04.592 [2024-12-11 10:06:14.070997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:04.592 [2024-12-11 10:06:14.071417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.592 [2024-12-11 10:06:14.071435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:04.592 [2024-12-11 10:06:14.071443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:04.592 [2024-12-11 10:06:14.071615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:04.592 [2024-12-11 10:06:14.071789] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:04.592 [2024-12-11 10:06:14.071797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:04.592 [2024-12-11 10:06:14.071804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:04.592 [2024-12-11 10:06:14.071810] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:04.592 [2024-12-11 10:06:14.084088] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:04.592 [2024-12-11 10:06:14.084473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.592 [2024-12-11 10:06:14.084490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:04.592 [2024-12-11 10:06:14.084497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:04.592 [2024-12-11 10:06:14.084670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:04.592 [2024-12-11 10:06:14.084847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:04.592 [2024-12-11 10:06:14.084856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:04.592 [2024-12-11 10:06:14.084862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:04.592 [2024-12-11 10:06:14.084868] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:04.592 [2024-12-11 10:06:14.097065] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:04.592 [2024-12-11 10:06:14.097457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.592 [2024-12-11 10:06:14.097474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:04.592 [2024-12-11 10:06:14.097481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:04.592 [2024-12-11 10:06:14.097654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:04.592 [2024-12-11 10:06:14.097827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:04.592 [2024-12-11 10:06:14.097836] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:04.592 [2024-12-11 10:06:14.097845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:04.592 [2024-12-11 10:06:14.097852] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:04.592 [2024-12-11 10:06:14.109854] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:04.592 [2024-12-11 10:06:14.110279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.592 [2024-12-11 10:06:14.110296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:04.592 [2024-12-11 10:06:14.110303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:04.592 [2024-12-11 10:06:14.110472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:04.592 [2024-12-11 10:06:14.110640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:04.592 [2024-12-11 10:06:14.110648] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:04.592 [2024-12-11 10:06:14.110655] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:04.592 [2024-12-11 10:06:14.110661] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:04.592 [2024-12-11 10:06:14.122674] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:04.592 [2024-12-11 10:06:14.123101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.592 [2024-12-11 10:06:14.123118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:04.592 [2024-12-11 10:06:14.123125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:04.592 [2024-12-11 10:06:14.123300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:04.592 [2024-12-11 10:06:14.123469] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:04.592 [2024-12-11 10:06:14.123477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:04.592 [2024-12-11 10:06:14.123483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:04.592 [2024-12-11 10:06:14.123489] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:04.592 [2024-12-11 10:06:14.135604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:04.592 [2024-12-11 10:06:14.135940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.592 [2024-12-11 10:06:14.135956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:04.592 [2024-12-11 10:06:14.135963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:04.593 [2024-12-11 10:06:14.136122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:04.593 [2024-12-11 10:06:14.136306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:04.593 [2024-12-11 10:06:14.136315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:04.593 [2024-12-11 10:06:14.136321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:04.593 [2024-12-11 10:06:14.136327] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:04.593 [2024-12-11 10:06:14.148477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:04.593 [2024-12-11 10:06:14.148925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.593 [2024-12-11 10:06:14.148942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:04.593 [2024-12-11 10:06:14.148949] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:04.593 [2024-12-11 10:06:14.149122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:04.593 [2024-12-11 10:06:14.149303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:04.593 [2024-12-11 10:06:14.149312] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:04.593 [2024-12-11 10:06:14.149319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:04.593 [2024-12-11 10:06:14.149325] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:04.593 [2024-12-11 10:06:14.161534] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:04.593 [2024-12-11 10:06:14.161966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.593 [2024-12-11 10:06:14.161984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:04.593 [2024-12-11 10:06:14.161992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:04.593 [2024-12-11 10:06:14.162166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:04.593 [2024-12-11 10:06:14.162348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:04.593 [2024-12-11 10:06:14.162357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:04.593 [2024-12-11 10:06:14.162364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:04.593 [2024-12-11 10:06:14.162370] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:04.853 [2024-12-11 10:06:14.174477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:04.853 [2024-12-11 10:06:14.174886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.853 [2024-12-11 10:06:14.174902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:04.853 [2024-12-11 10:06:14.174909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:04.853 [2024-12-11 10:06:14.175078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:04.853 [2024-12-11 10:06:14.175251] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:04.853 [2024-12-11 10:06:14.175261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:04.853 [2024-12-11 10:06:14.175267] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:04.853 [2024-12-11 10:06:14.175273] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:04.853 [2024-12-11 10:06:14.187356] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:04.853 [2024-12-11 10:06:14.187684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.853 [2024-12-11 10:06:14.187701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:04.853 [2024-12-11 10:06:14.187712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:04.853 [2024-12-11 10:06:14.187880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:04.853 [2024-12-11 10:06:14.188048] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:04.853 [2024-12-11 10:06:14.188056] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:04.853 [2024-12-11 10:06:14.188062] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:04.853 [2024-12-11 10:06:14.188068] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:04.853 [2024-12-11 10:06:14.200192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:04.853 [2024-12-11 10:06:14.200532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.853 [2024-12-11 10:06:14.200549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:04.853 [2024-12-11 10:06:14.200556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:04.853 [2024-12-11 10:06:14.200723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:04.853 [2024-12-11 10:06:14.200890] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:04.853 [2024-12-11 10:06:14.200898] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:04.853 [2024-12-11 10:06:14.200904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:04.853 [2024-12-11 10:06:14.200910] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:04.853 [2024-12-11 10:06:14.213077] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:04.853 [2024-12-11 10:06:14.213424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.853 [2024-12-11 10:06:14.213469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:04.853 [2024-12-11 10:06:14.213492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:04.853 [2024-12-11 10:06:14.214073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:04.853 [2024-12-11 10:06:14.214659] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:04.853 [2024-12-11 10:06:14.214668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:04.853 [2024-12-11 10:06:14.214674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:04.853 [2024-12-11 10:06:14.214680] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:04.853 [2024-12-11 10:06:14.226015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:04.853 [2024-12-11 10:06:14.226362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.853 [2024-12-11 10:06:14.226379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:04.853 [2024-12-11 10:06:14.226386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:04.853 [2024-12-11 10:06:14.226555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:04.853 [2024-12-11 10:06:14.226725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:04.853 [2024-12-11 10:06:14.226733] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:04.853 [2024-12-11 10:06:14.226739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:04.853 [2024-12-11 10:06:14.226746] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:04.853 [2024-12-11 10:06:14.238893] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:04.853 [2024-12-11 10:06:14.239181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.853 [2024-12-11 10:06:14.239197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:04.853 [2024-12-11 10:06:14.239204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:04.853 [2024-12-11 10:06:14.239379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:04.853 [2024-12-11 10:06:14.239547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:04.853 [2024-12-11 10:06:14.239555] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:04.853 [2024-12-11 10:06:14.239561] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:04.853 [2024-12-11 10:06:14.239567] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:04.853 [2024-12-11 10:06:14.251804] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:04.853 [2024-12-11 10:06:14.252252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.853 [2024-12-11 10:06:14.252298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:04.853 [2024-12-11 10:06:14.252322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:04.853 [2024-12-11 10:06:14.252820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:04.853 [2024-12-11 10:06:14.252988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:04.853 [2024-12-11 10:06:14.252996] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:04.853 [2024-12-11 10:06:14.253003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:04.853 [2024-12-11 10:06:14.253009] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:04.853 [2024-12-11 10:06:14.264651] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:04.853 [2024-12-11 10:06:14.265010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.853 [2024-12-11 10:06:14.265026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:04.853 [2024-12-11 10:06:14.265033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:04.853 [2024-12-11 10:06:14.265201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:04.853 [2024-12-11 10:06:14.265377] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:04.853 [2024-12-11 10:06:14.265386] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:04.853 [2024-12-11 10:06:14.265395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:04.853 [2024-12-11 10:06:14.265402] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:04.853 [2024-12-11 10:06:14.277499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.853 [2024-12-11 10:06:14.277794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.853 [2024-12-11 10:06:14.277810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420
00:28:04.853 [2024-12-11 10:06:14.277817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set
00:28:04.853 [2024-12-11 10:06:14.277985] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor
00:28:04.854 [2024-12-11 10:06:14.278152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.854 [2024-12-11 10:06:14.278161] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.854 [2024-12-11 10:06:14.278168] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.854 [2024-12-11 10:06:14.278174] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.854 [... the identical resetting-controller / connect() errno 111 / "Resetting controller failed." sequence repeats every ~13 ms: 15 further attempts at 10:06:14.290 through 10:06:14.472, all against tqpair=0x173fb20, addr=10.0.0.2, port=4420 ...]
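Every attempt above fails the same way: connect() returns errno 111, which is ECONNREFUSED on Linux. Nothing is listening on 10.0.0.2:4420 because the test has just torn down the nvmf target (the "Killed" line for PID 243072 follows below). A rough sketch of the fault injection, assuming the target is killed with SIGKILL as the shell's "Killed" message indicates:

    # Hedged repro sketch, not the literal bdevperf.sh code: kill the nvmf
    # target while bdevperf still holds a live NVMe/TCP controller.
    kill -9 243072   # nvmf_tgt PID from this run; drops the 10.0.0.2:4420 listener
    # From here every nvme_tcp_qpair_connect_sock() poll fails with errno 111
    # (ECONNREFUSED) until tgt_init restarts the target and its listener.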
00:28:05.115 [... reconnect attempt at 10:06:14.484 fails with the same ECONNREFUSED sequence ...]
00:28:05.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 243072 Killed "${NVMF_APP[@]}" "$@"
00:28:05.115 10:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:28:05.115 10:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:28:05.115 10:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:28:05.115 10:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:05.115 10:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:05.115 10:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=244835
00:28:05.115 10:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 244835
00:28:05.115 10:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:28:05.115 10:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 244835 ']'
00:28:05.115 10:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:05.115 10:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:05.115 10:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:05.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:05.116 10:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:05.116 10:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:05.115 [... reconnect attempts at 10:06:14.498 and 10:06:14.511 fail with the same ECONNREFUSED sequence ...]
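tgt_init restarts the target that was just killed. Stripped of the xtrace noise, the restart traced above amounts to the following sketch (the cvl_0_0_ns_spdk netns, binary path, and PID are specific to this CI run; waitforlisten's polling loop is summarized, not reproduced):

    # Minimal sketch of the nvmfappstart sequence from the trace above.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!          # 244835 in this run
    # waitforlisten then polls (up to max_retries=100) until the new process
    # answers JSON-RPC on /var/tmp/spdk.sock before any rpc_cmd is issued.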
00:28:05.116 [... reconnect attempts at 10:06:14.524 and 10:06:14.537 fail with the same ECONNREFUSED sequence ...]
00:28:05.116 [2024-12-11 10:06:14.545564] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization...
00:28:05.116 [2024-12-11 10:06:14.545602] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:05.116 [... reconnect attempts at 10:06:14.550 and 10:06:14.563 fail with the same ECONNREFUSED sequence ...]
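The EAL parameters line shows how the nvmf_tgt flags from the launch command surface in DPDK initialization. The mapping below is inferred from this log, not an exhaustive option reference:

    # How the nvmf_tgt flags appear in the EAL line above (observed here):
    #   -m 0xE    -> EAL "-c 0xE" core mask (three reactors, see below)
    #   -e 0xFFFF -> the "Tracepoint Group Mask 0xFFFF specified." notice
    #   -i 0      -> shared-memory instance id, hence "--file-prefix=spdk0"
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE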
00:28:05.116 [... reconnect attempts at 10:06:14.576, 10:06:14.589, 10:06:14.602 and 10:06:14.615 fail with the same ECONNREFUSED sequence ...]
00:28:05.116 [... reconnect attempt at 10:06:14.628 fails with the same ECONNREFUSED sequence ...]
00:28:05.117 [2024-12-11 10:06:14.629623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:28:05.117 [... reconnect attempts at 10:06:14.641, 10:06:14.654 and 10:06:14.667 fail with the same ECONNREFUSED sequence ...]
00:28:05.117 [2024-12-11 10:06:14.670277] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:05.117 [2024-12-11 10:06:14.670302] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:05.117 [2024-12-11 10:06:14.670309] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:05.117 [2024-12-11 10:06:14.670315] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:05.117 [2024-12-11 10:06:14.670320] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:05.117 [2024-12-11 10:06:14.671594] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:28:05.117 [2024-12-11 10:06:14.671702] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:28:05.117 [2024-12-11 10:06:14.671703] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:28:05.117 [... reconnect attempts at 10:06:14.680 and 10:06:14.693 fail with the same ECONNREFUSED sequence ...]
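The three reactors line up with the -m 0xE core mask: 0xE is binary 1110, i.e. CPU cores 1, 2 and 3, which matches "Total cores available: 3" and the three reactor_run notices. A one-off decoder for such masks (illustrative only, not part of the test):

    # Decode an SPDK/DPDK core mask; 0xE prints "1 2 3".
    mask=0xE
    for c in {0..7}; do (( (mask >> c) & 1 )) && printf '%d ' "$c"; done; echo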
00:28:05.377 [... reconnect attempts at 10:06:14.706, 10:06:14.719, 10:06:14.732 and 10:06:14.745 fail with the same ECONNREFUSED sequence ...]
00:28:05.378 [... reconnect attempt at 10:06:14.758 fails with the same ECONNREFUSED sequence ...]
00:28:05.378 10:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:05.378 10:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0
00:28:05.378 10:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:28:05.378 10:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable
00:28:05.378 10:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:05.378 [... reconnect attempts at 10:06:14.771, 10:06:14.784 and 10:06:14.797 fail with the same ECONNREFUSED sequence ...]
00:28:05.378 10:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:05.378 10:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:05.378 10:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.378 10:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:05.378 [2024-12-11 10:06:14.807429] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:05.378 [2024-12-11 10:06:14.811098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.378 [2024-12-11 10:06:14.811510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.378 [2024-12-11 10:06:14.811527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:05.378 [2024-12-11 10:06:14.811534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:05.378 [2024-12-11 10:06:14.811707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:05.378 [2024-12-11 10:06:14.811879] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.378 [2024-12-11 10:06:14.811888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.378 [2024-12-11 10:06:14.811894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.378 [2024-12-11 10:06:14.811901] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:05.378 10:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.378 10:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:05.378 10:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.378 10:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:05.378 [2024-12-11 10:06:14.824110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.378 [2024-12-11 10:06:14.824551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.378 [2024-12-11 10:06:14.824569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:05.378 [2024-12-11 10:06:14.824576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:05.378 [2024-12-11 10:06:14.824750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:05.378 [2024-12-11 10:06:14.824923] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.378 [2024-12-11 10:06:14.824932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.378 [2024-12-11 10:06:14.824938] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.378 [2024-12-11 10:06:14.824944] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:05.378 [2024-12-11 10:06:14.837153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.378 [2024-12-11 10:06:14.837582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.378 [2024-12-11 10:06:14.837599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:05.378 [2024-12-11 10:06:14.837607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:05.378 [2024-12-11 10:06:14.837780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:05.378 [2024-12-11 10:06:14.837954] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.378 [2024-12-11 10:06:14.837962] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.378 [2024-12-11 10:06:14.837968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.378 [2024-12-11 10:06:14.837975] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:05.378 [2024-12-11 10:06:14.850194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.378 [2024-12-11 10:06:14.850635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.378 [2024-12-11 10:06:14.850653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:05.378 [2024-12-11 10:06:14.850661] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:05.378 [2024-12-11 10:06:14.850834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:05.378 [2024-12-11 10:06:14.851007] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.378 [2024-12-11 10:06:14.851016] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.378 [2024-12-11 10:06:14.851023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.378 [2024-12-11 10:06:14.851030] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:05.378 Malloc0 00:28:05.378 10:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.378 10:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:05.378 10:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.378 10:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:05.378 [2024-12-11 10:06:14.863162] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.378 [2024-12-11 10:06:14.863595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.379 [2024-12-11 10:06:14.863612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:05.379 [2024-12-11 10:06:14.863619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:05.379 [2024-12-11 10:06:14.863792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:05.379 [2024-12-11 10:06:14.863965] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.379 [2024-12-11 10:06:14.863974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.379 [2024-12-11 10:06:14.863981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.379 [2024-12-11 10:06:14.863988] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
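Each repetition of the cycle above is the same failure: `connect()` to 10.0.0.2:4420 returns errno 111 (ECONNREFUSED) because the subsystem's listener is only added a few entries further down, so every controller reset attempt ends in `Resetting controller failed.` and bdev_nvme schedules the next retry. The errno itself is easy to reproduce outside SPDK with bash's /dev/tcp redirection; assuming nothing is listening on the port, bash prints something like:

    $ bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420'
    bash: connect: Connection refused
    bash: /dev/tcp/10.0.0.2/4420: Connection refused

Once the listener comes up (see the nvmf_tcp_listen NOTICE below), the very next reset attempt succeeds.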
00:28:05.379 10:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.379 10:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:05.379 10:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.379 10:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:05.379 10:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.379 [2024-12-11 10:06:14.876187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.379 10:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:05.379 [2024-12-11 10:06:14.876619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.379 [2024-12-11 10:06:14.876636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb20 with addr=10.0.0.2, port=4420 00:28:05.379 [2024-12-11 10:06:14.876644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb20 is same with the state(6) to be set 00:28:05.379 10:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.379 [2024-12-11 10:06:14.876817] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb20 (9): Bad file descriptor 00:28:05.379 10:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:05.379 [2024-12-11 10:06:14.876991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.379 [2024-12-11 10:06:14.877000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.379 [2024-12-11 10:06:14.877007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.379 [2024-12-11 10:06:14.877013] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:05.379 [2024-12-11 10:06:14.879361] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:05.379 10:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.379 10:06:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 243487 00:28:05.379 [2024-12-11 10:06:14.889205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.379 [2024-12-11 10:06:14.916226] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
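Untangled from the interleaved reconnect errors, the target bring-up in the trace above reduces to five RPCs. Shown here as they would be issued with SPDK's scripts/rpc.py against the running nvmf_tgt (the harness's rpc_cmd wrapper forwards to the same RPC names; this is a recap of the exact commands from host/bdevperf.sh@17-21, not new configuration):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                          # flags exactly as captured above
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                             # 64 MiB RAM-backed bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0         # expose the bdev as a namespace
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # start listening

With the listener up (the tcp.c:1099 NOTICE), the pending reset finally reports `Resetting controller successful.` and the bdevperf run below can make progress.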
00:28:06.755 4921.67 IOPS, 19.23 MiB/s [2024-12-11T09:06:17.265Z] 5832.57 IOPS, 22.78 MiB/s [2024-12-11T09:06:18.201Z] 6527.50 IOPS, 25.50 MiB/s [2024-12-11T09:06:19.136Z] 7085.56 IOPS, 27.68 MiB/s [2024-12-11T09:06:20.072Z] 7501.90 IOPS, 29.30 MiB/s [2024-12-11T09:06:21.008Z] 7844.91 IOPS, 30.64 MiB/s [2024-12-11T09:06:22.385Z] 8137.00 IOPS, 31.79 MiB/s [2024-12-11T09:06:23.321Z] 8377.23 IOPS, 32.72 MiB/s [2024-12-11T09:06:24.257Z] 8583.07 IOPS, 33.53 MiB/s
00:28:14.683 Latency(us)
00:28:14.683 [2024-12-11T09:06:24.258Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:14.683 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:14.683 Verification LBA range: start 0x0 length 0x4000
00:28:14.683 Nvme1n1 : 15.00 8770.28 34.26 11137.87 0.00 6409.87 577.34 14230.67
00:28:14.683 [2024-12-11T09:06:24.258Z] ===================================================================================================================
00:28:14.683 [2024-12-11T09:06:24.258Z] Total : 8770.28 34.26 11137.87 0.00 6409.87 577.34 14230.67
00:28:14.683 10:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:28:14.683 10:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:14.683 10:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:14.683 10:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:14.683 10:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:14.683 10:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:28:14.683 10:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:28:14.683 10:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:14.683 10:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync
00:28:14.683 10:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:14.683 10:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e
00:28:14.683 10:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:14.683 10:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:28:14.683 rmmod nvme_tcp
00:28:14.683 rmmod nvme_fabrics
00:28:14.683 rmmod nvme_keyring
00:28:14.683 10:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:14.683 10:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e
00:28:14.683 10:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0
00:28:14.683 10:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 244835 ']'
00:28:14.683 10:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 244835
00:28:14.683 10:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 244835 ']'
00:28:14.683 10:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 244835
00:28:14.683 10:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname
00:28:14.683 10:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:14.683 10:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 244835
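A quick consistency check on the table above: bdevperf reports throughput in both IOPS and MiB/s, and with the 4096-byte IO size from the Job line the two columns agree, since 8770.28 IOPS x 4096 B / 2^20 = 34.26 MiB/s. One line of shell verifies it:

    awk 'BEGIN { printf "%.2f MiB/s\n", 8770.28 * 4096 / 1048576 }'    # -> 34.26 MiB/s

The large Fail/s figure (11137.87) counts the I/O that failed while the controller was repeatedly being reset during the run, consistent with the reset storm logged above.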
00:28:14.942 10:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:14.942 10:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:14.942 10:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 244835' 00:28:14.942 killing process with pid 244835 00:28:14.942 10:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 244835 00:28:14.942 10:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 244835 00:28:14.942 10:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:14.942 10:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:14.942 10:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:14.942 10:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:28:14.942 10:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:28:14.942 10:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:14.942 10:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:28:14.942 10:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:14.942 10:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:14.942 10:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:14.942 10:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:14.942 10:06:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:17.571 00:28:17.571 real 0m26.884s 00:28:17.571 user 1m0.818s 00:28:17.571 sys 0m7.472s 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:17.571 ************************************ 00:28:17.571 END TEST nvmf_bdevperf 00:28:17.571 ************************************ 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.571 ************************************ 00:28:17.571 START TEST nvmf_target_disconnect 00:28:17.571 ************************************ 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:17.571 * Looking for test storage... 
00:28:17.571 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:17.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.571 --rc genhtml_branch_coverage=1 00:28:17.571 --rc genhtml_function_coverage=1 00:28:17.571 --rc genhtml_legend=1 00:28:17.571 --rc geninfo_all_blocks=1 00:28:17.571 --rc geninfo_unexecuted_blocks=1 00:28:17.571 00:28:17.571 ' 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:17.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.571 --rc genhtml_branch_coverage=1 00:28:17.571 --rc genhtml_function_coverage=1 00:28:17.571 --rc genhtml_legend=1 00:28:17.571 --rc geninfo_all_blocks=1 00:28:17.571 --rc geninfo_unexecuted_blocks=1 00:28:17.571 00:28:17.571 ' 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:17.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.571 --rc genhtml_branch_coverage=1 00:28:17.571 --rc genhtml_function_coverage=1 00:28:17.571 --rc genhtml_legend=1 00:28:17.571 --rc geninfo_all_blocks=1 00:28:17.571 --rc geninfo_unexecuted_blocks=1 00:28:17.571 00:28:17.571 ' 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:17.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.571 --rc genhtml_branch_coverage=1 00:28:17.571 --rc genhtml_function_coverage=1 00:28:17.571 --rc genhtml_legend=1 00:28:17.571 --rc geninfo_all_blocks=1 00:28:17.571 --rc geninfo_unexecuted_blocks=1 00:28:17.571 00:28:17.571 ' 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.571 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.572 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:28:17.572 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.572 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:28:17.572 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:17.572 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:17.572 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:17.572 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:17.572 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:17.572 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:17.572 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:17.572 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:17.572 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:17.572 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:17.572 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:17.572 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:28:17.572 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:28:17.572 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:28:17.572 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:17.572 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:17.572 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:17.572 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:17.572 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:17.572 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:17.572 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:17.572 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:17.572 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:17.572 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:17.572 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:28:17.572 10:06:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:24.142 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:24.142 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:24.142 Found net devices under 0000:af:00.0: cvl_0_0 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:24.142 Found net devices under 0000:af:00.1: cvl_0_1 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:24.142 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:24.142 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:24.142 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.294 ms 00:28:24.142 00:28:24.142 --- 10.0.0.2 ping statistics --- 00:28:24.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:24.143 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:28:24.143 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:24.143 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:24.143 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:28:24.143 00:28:24.143 --- 10.0.0.1 ping statistics --- 00:28:24.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:24.143 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:28:24.143 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:24.143 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:28:24.143 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:24.143 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:24.143 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:24.143 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:24.143 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:24.143 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:24.143 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:24.143 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:28:24.143 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:24.143 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:24.143 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:24.143 ************************************ 00:28:24.143 START TEST nvmf_target_disconnect_tc1 00:28:24.143 ************************************ 00:28:24.143 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:28:24.143 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:24.143 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:28:24.143 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:24.143 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:24.143 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:24.143 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:24.143 10:06:33 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:24.143 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:24.143 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:24.143 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:24.143 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:28:24.143 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:24.402 [2024-12-11 10:06:33.716678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.402 [2024-12-11 10:06:33.716731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1095410 with addr=10.0.0.2, port=4420 00:28:24.402 [2024-12-11 10:06:33.716751] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:24.402 [2024-12-11 10:06:33.716766] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:24.402 [2024-12-11 10:06:33.716772] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:28:24.402 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:28:24.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:28:24.402 Initializing NVMe Controllers 00:28:24.402 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:28:24.402 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:24.402 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:24.402 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:24.402 00:28:24.402 real 0m0.127s 00:28:24.402 user 0m0.050s 00:28:24.402 sys 0m0.077s 00:28:24.402 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:24.402 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:24.402 ************************************ 00:28:24.402 END TEST nvmf_target_disconnect_tc1 00:28:24.402 ************************************ 00:28:24.402 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:28:24.402 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:24.402 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:28:24.402 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:24.402 ************************************ 00:28:24.402 START TEST nvmf_target_disconnect_tc2 00:28:24.402 ************************************ 00:28:24.402 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:28:24.402 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:28:24.402 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:24.402 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:24.402 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:24.402 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:24.402 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=250287 00:28:24.402 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 250287 00:28:24.402 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:24.402 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 250287 ']' 00:28:24.402 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:24.402 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:24.402 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:24.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:24.402 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:24.402 10:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:24.402 [2024-12-11 10:06:33.857177] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:28:24.402 [2024-12-11 10:06:33.857226] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:24.402 [2024-12-11 10:06:33.925969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:24.402 [2024-12-11 10:06:33.967353] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:24.402 [2024-12-11 10:06:33.967391] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:24.402 [2024-12-11 10:06:33.967398] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:24.402 [2024-12-11 10:06:33.967404] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:24.402 [2024-12-11 10:06:33.967409] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:24.402 [2024-12-11 10:06:33.968877] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:28:24.402 [2024-12-11 10:06:33.968985] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:28:24.402 [2024-12-11 10:06:33.969092] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:28:24.402 [2024-12-11 10:06:33.969093] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:28:24.661 10:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:24.661 10:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:24.661 10:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:24.661 10:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:24.661 10:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:24.661 10:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:24.661 10:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:24.661 10:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.661 10:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:24.661 Malloc0 00:28:24.661 10:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.661 10:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:24.661 10:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.661 10:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:24.661 [2024-12-11 10:06:34.151764] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:24.661 10:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.661 10:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:24.661 10:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.661 10:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:24.661 10:06:34 
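The four "Reactor started" notices follow directly from the `-m 0xF0` core mask passed to nvmf_tgt: 0xF0 is binary 11110000, so bits 4-7 are set and SPDK starts one reactor on each of cores 4, 5, 6 and 7. (The reconnect app launched below uses `-c 0xF`, cores 0-3, so host and target do not contend for the same cores.) A quick way to expand such a mask in bash:

    m=0xF0; for i in {0..7}; do (( (m >> i) & 1 )) && printf 'core %d ' "$i"; done; echo
    # -> core 4 core 5 core 6 core 7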
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.661 10:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:24.661 10:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.661 10:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:24.661 10:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.661 10:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:24.661 10:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.661 10:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:24.661 [2024-12-11 10:06:34.180766] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:24.661 10:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.661 10:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:24.661 10:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.661 10:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:24.661 10:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.661 10:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=250318 00:28:24.661 10:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:28:24.661 10:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:27.222 10:06:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 250287 00:28:27.222 10:06:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Read completed with error 
(sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Write completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Write completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Write completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Write completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Write completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Write completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Write completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Write completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Write completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Write completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Write completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Write completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Write completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Write completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 [2024-12-11 10:06:36.213575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:27.222 starting I/O failed 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Write completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Write completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Write completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Write 
completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Write completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Write completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Write completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Write completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Write completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Write completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Write completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Write completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 [2024-12-11 10:06:36.213778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Write completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Write completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Write completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 
00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Write completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Write completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Write completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Write completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Write completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Write completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 Read completed with error (sct=0, sc=8) 00:28:27.222 starting I/O failed 00:28:27.222 [2024-12-11 10:06:36.213965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.222 [2024-12-11 10:06:36.214136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.222 [2024-12-11 10:06:36.214159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.222 qpair failed and we were unable to recover it. 00:28:27.222 [2024-12-11 10:06:36.214253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.222 [2024-12-11 10:06:36.214263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.222 qpair failed and we were unable to recover it. 00:28:27.222 [2024-12-11 10:06:36.214468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.222 [2024-12-11 10:06:36.214479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.222 qpair failed and we were unable to recover it. 00:28:27.222 [2024-12-11 10:06:36.214677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.222 [2024-12-11 10:06:36.214695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.222 qpair failed and we were unable to recover it. 00:28:27.223 [2024-12-11 10:06:36.214824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.223 [2024-12-11 10:06:36.214834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.223 qpair failed and we were unable to recover it. 00:28:27.223 [2024-12-11 10:06:36.214898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.223 [2024-12-11 10:06:36.214909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.223 qpair failed and we were unable to recover it. 
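Each burst of 32 "completed with error" entries above is the host failing back every outstanding I/O on one queue pair, and each burst is closed by a "CQ transport error -6" marker naming the qpair (ids 4, 3 and 2 so far). The sct/sc pair in every entry is an NVMe completion status; decoding it per the NVMe base specification:

  # NVMe completion status seen in every aborted entry above:
  #   sct=0 -> Status Code Type 0x0, Generic Command Status
  #   sc=8  -> Status Code 0x08, Command Aborted due to SQ Deletion
  # i.e. the host driver aborted the I/O because its submission queue was
  # torn down when the TCP connection to the killed target went away.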
00:28:27.223 [2024-12-11 10:06:36.215071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.223 [2024-12-11 10:06:36.215082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.223 qpair failed and we were unable to recover it. 00:28:27.223 [2024-12-11 10:06:36.215184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.223 [2024-12-11 10:06:36.215213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.223 qpair failed and we were unable to recover it. 00:28:27.223 [2024-12-11 10:06:36.215365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.223 [2024-12-11 10:06:36.215396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.223 qpair failed and we were unable to recover it. 00:28:27.223 [2024-12-11 10:06:36.215523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.223 [2024-12-11 10:06:36.215555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.223 qpair failed and we were unable to recover it. 00:28:27.223 [2024-12-11 10:06:36.215801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.223 [2024-12-11 10:06:36.215834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.223 qpair failed and we were unable to recover it. 00:28:27.223 [2024-12-11 10:06:36.215973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.223 [2024-12-11 10:06:36.216005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.223 qpair failed and we were unable to recover it. 00:28:27.223 [2024-12-11 10:06:36.216124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.223 [2024-12-11 10:06:36.216156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.223 qpair failed and we were unable to recover it. 00:28:27.223 [2024-12-11 10:06:36.216298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.223 [2024-12-11 10:06:36.216331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.223 qpair failed and we were unable to recover it. 00:28:27.223 [2024-12-11 10:06:36.216451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.223 [2024-12-11 10:06:36.216463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.223 qpair failed and we were unable to recover it. 00:28:27.223 [2024-12-11 10:06:36.216530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.223 [2024-12-11 10:06:36.216541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.223 qpair failed and we were unable to recover it. 
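Both the failed I/O and the connect retries that follow come from the build/examples/reconnect run launched in the trace further up, just before the target was killed with kill -9. A hedged reading of its flags, following the conventions of SPDK's perf-style example tools:

  # -q 32      queue depth per qpair       -o 4096  I/O size in bytes
  # -w randrw  random mixed read/write     -M 50    50% of I/Os are reads
  # -t 10      run time in seconds         -c 0xF   core mask (cores 0-3)
  # -r         transport ID of the listener created above
  ./spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'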
00:28:27.223 [2024-12-11 10:06:36.216650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.223 [2024-12-11 10:06:36.216681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.223 qpair failed and we were unable to recover it. 00:28:27.223 [2024-12-11 10:06:36.216802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.223 [2024-12-11 10:06:36.216834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.223 qpair failed and we were unable to recover it. 00:28:27.223 [2024-12-11 10:06:36.217138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.223 [2024-12-11 10:06:36.217171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.223 qpair failed and we were unable to recover it. 00:28:27.223 [2024-12-11 10:06:36.217283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.223 [2024-12-11 10:06:36.217295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.223 qpair failed and we were unable to recover it. 00:28:27.223 [2024-12-11 10:06:36.217362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.223 [2024-12-11 10:06:36.217373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.223 qpair failed and we were unable to recover it. 00:28:27.223 [2024-12-11 10:06:36.217445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.223 [2024-12-11 10:06:36.217456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.223 qpair failed and we were unable to recover it. 00:28:27.223 [2024-12-11 10:06:36.217669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.223 [2024-12-11 10:06:36.217682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.223 qpair failed and we were unable to recover it. 00:28:27.223 [2024-12-11 10:06:36.217910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.223 [2024-12-11 10:06:36.217943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.223 qpair failed and we were unable to recover it. 00:28:27.223 [2024-12-11 10:06:36.218086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.223 [2024-12-11 10:06:36.218117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.223 qpair failed and we were unable to recover it. 00:28:27.223 [2024-12-11 10:06:36.218246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.223 [2024-12-11 10:06:36.218279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.223 qpair failed and we were unable to recover it. 
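Both numeric codes quoted in these errors are plain Linux errnos: the connect() failures report errno 111 and the CQ markers quote -6. A quick decode (any Python 3 will do):

  python3 -c 'import errno, os; [print(e, errno.errorcode[e], "-", os.strerror(e)) for e in (111, 6)]'
  # 111 ECONNREFUSED - Connection refused   (nothing listens on 10.0.0.2:4420
  #                                          after the target was SIGKILLed)
  # 6 ENXIO - No such device or address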
00:28:27.223 [2024-12-11 10:06:36.218408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.223 [2024-12-11 10:06:36.218441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.223 qpair failed and we were unable to recover it. 00:28:27.223 [2024-12-11 10:06:36.218547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.223 [2024-12-11 10:06:36.218578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.223 qpair failed and we were unable to recover it. 00:28:27.223 [2024-12-11 10:06:36.218771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.223 [2024-12-11 10:06:36.218804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.223 qpair failed and we were unable to recover it. 00:28:27.223 [2024-12-11 10:06:36.218987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.223 [2024-12-11 10:06:36.219020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.223 qpair failed and we were unable to recover it. 00:28:27.223 [2024-12-11 10:06:36.219196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.223 [2024-12-11 10:06:36.219263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.223 qpair failed and we were unable to recover it. 00:28:27.223 [2024-12-11 10:06:36.219411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.223 [2024-12-11 10:06:36.219446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.223 qpair failed and we were unable to recover it. 00:28:27.223 [2024-12-11 10:06:36.219662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.223 [2024-12-11 10:06:36.219674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.223 qpair failed and we were unable to recover it. 00:28:27.223 [2024-12-11 10:06:36.219828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.223 [2024-12-11 10:06:36.219861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.223 qpair failed and we were unable to recover it. 00:28:27.223 [2024-12-11 10:06:36.219993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.223 [2024-12-11 10:06:36.220027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.223 qpair failed and we were unable to recover it. 00:28:27.223 [2024-12-11 10:06:36.220212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.223 [2024-12-11 10:06:36.220255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.223 qpair failed and we were unable to recover it. 
00:28:27.223 [2024-12-11 10:06:36.220440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.223 [2024-12-11 10:06:36.220474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.223 qpair failed and we were unable to recover it. 00:28:27.223 [2024-12-11 10:06:36.220647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.223 [2024-12-11 10:06:36.220680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.223 qpair failed and we were unable to recover it. 00:28:27.223 [2024-12-11 10:06:36.220798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.223 [2024-12-11 10:06:36.220830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.223 qpair failed and we were unable to recover it. 00:28:27.223 [2024-12-11 10:06:36.221007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.223 [2024-12-11 10:06:36.221040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.223 qpair failed and we were unable to recover it. 00:28:27.223 [2024-12-11 10:06:36.221239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.223 [2024-12-11 10:06:36.221274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.223 qpair failed and we were unable to recover it. 00:28:27.223 [2024-12-11 10:06:36.221450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.223 [2024-12-11 10:06:36.221483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.223 qpair failed and we were unable to recover it. 00:28:27.223 [2024-12-11 10:06:36.221675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.223 [2024-12-11 10:06:36.221707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.224 qpair failed and we were unable to recover it. 00:28:27.224 [2024-12-11 10:06:36.221825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.224 [2024-12-11 10:06:36.221865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.224 qpair failed and we were unable to recover it. 00:28:27.224 [2024-12-11 10:06:36.222058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.224 [2024-12-11 10:06:36.222091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.224 qpair failed and we were unable to recover it. 00:28:27.224 [2024-12-11 10:06:36.222229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.224 [2024-12-11 10:06:36.222265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.224 qpair failed and we were unable to recover it. 
00:28:27.224 [2024-12-11 10:06:36.222478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.224 [2024-12-11 10:06:36.222511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.224 qpair failed and we were unable to recover it. 00:28:27.224 [2024-12-11 10:06:36.222756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.224 [2024-12-11 10:06:36.222788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.224 qpair failed and we were unable to recover it. 00:28:27.224 [2024-12-11 10:06:36.222973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.224 [2024-12-11 10:06:36.223006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.224 qpair failed and we were unable to recover it. 00:28:27.224 [2024-12-11 10:06:36.223192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.224 [2024-12-11 10:06:36.223235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.224 qpair failed and we were unable to recover it. 00:28:27.224 [2024-12-11 10:06:36.223413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.224 [2024-12-11 10:06:36.223444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.224 qpair failed and we were unable to recover it. 00:28:27.224 [2024-12-11 10:06:36.223550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.224 [2024-12-11 10:06:36.223561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.224 qpair failed and we were unable to recover it. 00:28:27.224 [2024-12-11 10:06:36.223690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.224 [2024-12-11 10:06:36.223712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.224 qpair failed and we were unable to recover it. 00:28:27.224 [2024-12-11 10:06:36.223792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.224 [2024-12-11 10:06:36.223803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.224 qpair failed and we were unable to recover it. 00:28:27.224 [2024-12-11 10:06:36.223894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.224 [2024-12-11 10:06:36.223904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.224 qpair failed and we were unable to recover it. 00:28:27.224 [2024-12-11 10:06:36.224044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.224 [2024-12-11 10:06:36.224077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.224 qpair failed and we were unable to recover it. 
00:28:27.224 [2024-12-11 10:06:36.224255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.224 [2024-12-11 10:06:36.224287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.224 qpair failed and we were unable to recover it. 00:28:27.224 [2024-12-11 10:06:36.224412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.224 [2024-12-11 10:06:36.224444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.224 qpair failed and we were unable to recover it. 00:28:27.224 [2024-12-11 10:06:36.224564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.224 [2024-12-11 10:06:36.224579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.224 qpair failed and we were unable to recover it. 00:28:27.224 [2024-12-11 10:06:36.224653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.224 [2024-12-11 10:06:36.224667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.224 qpair failed and we were unable to recover it. 00:28:27.224 [2024-12-11 10:06:36.224814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.224 [2024-12-11 10:06:36.224846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.224 qpair failed and we were unable to recover it. 00:28:27.224 [2024-12-11 10:06:36.224966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.224 [2024-12-11 10:06:36.224997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.224 qpair failed and we were unable to recover it. 00:28:27.224 [2024-12-11 10:06:36.225175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.224 [2024-12-11 10:06:36.225207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.224 qpair failed and we were unable to recover it. 00:28:27.224 [2024-12-11 10:06:36.225380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.224 [2024-12-11 10:06:36.225396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.224 qpair failed and we were unable to recover it. 00:28:27.224 [2024-12-11 10:06:36.225469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.224 [2024-12-11 10:06:36.225482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.224 qpair failed and we were unable to recover it. 00:28:27.224 [2024-12-11 10:06:36.225697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.224 [2024-12-11 10:06:36.225712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.224 qpair failed and we were unable to recover it. 
00:28:27.224 [2024-12-11 10:06:36.225878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.224 [2024-12-11 10:06:36.225911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.224 qpair failed and we were unable to recover it. 00:28:27.224 [2024-12-11 10:06:36.226081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.224 [2024-12-11 10:06:36.226114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.224 qpair failed and we were unable to recover it. 00:28:27.224 [2024-12-11 10:06:36.226357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.224 [2024-12-11 10:06:36.226391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.224 qpair failed and we were unable to recover it. 00:28:27.224 [2024-12-11 10:06:36.226583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.224 [2024-12-11 10:06:36.226597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.224 qpair failed and we were unable to recover it. 00:28:27.224 Read completed with error (sct=0, sc=8) 00:28:27.224 starting I/O failed 00:28:27.224 Read completed with error (sct=0, sc=8) 00:28:27.224 starting I/O failed 00:28:27.224 Read completed with error (sct=0, sc=8) 00:28:27.224 starting I/O failed 00:28:27.224 Read completed with error (sct=0, sc=8) 00:28:27.224 starting I/O failed 00:28:27.224 Read completed with error (sct=0, sc=8) 00:28:27.224 starting I/O failed 00:28:27.224 Read completed with error (sct=0, sc=8) 00:28:27.224 starting I/O failed 00:28:27.224 Read completed with error (sct=0, sc=8) 00:28:27.224 starting I/O failed 00:28:27.224 Read completed with error (sct=0, sc=8) 00:28:27.224 starting I/O failed 00:28:27.224 Read completed with error (sct=0, sc=8) 00:28:27.224 starting I/O failed 00:28:27.224 Read completed with error (sct=0, sc=8) 00:28:27.224 starting I/O failed 00:28:27.224 Read completed with error (sct=0, sc=8) 00:28:27.224 starting I/O failed 00:28:27.224 Read completed with error (sct=0, sc=8) 00:28:27.224 starting I/O failed 00:28:27.224 Read completed with error (sct=0, sc=8) 00:28:27.224 starting I/O failed 00:28:27.224 Read completed with error (sct=0, sc=8) 00:28:27.224 starting I/O failed 00:28:27.224 Read completed with error (sct=0, sc=8) 00:28:27.224 starting I/O failed 00:28:27.224 Write completed with error (sct=0, sc=8) 00:28:27.224 starting I/O failed 00:28:27.224 Read completed with error (sct=0, sc=8) 00:28:27.224 starting I/O failed 00:28:27.224 Write completed with error (sct=0, sc=8) 00:28:27.224 starting I/O failed 00:28:27.224 Write completed with error (sct=0, sc=8) 00:28:27.224 starting I/O failed 00:28:27.224 Write completed with error (sct=0, sc=8) 00:28:27.224 starting I/O failed 00:28:27.224 Read completed with error (sct=0, sc=8) 00:28:27.224 starting I/O failed 00:28:27.224 Read completed with error (sct=0, sc=8) 00:28:27.224 starting I/O failed 00:28:27.224 Read completed with error (sct=0, sc=8) 00:28:27.224 starting I/O failed 00:28:27.224 Read completed with error (sct=0, sc=8) 00:28:27.224 starting I/O failed 00:28:27.224 Write completed with error (sct=0, sc=8) 00:28:27.224 starting I/O failed 00:28:27.224 
Read completed with error (sct=0, sc=8) 00:28:27.224 starting I/O failed 00:28:27.224 Read completed with error (sct=0, sc=8) 00:28:27.224 starting I/O failed 00:28:27.224 Read completed with error (sct=0, sc=8) 00:28:27.224 starting I/O failed 00:28:27.224 Read completed with error (sct=0, sc=8) 00:28:27.224 starting I/O failed 00:28:27.224 Read completed with error (sct=0, sc=8) 00:28:27.224 starting I/O failed 00:28:27.224 Write completed with error (sct=0, sc=8) 00:28:27.224 starting I/O failed 00:28:27.224 Write completed with error (sct=0, sc=8) 00:28:27.224 starting I/O failed 00:28:27.224 [2024-12-11 10:06:36.226858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:27.224 [2024-12-11 10:06:36.226937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.225 [2024-12-11 10:06:36.226963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.225 qpair failed and we were unable to recover it. 00:28:27.225 [2024-12-11 10:06:36.227102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.225 [2024-12-11 10:06:36.227118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.225 qpair failed and we were unable to recover it. 00:28:27.225 [2024-12-11 10:06:36.227278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.225 [2024-12-11 10:06:36.227312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.225 qpair failed and we were unable to recover it. 00:28:27.225 [2024-12-11 10:06:36.227496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.225 [2024-12-11 10:06:36.227528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.225 qpair failed and we were unable to recover it. 00:28:27.225 [2024-12-11 10:06:36.227713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.225 [2024-12-11 10:06:36.227745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.225 qpair failed and we were unable to recover it. 00:28:27.225 [2024-12-11 10:06:36.227862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.225 [2024-12-11 10:06:36.227893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.225 qpair failed and we were unable to recover it. 00:28:27.225 [2024-12-11 10:06:36.228069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.225 [2024-12-11 10:06:36.228101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.225 qpair failed and we were unable to recover it. 00:28:27.225 [2024-12-11 10:06:36.228286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.225 [2024-12-11 10:06:36.228320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.225 qpair failed and we were unable to recover it. 
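This fourth burst ends with a CQ transport error on qpair id 1, so all four of the example's I/O qpairs (plausibly ids 1 through 4, one per core under -c 0xF, with id 0 conventionally the admin queue) have now been torn down; everything after this point is the reconnect path retrying. The tqpair=0x... values are just addresses of qpair objects inside the host process, so they distinguish qpairs but carry no other meaning. One way to see how many distinct qpair objects are cycling, over a saved copy of this console output (build.log is a hypothetical name):

  grep -o 'tqpair=0x[0-9a-f]*' build.log | sort -u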
00:28:27.225 [2024-12-11 10:06:36.228439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.225 [2024-12-11 10:06:36.228454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.225 qpair failed and we were unable to recover it. 00:28:27.225 [2024-12-11 10:06:36.228555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.225 [2024-12-11 10:06:36.228570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.225 qpair failed and we were unable to recover it. 00:28:27.225 [2024-12-11 10:06:36.228800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.225 [2024-12-11 10:06:36.228833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.225 qpair failed and we were unable to recover it. 00:28:27.225 [2024-12-11 10:06:36.228964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.225 [2024-12-11 10:06:36.228995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.225 qpair failed and we were unable to recover it. 00:28:27.225 [2024-12-11 10:06:36.229240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.225 [2024-12-11 10:06:36.229274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.225 qpair failed and we were unable to recover it. 00:28:27.225 [2024-12-11 10:06:36.229387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.225 [2024-12-11 10:06:36.229420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.225 qpair failed and we were unable to recover it. 00:28:27.225 [2024-12-11 10:06:36.229538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.225 [2024-12-11 10:06:36.229570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.225 qpair failed and we were unable to recover it. 00:28:27.225 [2024-12-11 10:06:36.229810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.225 [2024-12-11 10:06:36.229842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.225 qpair failed and we were unable to recover it. 00:28:27.225 [2024-12-11 10:06:36.230041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.225 [2024-12-11 10:06:36.230073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.225 qpair failed and we were unable to recover it. 00:28:27.225 [2024-12-11 10:06:36.230252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.225 [2024-12-11 10:06:36.230289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.225 qpair failed and we were unable to recover it. 
00:28:27.225 [2024-12-11 10:06:36.230455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.225 [2024-12-11 10:06:36.230470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.225 qpair failed and we were unable to recover it. 00:28:27.225 [2024-12-11 10:06:36.230633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.225 [2024-12-11 10:06:36.230648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.225 qpair failed and we were unable to recover it. 00:28:27.225 [2024-12-11 10:06:36.230730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.225 [2024-12-11 10:06:36.230743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.225 qpair failed and we were unable to recover it. 00:28:27.225 [2024-12-11 10:06:36.230891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.225 [2024-12-11 10:06:36.230906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.225 qpair failed and we were unable to recover it. 00:28:27.225 [2024-12-11 10:06:36.231080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.225 [2024-12-11 10:06:36.231095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.225 qpair failed and we were unable to recover it. 00:28:27.225 [2024-12-11 10:06:36.231161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.225 [2024-12-11 10:06:36.231174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.225 qpair failed and we were unable to recover it. 00:28:27.225 [2024-12-11 10:06:36.231257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.225 [2024-12-11 10:06:36.231272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.225 qpair failed and we were unable to recover it. 00:28:27.225 [2024-12-11 10:06:36.231417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.225 [2024-12-11 10:06:36.231432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.225 qpair failed and we were unable to recover it. 00:28:27.225 [2024-12-11 10:06:36.231507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.225 [2024-12-11 10:06:36.231521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.225 qpair failed and we were unable to recover it. 00:28:27.225 [2024-12-11 10:06:36.231656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.225 [2024-12-11 10:06:36.231671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.225 qpair failed and we were unable to recover it. 
00:28:27.225 [2024-12-11 10:06:36.231751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.225 [2024-12-11 10:06:36.231764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.225 qpair failed and we were unable to recover it. 00:28:27.225 [2024-12-11 10:06:36.231894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.225 [2024-12-11 10:06:36.231908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.225 qpair failed and we were unable to recover it. 00:28:27.225 [2024-12-11 10:06:36.232135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.225 [2024-12-11 10:06:36.232166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.225 qpair failed and we were unable to recover it. 00:28:27.225 [2024-12-11 10:06:36.232293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.225 [2024-12-11 10:06:36.232326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.225 qpair failed and we were unable to recover it. 00:28:27.225 [2024-12-11 10:06:36.232507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.225 [2024-12-11 10:06:36.232539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.225 qpair failed and we were unable to recover it. 00:28:27.225 [2024-12-11 10:06:36.232662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.225 [2024-12-11 10:06:36.232699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.225 qpair failed and we were unable to recover it. 00:28:27.225 [2024-12-11 10:06:36.232813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.225 [2024-12-11 10:06:36.232845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.225 qpair failed and we were unable to recover it. 00:28:27.225 [2024-12-11 10:06:36.232984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.225 [2024-12-11 10:06:36.233015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.225 qpair failed and we were unable to recover it. 00:28:27.225 [2024-12-11 10:06:36.233129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.225 [2024-12-11 10:06:36.233161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.225 qpair failed and we were unable to recover it. 00:28:27.225 [2024-12-11 10:06:36.233299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.225 [2024-12-11 10:06:36.233332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.225 qpair failed and we were unable to recover it. 
00:28:27.225 [2024-12-11 10:06:36.233531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.225 [2024-12-11 10:06:36.233563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.225 qpair failed and we were unable to recover it. 00:28:27.225 [2024-12-11 10:06:36.233749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.226 [2024-12-11 10:06:36.233782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.226 qpair failed and we were unable to recover it. 00:28:27.226 [2024-12-11 10:06:36.234043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.226 [2024-12-11 10:06:36.234077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.226 qpair failed and we were unable to recover it. 00:28:27.226 [2024-12-11 10:06:36.234323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.226 [2024-12-11 10:06:36.234358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.226 qpair failed and we were unable to recover it. 00:28:27.226 [2024-12-11 10:06:36.234471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.226 [2024-12-11 10:06:36.234503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.226 qpair failed and we were unable to recover it. 00:28:27.226 [2024-12-11 10:06:36.234621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.226 [2024-12-11 10:06:36.234653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.226 qpair failed and we were unable to recover it. 00:28:27.226 [2024-12-11 10:06:36.234759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.226 [2024-12-11 10:06:36.234792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.226 qpair failed and we were unable to recover it. 00:28:27.226 [2024-12-11 10:06:36.234967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.226 [2024-12-11 10:06:36.235000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.226 qpair failed and we were unable to recover it. 00:28:27.226 [2024-12-11 10:06:36.235183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.226 [2024-12-11 10:06:36.235225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.226 qpair failed and we were unable to recover it. 00:28:27.226 [2024-12-11 10:06:36.235474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.226 [2024-12-11 10:06:36.235507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.226 qpair failed and we were unable to recover it. 
00:28:27.226 [2024-12-11 10:06:36.235769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.226 [2024-12-11 10:06:36.235801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.226 qpair failed and we were unable to recover it. 00:28:27.226 [2024-12-11 10:06:36.235922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.226 [2024-12-11 10:06:36.235953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.226 qpair failed and we were unable to recover it. 00:28:27.226 [2024-12-11 10:06:36.236066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.226 [2024-12-11 10:06:36.236097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.226 qpair failed and we were unable to recover it. 00:28:27.226 [2024-12-11 10:06:36.236290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.226 [2024-12-11 10:06:36.236323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.226 qpair failed and we were unable to recover it. 00:28:27.226 [2024-12-11 10:06:36.236506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.226 [2024-12-11 10:06:36.236539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.226 qpair failed and we were unable to recover it. 00:28:27.226 [2024-12-11 10:06:36.236666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.226 [2024-12-11 10:06:36.236698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.226 qpair failed and we were unable to recover it. 00:28:27.226 [2024-12-11 10:06:36.236812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.226 [2024-12-11 10:06:36.236843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.226 qpair failed and we were unable to recover it. 00:28:27.226 [2024-12-11 10:06:36.237020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.226 [2024-12-11 10:06:36.237053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.226 qpair failed and we were unable to recover it. 00:28:27.226 [2024-12-11 10:06:36.237292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.226 [2024-12-11 10:06:36.237325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.226 qpair failed and we were unable to recover it. 00:28:27.226 [2024-12-11 10:06:36.237515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.226 [2024-12-11 10:06:36.237548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.226 qpair failed and we were unable to recover it. 
00:28:27.226 [2024-12-11 10:06:36.237764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.226 [2024-12-11 10:06:36.237797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.226 qpair failed and we were unable to recover it. 00:28:27.226 [2024-12-11 10:06:36.238057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.226 [2024-12-11 10:06:36.238090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.226 qpair failed and we were unable to recover it. 00:28:27.226 [2024-12-11 10:06:36.238231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.226 [2024-12-11 10:06:36.238264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.226 qpair failed and we were unable to recover it. 00:28:27.226 [2024-12-11 10:06:36.238450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.226 [2024-12-11 10:06:36.238482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.226 qpair failed and we were unable to recover it. 00:28:27.226 [2024-12-11 10:06:36.238671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.226 [2024-12-11 10:06:36.238705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.226 qpair failed and we were unable to recover it. 00:28:27.226 [2024-12-11 10:06:36.238943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.226 [2024-12-11 10:06:36.238976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.226 qpair failed and we were unable to recover it. 00:28:27.226 [2024-12-11 10:06:36.239097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.226 [2024-12-11 10:06:36.239130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.226 qpair failed and we were unable to recover it. 00:28:27.226 [2024-12-11 10:06:36.239308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.226 [2024-12-11 10:06:36.239342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.226 qpair failed and we were unable to recover it. 00:28:27.226 [2024-12-11 10:06:36.239598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.226 [2024-12-11 10:06:36.239631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.226 qpair failed and we were unable to recover it. 00:28:27.226 [2024-12-11 10:06:36.239887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.226 [2024-12-11 10:06:36.239920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.226 qpair failed and we were unable to recover it. 
00:28:27.226 [2024-12-11 10:06:36.240190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.226 [2024-12-11 10:06:36.240232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.226 qpair failed and we were unable to recover it.
00:28:27.226 [2024-12-11 10:06:36.240350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.226 [2024-12-11 10:06:36.240402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.226 qpair failed and we were unable to recover it.
00:28:27.226 [2024-12-11 10:06:36.240589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.227 [2024-12-11 10:06:36.240622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.227 qpair failed and we were unable to recover it.
00:28:27.227 [2024-12-11 10:06:36.240873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.227 [2024-12-11 10:06:36.240905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.227 qpair failed and we were unable to recover it.
00:28:27.227 [2024-12-11 10:06:36.241085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.227 [2024-12-11 10:06:36.241117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.227 qpair failed and we were unable to recover it.
00:28:27.227 [2024-12-11 10:06:36.241239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.227 [2024-12-11 10:06:36.241278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.227 qpair failed and we were unable to recover it.
00:28:27.227 [2024-12-11 10:06:36.241401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.227 [2024-12-11 10:06:36.241434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.227 qpair failed and we were unable to recover it.
00:28:27.227 [2024-12-11 10:06:36.241610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.227 [2024-12-11 10:06:36.241642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.227 qpair failed and we were unable to recover it.
00:28:27.227 [2024-12-11 10:06:36.241770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.227 [2024-12-11 10:06:36.241802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.227 qpair failed and we were unable to recover it.
00:28:27.227 [2024-12-11 10:06:36.241933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.227 [2024-12-11 10:06:36.241966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.227 qpair failed and we were unable to recover it.
00:28:27.227 [2024-12-11 10:06:36.242133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.227 [2024-12-11 10:06:36.242165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.227 qpair failed and we were unable to recover it.
00:28:27.227 [2024-12-11 10:06:36.242297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.227 [2024-12-11 10:06:36.242330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.227 qpair failed and we were unable to recover it.
00:28:27.227 [2024-12-11 10:06:36.242509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.227 [2024-12-11 10:06:36.242542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.227 qpair failed and we were unable to recover it.
00:28:27.227 [2024-12-11 10:06:36.242727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.227 [2024-12-11 10:06:36.242759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.227 qpair failed and we were unable to recover it.
00:28:27.227 [2024-12-11 10:06:36.242934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.227 [2024-12-11 10:06:36.242966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.227 qpair failed and we were unable to recover it.
00:28:27.227 [2024-12-11 10:06:36.243243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.227 [2024-12-11 10:06:36.243277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.227 qpair failed and we were unable to recover it.
00:28:27.227 [2024-12-11 10:06:36.243379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.227 [2024-12-11 10:06:36.243410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.227 qpair failed and we were unable to recover it.
00:28:27.227 [2024-12-11 10:06:36.243533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.227 [2024-12-11 10:06:36.243565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.227 qpair failed and we were unable to recover it.
00:28:27.227 [2024-12-11 10:06:36.243740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.227 [2024-12-11 10:06:36.243773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.227 qpair failed and we were unable to recover it.
00:28:27.227 [2024-12-11 10:06:36.244001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.227 [2024-12-11 10:06:36.244035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.227 qpair failed and we were unable to recover it.
00:28:27.227 [2024-12-11 10:06:36.244231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.227 [2024-12-11 10:06:36.244265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.227 qpair failed and we were unable to recover it.
00:28:27.227 [2024-12-11 10:06:36.244436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.227 [2024-12-11 10:06:36.244469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.227 qpair failed and we were unable to recover it.
00:28:27.227 [2024-12-11 10:06:36.244657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.227 [2024-12-11 10:06:36.244688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.227 qpair failed and we were unable to recover it.
00:28:27.227 [2024-12-11 10:06:36.244820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.227 [2024-12-11 10:06:36.244853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.227 qpair failed and we were unable to recover it.
00:28:27.227 [2024-12-11 10:06:36.244975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.227 [2024-12-11 10:06:36.245006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.227 qpair failed and we were unable to recover it.
00:28:27.227 [2024-12-11 10:06:36.245127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.227 [2024-12-11 10:06:36.245158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.227 qpair failed and we were unable to recover it.
00:28:27.227 [2024-12-11 10:06:36.245384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.227 [2024-12-11 10:06:36.245418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.227 qpair failed and we were unable to recover it.
00:28:27.227 [2024-12-11 10:06:36.245532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.227 [2024-12-11 10:06:36.245565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.227 qpair failed and we were unable to recover it.
00:28:27.227 [2024-12-11 10:06:36.245741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.227 [2024-12-11 10:06:36.245773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.227 qpair failed and we were unable to recover it.
00:28:27.227 [2024-12-11 10:06:36.245947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.227 [2024-12-11 10:06:36.245978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.227 qpair failed and we were unable to recover it.
00:28:27.227 [2024-12-11 10:06:36.246157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.227 [2024-12-11 10:06:36.246188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.227 qpair failed and we were unable to recover it.
00:28:27.227 [2024-12-11 10:06:36.246372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.227 [2024-12-11 10:06:36.246405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.227 qpair failed and we were unable to recover it.
00:28:27.227 [2024-12-11 10:06:36.246679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.227 [2024-12-11 10:06:36.246751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.227 qpair failed and we were unable to recover it.
00:28:27.227 [2024-12-11 10:06:36.246986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.227 [2024-12-11 10:06:36.247023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.227 qpair failed and we were unable to recover it.
00:28:27.227 [2024-12-11 10:06:36.247244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.227 [2024-12-11 10:06:36.247281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.227 qpair failed and we were unable to recover it.
00:28:27.227 [2024-12-11 10:06:36.247525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.227 [2024-12-11 10:06:36.247557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.227 qpair failed and we were unable to recover it.
00:28:27.227 [2024-12-11 10:06:36.247798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.227 [2024-12-11 10:06:36.247832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.227 qpair failed and we were unable to recover it.
00:28:27.227 [2024-12-11 10:06:36.248024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.227 [2024-12-11 10:06:36.248057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.227 qpair failed and we were unable to recover it.
00:28:27.228 [2024-12-11 10:06:36.248184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.228 [2024-12-11 10:06:36.248231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.228 qpair failed and we were unable to recover it.
00:28:27.228 [2024-12-11 10:06:36.248417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.228 [2024-12-11 10:06:36.248450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.228 qpair failed and we were unable to recover it.
00:28:27.228 [2024-12-11 10:06:36.248564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.228 [2024-12-11 10:06:36.248597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.228 qpair failed and we were unable to recover it.
00:28:27.228 [2024-12-11 10:06:36.248777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.228 [2024-12-11 10:06:36.248809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.228 qpair failed and we were unable to recover it.
00:28:27.228 [2024-12-11 10:06:36.248989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.228 [2024-12-11 10:06:36.249021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.228 qpair failed and we were unable to recover it.
00:28:27.228 [2024-12-11 10:06:36.249193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.228 [2024-12-11 10:06:36.249240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.228 qpair failed and we were unable to recover it.
00:28:27.228 [2024-12-11 10:06:36.249447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.228 [2024-12-11 10:06:36.249479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.228 qpair failed and we were unable to recover it.
00:28:27.228 [2024-12-11 10:06:36.249597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.228 [2024-12-11 10:06:36.249629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.228 qpair failed and we were unable to recover it.
00:28:27.228 [2024-12-11 10:06:36.249911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.228 [2024-12-11 10:06:36.249946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.228 qpair failed and we were unable to recover it.
00:28:27.228 [2024-12-11 10:06:36.250133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.228 [2024-12-11 10:06:36.250166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.228 qpair failed and we were unable to recover it.
00:28:27.228 [2024-12-11 10:06:36.250303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.228 [2024-12-11 10:06:36.250337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.228 qpair failed and we were unable to recover it.
00:28:27.228 [2024-12-11 10:06:36.250533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.228 [2024-12-11 10:06:36.250566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.228 qpair failed and we were unable to recover it.
00:28:27.228 [2024-12-11 10:06:36.250829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.228 [2024-12-11 10:06:36.250861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.228 qpair failed and we were unable to recover it.
00:28:27.228 [2024-12-11 10:06:36.251150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.228 [2024-12-11 10:06:36.251183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.228 qpair failed and we were unable to recover it.
00:28:27.228 [2024-12-11 10:06:36.251319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.228 [2024-12-11 10:06:36.251353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.228 qpair failed and we were unable to recover it.
00:28:27.228 [2024-12-11 10:06:36.251481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.228 [2024-12-11 10:06:36.251514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.228 qpair failed and we were unable to recover it.
00:28:27.228 [2024-12-11 10:06:36.251690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.228 [2024-12-11 10:06:36.251722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.228 qpair failed and we were unable to recover it.
00:28:27.228 [2024-12-11 10:06:36.251850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.228 [2024-12-11 10:06:36.251882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.228 qpair failed and we were unable to recover it.
00:28:27.228 [2024-12-11 10:06:36.252088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.228 [2024-12-11 10:06:36.252123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.228 qpair failed and we were unable to recover it.
00:28:27.228 [2024-12-11 10:06:36.252363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.228 [2024-12-11 10:06:36.252397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.228 qpair failed and we were unable to recover it.
00:28:27.228 [2024-12-11 10:06:36.252530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.228 [2024-12-11 10:06:36.252563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.228 qpair failed and we were unable to recover it.
00:28:27.228 [2024-12-11 10:06:36.252731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.228 [2024-12-11 10:06:36.252777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.228 qpair failed and we were unable to recover it.
00:28:27.228 [2024-12-11 10:06:36.253020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.228 [2024-12-11 10:06:36.253053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.228 qpair failed and we were unable to recover it.
00:28:27.228 [2024-12-11 10:06:36.253169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.228 [2024-12-11 10:06:36.253202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.228 qpair failed and we were unable to recover it.
00:28:27.228 [2024-12-11 10:06:36.253393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.228 [2024-12-11 10:06:36.253427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.228 qpair failed and we were unable to recover it.
00:28:27.228 [2024-12-11 10:06:36.253688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.228 [2024-12-11 10:06:36.253720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.228 qpair failed and we were unable to recover it.
00:28:27.228 [2024-12-11 10:06:36.253927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.228 [2024-12-11 10:06:36.253959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.228 qpair failed and we were unable to recover it.
00:28:27.228 [2024-12-11 10:06:36.254090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.228 [2024-12-11 10:06:36.254122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.228 qpair failed and we were unable to recover it.
00:28:27.228 [2024-12-11 10:06:36.254306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.228 [2024-12-11 10:06:36.254340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.228 qpair failed and we were unable to recover it.
00:28:27.228 [2024-12-11 10:06:36.254541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.228 [2024-12-11 10:06:36.254574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.228 qpair failed and we were unable to recover it.
00:28:27.228 [2024-12-11 10:06:36.254689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.228 [2024-12-11 10:06:36.254722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.228 qpair failed and we were unable to recover it.
00:28:27.228 [2024-12-11 10:06:36.254994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.228 [2024-12-11 10:06:36.255028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.228 qpair failed and we were unable to recover it.
00:28:27.228 [2024-12-11 10:06:36.255196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.228 [2024-12-11 10:06:36.255236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.228 qpair failed and we were unable to recover it.
00:28:27.228 [2024-12-11 10:06:36.255365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.228 [2024-12-11 10:06:36.255399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.228 qpair failed and we were unable to recover it.
00:28:27.228 [2024-12-11 10:06:36.255594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.228 [2024-12-11 10:06:36.255627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.228 qpair failed and we were unable to recover it.
00:28:27.228 [2024-12-11 10:06:36.255803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.228 [2024-12-11 10:06:36.255835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.228 qpair failed and we were unable to recover it.
00:28:27.228 [2024-12-11 10:06:36.256009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.228 [2024-12-11 10:06:36.256042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.228 qpair failed and we were unable to recover it.
00:28:27.228 [2024-12-11 10:06:36.256156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.228 [2024-12-11 10:06:36.256189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.228 qpair failed and we were unable to recover it.
00:28:27.228 [2024-12-11 10:06:36.256307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.228 [2024-12-11 10:06:36.256341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.228 qpair failed and we were unable to recover it.
00:28:27.228 [2024-12-11 10:06:36.256515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.228 [2024-12-11 10:06:36.256549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.228 qpair failed and we were unable to recover it.
00:28:27.229 [2024-12-11 10:06:36.256681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.229 [2024-12-11 10:06:36.256714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.229 qpair failed and we were unable to recover it.
00:28:27.229 [2024-12-11 10:06:36.256830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.229 [2024-12-11 10:06:36.256863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.229 qpair failed and we were unable to recover it.
00:28:27.229 [2024-12-11 10:06:36.257054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.229 [2024-12-11 10:06:36.257086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.229 qpair failed and we were unable to recover it.
00:28:27.229 [2024-12-11 10:06:36.257353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.229 [2024-12-11 10:06:36.257388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.229 qpair failed and we were unable to recover it.
00:28:27.229 [2024-12-11 10:06:36.257598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.229 [2024-12-11 10:06:36.257630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.229 qpair failed and we were unable to recover it.
00:28:27.229 [2024-12-11 10:06:36.257903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.229 [2024-12-11 10:06:36.257936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.229 qpair failed and we were unable to recover it.
00:28:27.229 [2024-12-11 10:06:36.258116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.229 [2024-12-11 10:06:36.258148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.229 qpair failed and we were unable to recover it.
00:28:27.229 [2024-12-11 10:06:36.258387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.229 [2024-12-11 10:06:36.258421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.229 qpair failed and we were unable to recover it.
00:28:27.229 [2024-12-11 10:06:36.258604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.229 [2024-12-11 10:06:36.258641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.229 qpair failed and we were unable to recover it.
00:28:27.229 [2024-12-11 10:06:36.258818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.229 [2024-12-11 10:06:36.258850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.229 qpair failed and we were unable to recover it.
00:28:27.229 [2024-12-11 10:06:36.259117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.229 [2024-12-11 10:06:36.259150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.229 qpair failed and we were unable to recover it.
00:28:27.229 [2024-12-11 10:06:36.259273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.229 [2024-12-11 10:06:36.259306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.229 qpair failed and we were unable to recover it.
00:28:27.229 [2024-12-11 10:06:36.259573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.229 [2024-12-11 10:06:36.259605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.229 qpair failed and we were unable to recover it.
00:28:27.229 [2024-12-11 10:06:36.259846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.229 [2024-12-11 10:06:36.259880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.229 qpair failed and we were unable to recover it.
00:28:27.229 [2024-12-11 10:06:36.260143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.229 [2024-12-11 10:06:36.260176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.229 qpair failed and we were unable to recover it.
00:28:27.229 [2024-12-11 10:06:36.260310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.229 [2024-12-11 10:06:36.260344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.229 qpair failed and we were unable to recover it.
00:28:27.229 [2024-12-11 10:06:36.260604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.229 [2024-12-11 10:06:36.260637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.229 qpair failed and we were unable to recover it.
00:28:27.229 [2024-12-11 10:06:36.260842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.229 [2024-12-11 10:06:36.260874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.229 qpair failed and we were unable to recover it.
00:28:27.229 [2024-12-11 10:06:36.261057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.229 [2024-12-11 10:06:36.261090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.229 qpair failed and we were unable to recover it.
00:28:27.229 [2024-12-11 10:06:36.261328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.229 [2024-12-11 10:06:36.261361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.229 qpair failed and we were unable to recover it.
00:28:27.229 [2024-12-11 10:06:36.261559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.229 [2024-12-11 10:06:36.261592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.229 qpair failed and we were unable to recover it.
00:28:27.229 [2024-12-11 10:06:36.261763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.229 [2024-12-11 10:06:36.261795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.229 qpair failed and we were unable to recover it.
00:28:27.229 [2024-12-11 10:06:36.261982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.229 [2024-12-11 10:06:36.262016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.229 qpair failed and we were unable to recover it.
00:28:27.229 [2024-12-11 10:06:36.262200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.229 [2024-12-11 10:06:36.262242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.229 qpair failed and we were unable to recover it.
00:28:27.229 [2024-12-11 10:06:36.262373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.229 [2024-12-11 10:06:36.262405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.229 qpair failed and we were unable to recover it.
00:28:27.229 [2024-12-11 10:06:36.262648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.229 [2024-12-11 10:06:36.262681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.229 qpair failed and we were unable to recover it.
00:28:27.229 [2024-12-11 10:06:36.262797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.229 [2024-12-11 10:06:36.262830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.229 qpair failed and we were unable to recover it.
00:28:27.229 [2024-12-11 10:06:36.262952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.229 [2024-12-11 10:06:36.262985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.229 qpair failed and we were unable to recover it.
00:28:27.229 [2024-12-11 10:06:36.263160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.229 [2024-12-11 10:06:36.263193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.229 qpair failed and we were unable to recover it.
00:28:27.229 [2024-12-11 10:06:36.263379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.229 [2024-12-11 10:06:36.263412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.229 qpair failed and we were unable to recover it.
00:28:27.229 [2024-12-11 10:06:36.263598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.229 [2024-12-11 10:06:36.263631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.229 qpair failed and we were unable to recover it.
00:28:27.229 [2024-12-11 10:06:36.263873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.229 [2024-12-11 10:06:36.263905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.229 qpair failed and we were unable to recover it.
00:28:27.229 [2024-12-11 10:06:36.264146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.229 [2024-12-11 10:06:36.264179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.229 qpair failed and we were unable to recover it.
00:28:27.229 [2024-12-11 10:06:36.264432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.229 [2024-12-11 10:06:36.264467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.229 qpair failed and we were unable to recover it.
00:28:27.229 [2024-12-11 10:06:36.264659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.229 [2024-12-11 10:06:36.264691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.229 qpair failed and we were unable to recover it.
00:28:27.229 [2024-12-11 10:06:36.264955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.229 [2024-12-11 10:06:36.264988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.229 qpair failed and we were unable to recover it.
00:28:27.229 [2024-12-11 10:06:36.265179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.229 [2024-12-11 10:06:36.265212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.229 qpair failed and we were unable to recover it.
00:28:27.229 [2024-12-11 10:06:36.265420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.229 [2024-12-11 10:06:36.265452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.229 qpair failed and we were unable to recover it.
00:28:27.229 [2024-12-11 10:06:36.265623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.229 [2024-12-11 10:06:36.265655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.229 qpair failed and we were unable to recover it.
00:28:27.229 [2024-12-11 10:06:36.265878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.230 [2024-12-11 10:06:36.265910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.230 qpair failed and we were unable to recover it.
00:28:27.230 [2024-12-11 10:06:36.266044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.230 [2024-12-11 10:06:36.266076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.230 qpair failed and we were unable to recover it.
00:28:27.230 [2024-12-11 10:06:36.266182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.230 [2024-12-11 10:06:36.266214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.230 qpair failed and we were unable to recover it.
00:28:27.230 [2024-12-11 10:06:36.266431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.230 [2024-12-11 10:06:36.266464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.230 qpair failed and we were unable to recover it.
00:28:27.230 [2024-12-11 10:06:36.266578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.230 [2024-12-11 10:06:36.266610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.230 qpair failed and we were unable to recover it.
00:28:27.230 [2024-12-11 10:06:36.266734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.230 [2024-12-11 10:06:36.266766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.230 qpair failed and we were unable to recover it.
00:28:27.230 [2024-12-11 10:06:36.266941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.230 [2024-12-11 10:06:36.266974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.230 qpair failed and we were unable to recover it.
00:28:27.230 [2024-12-11 10:06:36.267242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.230 [2024-12-11 10:06:36.267276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.230 qpair failed and we were unable to recover it.
00:28:27.230 [2024-12-11 10:06:36.267563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.230 [2024-12-11 10:06:36.267595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.230 qpair failed and we were unable to recover it.
00:28:27.230 [2024-12-11 10:06:36.267784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.230 [2024-12-11 10:06:36.267816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.230 qpair failed and we were unable to recover it.
00:28:27.230 [2024-12-11 10:06:36.267943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.230 [2024-12-11 10:06:36.267981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.230 qpair failed and we were unable to recover it.
00:28:27.230 [2024-12-11 10:06:36.268253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.230 [2024-12-11 10:06:36.268286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.230 qpair failed and we were unable to recover it.
00:28:27.230 [2024-12-11 10:06:36.268476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.230 [2024-12-11 10:06:36.268509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.230 qpair failed and we were unable to recover it.
00:28:27.230 [2024-12-11 10:06:36.268723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.230 [2024-12-11 10:06:36.268755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.230 qpair failed and we were unable to recover it.
00:28:27.230 [2024-12-11 10:06:36.268946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.230 [2024-12-11 10:06:36.268979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.230 qpair failed and we were unable to recover it.
00:28:27.230 [2024-12-11 10:06:36.269266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.230 [2024-12-11 10:06:36.269301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.230 qpair failed and we were unable to recover it.
00:28:27.230 [2024-12-11 10:06:36.269597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.230 [2024-12-11 10:06:36.269630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.230 qpair failed and we were unable to recover it.
00:28:27.230 [2024-12-11 10:06:36.269850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.230 [2024-12-11 10:06:36.269883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.230 qpair failed and we were unable to recover it.
00:28:27.230 [2024-12-11 10:06:36.270060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.230 [2024-12-11 10:06:36.270092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.230 qpair failed and we were unable to recover it.
00:28:27.230 [2024-12-11 10:06:36.270332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.230 [2024-12-11 10:06:36.270365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.230 qpair failed and we were unable to recover it.
00:28:27.230 [2024-12-11 10:06:36.270555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.230 [2024-12-11 10:06:36.270588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.230 qpair failed and we were unable to recover it.
00:28:27.230 [2024-12-11 10:06:36.270707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.230 [2024-12-11 10:06:36.270738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.230 qpair failed and we were unable to recover it.
00:28:27.230 [2024-12-11 10:06:36.271002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.230 [2024-12-11 10:06:36.271035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.230 qpair failed and we were unable to recover it.
00:28:27.230 [2024-12-11 10:06:36.271298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.230 [2024-12-11 10:06:36.271331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.230 qpair failed and we were unable to recover it.
00:28:27.230 [2024-12-11 10:06:36.271529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.230 [2024-12-11 10:06:36.271561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.230 qpair failed and we were unable to recover it.
00:28:27.230 [2024-12-11 10:06:36.271746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.230 [2024-12-11 10:06:36.271779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.230 qpair failed and we were unable to recover it.
00:28:27.230 [2024-12-11 10:06:36.271986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.230 [2024-12-11 10:06:36.272019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.230 qpair failed and we were unable to recover it.
00:28:27.230 [2024-12-11 10:06:36.272195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.230 [2024-12-11 10:06:36.272237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.230 qpair failed and we were unable to recover it.
00:28:27.230 [2024-12-11 10:06:36.272351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.230 [2024-12-11 10:06:36.272384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.230 qpair failed and we were unable to recover it.
00:28:27.230 [2024-12-11 10:06:36.272641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.230 [2024-12-11 10:06:36.272673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.230 qpair failed and we were unable to recover it.
00:28:27.230 [2024-12-11 10:06:36.272839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.230 [2024-12-11 10:06:36.272872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.230 qpair failed and we were unable to recover it.
00:28:27.230 [2024-12-11 10:06:36.273009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.230 [2024-12-11 10:06:36.273041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.230 qpair failed and we were unable to recover it.
00:28:27.230 [2024-12-11 10:06:36.273213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.230 [2024-12-11 10:06:36.273272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.230 qpair failed and we were unable to recover it.
00:28:27.230 [2024-12-11 10:06:36.273444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.230 [2024-12-11 10:06:36.273475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.230 qpair failed and we were unable to recover it.
00:28:27.230 [2024-12-11 10:06:36.273717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.230 [2024-12-11 10:06:36.273749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.230 qpair failed and we were unable to recover it.
00:28:27.230 [2024-12-11 10:06:36.273870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.230 [2024-12-11 10:06:36.273904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.230 qpair failed and we were unable to recover it.
00:28:27.230 [2024-12-11 10:06:36.274095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.230 [2024-12-11 10:06:36.274127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.230 qpair failed and we were unable to recover it.
00:28:27.230 [2024-12-11 10:06:36.274305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.230 [2024-12-11 10:06:36.274344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.230 qpair failed and we were unable to recover it.
00:28:27.230 [2024-12-11 10:06:36.274517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.230 [2024-12-11 10:06:36.274549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.230 qpair failed and we were unable to recover it.
00:28:27.230 [2024-12-11 10:06:36.274784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.230 [2024-12-11 10:06:36.274817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.230 qpair failed and we were unable to recover it.
00:28:27.230 [2024-12-11 10:06:36.274948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.231 [2024-12-11 10:06:36.274981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.231 qpair failed and we were unable to recover it.
00:28:27.231 [2024-12-11 10:06:36.275253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.231 [2024-12-11 10:06:36.275287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.231 qpair failed and we were unable to recover it.
00:28:27.231 [2024-12-11 10:06:36.275546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.231 [2024-12-11 10:06:36.275579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.231 qpair failed and we were unable to recover it.
00:28:27.231 [2024-12-11 10:06:36.275705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.231 [2024-12-11 10:06:36.275738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.231 qpair failed and we were unable to recover it.
00:28:27.231 [2024-12-11 10:06:36.275914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.231 [2024-12-11 10:06:36.275947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.231 qpair failed and we were unable to recover it.
00:28:27.231 [2024-12-11 10:06:36.276130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.231 [2024-12-11 10:06:36.276163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.231 qpair failed and we were unable to recover it.
00:28:27.231 [2024-12-11 10:06:36.276277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.231 [2024-12-11 10:06:36.276310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.231 qpair failed and we were unable to recover it.
00:28:27.231 [2024-12-11 10:06:36.276548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.231 [2024-12-11 10:06:36.276580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.231 qpair failed and we were unable to recover it.
00:28:27.231 [2024-12-11 10:06:36.276819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.231 [2024-12-11 10:06:36.276852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.231 qpair failed and we were unable to recover it.
00:28:27.231 [2024-12-11 10:06:36.276984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.231 [2024-12-11 10:06:36.277017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.231 qpair failed and we were unable to recover it.
00:28:27.231 [2024-12-11 10:06:36.277130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.231 [2024-12-11 10:06:36.277162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.231 qpair failed and we were unable to recover it.
00:28:27.231 [2024-12-11 10:06:36.277353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.231 [2024-12-11 10:06:36.277385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.231 qpair failed and we were unable to recover it.
00:28:27.231 [2024-12-11 10:06:36.277636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.231 [2024-12-11 10:06:36.277668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.231 qpair failed and we were unable to recover it.
00:28:27.231 [2024-12-11 10:06:36.277839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.231 [2024-12-11 10:06:36.277871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.231 qpair failed and we were unable to recover it. 00:28:27.231 [2024-12-11 10:06:36.278004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.231 [2024-12-11 10:06:36.278037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.231 qpair failed and we were unable to recover it. 00:28:27.231 [2024-12-11 10:06:36.278165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.231 [2024-12-11 10:06:36.278199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.231 qpair failed and we were unable to recover it. 00:28:27.231 [2024-12-11 10:06:36.278396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.231 [2024-12-11 10:06:36.278429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.231 qpair failed and we were unable to recover it. 00:28:27.231 [2024-12-11 10:06:36.278620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.231 [2024-12-11 10:06:36.278653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.231 qpair failed and we were unable to recover it. 00:28:27.231 [2024-12-11 10:06:36.278825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.231 [2024-12-11 10:06:36.278858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.231 qpair failed and we were unable to recover it. 00:28:27.231 [2024-12-11 10:06:36.279032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.231 [2024-12-11 10:06:36.279065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.231 qpair failed and we were unable to recover it. 00:28:27.231 [2024-12-11 10:06:36.279268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.231 [2024-12-11 10:06:36.279302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.231 qpair failed and we were unable to recover it. 00:28:27.231 [2024-12-11 10:06:36.279474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.231 [2024-12-11 10:06:36.279507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.231 qpair failed and we were unable to recover it. 00:28:27.231 [2024-12-11 10:06:36.279617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.231 [2024-12-11 10:06:36.279651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.231 qpair failed and we were unable to recover it. 
00:28:27.231 [2024-12-11 10:06:36.279842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.231 [2024-12-11 10:06:36.279875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.231 qpair failed and we were unable to recover it. 00:28:27.231 [2024-12-11 10:06:36.280060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.231 [2024-12-11 10:06:36.280093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.231 qpair failed and we were unable to recover it. 00:28:27.231 [2024-12-11 10:06:36.280310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.231 [2024-12-11 10:06:36.280345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.231 qpair failed and we were unable to recover it. 00:28:27.231 [2024-12-11 10:06:36.280587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.231 [2024-12-11 10:06:36.280620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.231 qpair failed and we were unable to recover it. 00:28:27.231 [2024-12-11 10:06:36.280804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.231 [2024-12-11 10:06:36.280836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.231 qpair failed and we were unable to recover it. 00:28:27.231 [2024-12-11 10:06:36.281011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.231 [2024-12-11 10:06:36.281044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.231 qpair failed and we were unable to recover it. 00:28:27.231 [2024-12-11 10:06:36.281298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.231 [2024-12-11 10:06:36.281332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.231 qpair failed and we were unable to recover it. 00:28:27.231 [2024-12-11 10:06:36.281515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.231 [2024-12-11 10:06:36.281548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.231 qpair failed and we were unable to recover it. 00:28:27.231 [2024-12-11 10:06:36.281662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.231 [2024-12-11 10:06:36.281695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.231 qpair failed and we were unable to recover it. 00:28:27.231 [2024-12-11 10:06:36.281874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.231 [2024-12-11 10:06:36.281906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.231 qpair failed and we were unable to recover it. 
00:28:27.231 [2024-12-11 10:06:36.282113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.231 [2024-12-11 10:06:36.282147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.231 qpair failed and we were unable to recover it. 00:28:27.231 [2024-12-11 10:06:36.282333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.231 [2024-12-11 10:06:36.282367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.231 qpair failed and we were unable to recover it. 00:28:27.232 [2024-12-11 10:06:36.282622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.232 [2024-12-11 10:06:36.282654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.232 qpair failed and we were unable to recover it. 00:28:27.232 [2024-12-11 10:06:36.282829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.232 [2024-12-11 10:06:36.282861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.232 qpair failed and we were unable to recover it. 00:28:27.232 [2024-12-11 10:06:36.282998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.232 [2024-12-11 10:06:36.283032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.232 qpair failed and we were unable to recover it. 00:28:27.232 [2024-12-11 10:06:36.283239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.232 [2024-12-11 10:06:36.283279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.232 qpair failed and we were unable to recover it. 00:28:27.232 [2024-12-11 10:06:36.283406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.232 [2024-12-11 10:06:36.283439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.232 qpair failed and we were unable to recover it. 00:28:27.232 [2024-12-11 10:06:36.283541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.232 [2024-12-11 10:06:36.283573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.232 qpair failed and we were unable to recover it. 00:28:27.232 [2024-12-11 10:06:36.283783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.232 [2024-12-11 10:06:36.283816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.232 qpair failed and we were unable to recover it. 00:28:27.232 [2024-12-11 10:06:36.284072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.232 [2024-12-11 10:06:36.284105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.232 qpair failed and we were unable to recover it. 
00:28:27.232 [2024-12-11 10:06:36.284355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.232 [2024-12-11 10:06:36.284388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.232 qpair failed and we were unable to recover it. 00:28:27.232 [2024-12-11 10:06:36.284509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.232 [2024-12-11 10:06:36.284543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.232 qpair failed and we were unable to recover it. 00:28:27.232 [2024-12-11 10:06:36.284808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.232 [2024-12-11 10:06:36.284841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.232 qpair failed and we were unable to recover it. 00:28:27.232 [2024-12-11 10:06:36.285027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.232 [2024-12-11 10:06:36.285060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.232 qpair failed and we were unable to recover it. 00:28:27.232 [2024-12-11 10:06:36.285240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.232 [2024-12-11 10:06:36.285275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.232 qpair failed and we were unable to recover it. 00:28:27.232 [2024-12-11 10:06:36.285398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.232 [2024-12-11 10:06:36.285430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.232 qpair failed and we were unable to recover it. 00:28:27.232 [2024-12-11 10:06:36.285610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.232 [2024-12-11 10:06:36.285643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.232 qpair failed and we were unable to recover it. 00:28:27.232 [2024-12-11 10:06:36.285774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.232 [2024-12-11 10:06:36.285807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.232 qpair failed and we were unable to recover it. 00:28:27.232 [2024-12-11 10:06:36.285978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.232 [2024-12-11 10:06:36.286011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.232 qpair failed and we were unable to recover it. 00:28:27.232 [2024-12-11 10:06:36.286190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.232 [2024-12-11 10:06:36.286232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.232 qpair failed and we were unable to recover it. 
00:28:27.232 [2024-12-11 10:06:36.286495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.232 [2024-12-11 10:06:36.286527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.232 qpair failed and we were unable to recover it. 00:28:27.232 [2024-12-11 10:06:36.286775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.232 [2024-12-11 10:06:36.286807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.232 qpair failed and we were unable to recover it. 00:28:27.232 [2024-12-11 10:06:36.287013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.232 [2024-12-11 10:06:36.287047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.232 qpair failed and we were unable to recover it. 00:28:27.232 [2024-12-11 10:06:36.287172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.232 [2024-12-11 10:06:36.287204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.232 qpair failed and we were unable to recover it. 00:28:27.232 [2024-12-11 10:06:36.287409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.232 [2024-12-11 10:06:36.287444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.232 qpair failed and we were unable to recover it. 00:28:27.232 [2024-12-11 10:06:36.287620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.232 [2024-12-11 10:06:36.287654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.232 qpair failed and we were unable to recover it. 00:28:27.232 [2024-12-11 10:06:36.287864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.232 [2024-12-11 10:06:36.287896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.232 qpair failed and we were unable to recover it. 00:28:27.232 [2024-12-11 10:06:36.288023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.232 [2024-12-11 10:06:36.288056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.232 qpair failed and we were unable to recover it. 00:28:27.232 [2024-12-11 10:06:36.288253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.232 [2024-12-11 10:06:36.288287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.232 qpair failed and we were unable to recover it. 00:28:27.232 [2024-12-11 10:06:36.288407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.232 [2024-12-11 10:06:36.288440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.232 qpair failed and we were unable to recover it. 
00:28:27.232 [2024-12-11 10:06:36.288643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.232 [2024-12-11 10:06:36.288676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.232 qpair failed and we were unable to recover it. 00:28:27.232 [2024-12-11 10:06:36.288868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.232 [2024-12-11 10:06:36.288903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.232 qpair failed and we were unable to recover it. 00:28:27.232 [2024-12-11 10:06:36.289015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.232 [2024-12-11 10:06:36.289048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.232 qpair failed and we were unable to recover it. 00:28:27.232 [2024-12-11 10:06:36.289257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.232 [2024-12-11 10:06:36.289292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.232 qpair failed and we were unable to recover it. 00:28:27.232 [2024-12-11 10:06:36.289462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.232 [2024-12-11 10:06:36.289494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.232 qpair failed and we were unable to recover it. 00:28:27.232 [2024-12-11 10:06:36.289686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.232 [2024-12-11 10:06:36.289718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.232 qpair failed and we were unable to recover it. 00:28:27.232 [2024-12-11 10:06:36.289904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.232 [2024-12-11 10:06:36.289936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.232 qpair failed and we were unable to recover it. 00:28:27.232 [2024-12-11 10:06:36.290145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.232 [2024-12-11 10:06:36.290178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.232 qpair failed and we were unable to recover it. 00:28:27.232 [2024-12-11 10:06:36.290359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.232 [2024-12-11 10:06:36.290393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.232 qpair failed and we were unable to recover it. 00:28:27.232 [2024-12-11 10:06:36.290563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.232 [2024-12-11 10:06:36.290595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.232 qpair failed and we were unable to recover it. 
00:28:27.232 [2024-12-11 10:06:36.290706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.232 [2024-12-11 10:06:36.290740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.232 qpair failed and we were unable to recover it. 00:28:27.232 [2024-12-11 10:06:36.290925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.233 [2024-12-11 10:06:36.290958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.233 qpair failed and we were unable to recover it. 00:28:27.233 [2024-12-11 10:06:36.291199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.233 [2024-12-11 10:06:36.291239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.233 qpair failed and we were unable to recover it. 00:28:27.233 [2024-12-11 10:06:36.291420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.233 [2024-12-11 10:06:36.291453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.233 qpair failed and we were unable to recover it. 00:28:27.233 [2024-12-11 10:06:36.291582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.233 [2024-12-11 10:06:36.291616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.233 qpair failed and we were unable to recover it. 00:28:27.233 [2024-12-11 10:06:36.291797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.233 [2024-12-11 10:06:36.291828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.233 qpair failed and we were unable to recover it. 00:28:27.233 [2024-12-11 10:06:36.292082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.233 [2024-12-11 10:06:36.292115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.233 qpair failed and we were unable to recover it. 00:28:27.233 [2024-12-11 10:06:36.292231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.233 [2024-12-11 10:06:36.292266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.233 qpair failed and we were unable to recover it. 00:28:27.233 [2024-12-11 10:06:36.292454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.233 [2024-12-11 10:06:36.292487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.233 qpair failed and we were unable to recover it. 00:28:27.233 [2024-12-11 10:06:36.292616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.233 [2024-12-11 10:06:36.292649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.233 qpair failed and we were unable to recover it. 
00:28:27.233 [2024-12-11 10:06:36.292767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.233 [2024-12-11 10:06:36.292801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.233 qpair failed and we were unable to recover it. 00:28:27.233 [2024-12-11 10:06:36.293037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.233 [2024-12-11 10:06:36.293069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.233 qpair failed and we were unable to recover it. 00:28:27.233 [2024-12-11 10:06:36.293322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.233 [2024-12-11 10:06:36.293356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.233 qpair failed and we were unable to recover it. 00:28:27.233 [2024-12-11 10:06:36.293613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.233 [2024-12-11 10:06:36.293646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.233 qpair failed and we were unable to recover it. 00:28:27.233 [2024-12-11 10:06:36.293753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.233 [2024-12-11 10:06:36.293786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.233 qpair failed and we were unable to recover it. 00:28:27.233 [2024-12-11 10:06:36.293988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.233 [2024-12-11 10:06:36.294020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.233 qpair failed and we were unable to recover it. 00:28:27.233 [2024-12-11 10:06:36.294151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.233 [2024-12-11 10:06:36.294184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.233 qpair failed and we were unable to recover it. 00:28:27.233 [2024-12-11 10:06:36.294437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.233 [2024-12-11 10:06:36.294471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.233 qpair failed and we were unable to recover it. 00:28:27.233 [2024-12-11 10:06:36.294599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.233 [2024-12-11 10:06:36.294631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.233 qpair failed and we were unable to recover it. 00:28:27.233 [2024-12-11 10:06:36.294822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.233 [2024-12-11 10:06:36.294854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.233 qpair failed and we were unable to recover it. 
00:28:27.233 [2024-12-11 10:06:36.295040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.233 [2024-12-11 10:06:36.295074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.233 qpair failed and we were unable to recover it. 00:28:27.233 [2024-12-11 10:06:36.295246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.233 [2024-12-11 10:06:36.295290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.233 qpair failed and we were unable to recover it. 00:28:27.233 [2024-12-11 10:06:36.295429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.233 [2024-12-11 10:06:36.295463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.233 qpair failed and we were unable to recover it. 00:28:27.233 [2024-12-11 10:06:36.295720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.233 [2024-12-11 10:06:36.295753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.233 qpair failed and we were unable to recover it. 00:28:27.233 [2024-12-11 10:06:36.295924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.233 [2024-12-11 10:06:36.295956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.233 qpair failed and we were unable to recover it. 00:28:27.233 [2024-12-11 10:06:36.296228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.233 [2024-12-11 10:06:36.296261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.233 qpair failed and we were unable to recover it. 00:28:27.233 [2024-12-11 10:06:36.296393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.233 [2024-12-11 10:06:36.296426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.233 qpair failed and we were unable to recover it. 00:28:27.233 [2024-12-11 10:06:36.296665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.233 [2024-12-11 10:06:36.296698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.233 qpair failed and we were unable to recover it. 00:28:27.233 [2024-12-11 10:06:36.296871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.233 [2024-12-11 10:06:36.296904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.233 qpair failed and we were unable to recover it. 00:28:27.233 [2024-12-11 10:06:36.297108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.233 [2024-12-11 10:06:36.297141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.233 qpair failed and we were unable to recover it. 
00:28:27.233 [2024-12-11 10:06:36.297273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.233 [2024-12-11 10:06:36.297309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.233 qpair failed and we were unable to recover it. 00:28:27.233 [2024-12-11 10:06:36.297479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.233 [2024-12-11 10:06:36.297511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.233 qpair failed and we were unable to recover it. 00:28:27.233 [2024-12-11 10:06:36.297649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.233 [2024-12-11 10:06:36.297682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.233 qpair failed and we were unable to recover it. 00:28:27.233 [2024-12-11 10:06:36.297870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.233 [2024-12-11 10:06:36.297914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.233 qpair failed and we were unable to recover it. 00:28:27.233 [2024-12-11 10:06:36.298035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.233 [2024-12-11 10:06:36.298068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.233 qpair failed and we were unable to recover it. 00:28:27.233 [2024-12-11 10:06:36.298197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.233 [2024-12-11 10:06:36.298305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.233 qpair failed and we were unable to recover it. 00:28:27.233 [2024-12-11 10:06:36.298511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.233 [2024-12-11 10:06:36.298543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.233 qpair failed and we were unable to recover it. 00:28:27.233 [2024-12-11 10:06:36.298675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.233 [2024-12-11 10:06:36.298708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.233 qpair failed and we were unable to recover it. 00:28:27.233 [2024-12-11 10:06:36.298951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.233 [2024-12-11 10:06:36.298985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.233 qpair failed and we were unable to recover it. 00:28:27.233 [2024-12-11 10:06:36.299110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.233 [2024-12-11 10:06:36.299144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.233 qpair failed and we were unable to recover it. 
00:28:27.233 [2024-12-11 10:06:36.299262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.233 [2024-12-11 10:06:36.299295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.233 qpair failed and we were unable to recover it. 00:28:27.233 [2024-12-11 10:06:36.299410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.234 [2024-12-11 10:06:36.299444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.234 qpair failed and we were unable to recover it. 00:28:27.234 [2024-12-11 10:06:36.299646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.234 [2024-12-11 10:06:36.299678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.234 qpair failed and we were unable to recover it. 00:28:27.234 [2024-12-11 10:06:36.299788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.234 [2024-12-11 10:06:36.299820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.234 qpair failed and we were unable to recover it. 00:28:27.234 [2024-12-11 10:06:36.299951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.234 [2024-12-11 10:06:36.299984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.234 qpair failed and we were unable to recover it. 00:28:27.234 [2024-12-11 10:06:36.300171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.234 [2024-12-11 10:06:36.300203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.234 qpair failed and we were unable to recover it. 00:28:27.234 [2024-12-11 10:06:36.300383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.234 [2024-12-11 10:06:36.300418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.234 qpair failed and we were unable to recover it. 00:28:27.234 [2024-12-11 10:06:36.300540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.234 [2024-12-11 10:06:36.300573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.234 qpair failed and we were unable to recover it. 00:28:27.234 [2024-12-11 10:06:36.300766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.234 [2024-12-11 10:06:36.300798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.234 qpair failed and we were unable to recover it. 00:28:27.234 [2024-12-11 10:06:36.300985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.234 [2024-12-11 10:06:36.301019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.234 qpair failed and we were unable to recover it. 
00:28:27.234 [2024-12-11 10:06:36.301145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.234 [2024-12-11 10:06:36.301179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.234 qpair failed and we were unable to recover it. 00:28:27.234 [2024-12-11 10:06:36.301329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.234 [2024-12-11 10:06:36.301364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.234 qpair failed and we were unable to recover it. 00:28:27.234 [2024-12-11 10:06:36.301549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.234 [2024-12-11 10:06:36.301581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.234 qpair failed and we were unable to recover it. 00:28:27.234 [2024-12-11 10:06:36.301845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.234 [2024-12-11 10:06:36.301878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.234 qpair failed and we were unable to recover it. 00:28:27.234 [2024-12-11 10:06:36.302086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.234 [2024-12-11 10:06:36.302119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.234 qpair failed and we were unable to recover it. 00:28:27.234 [2024-12-11 10:06:36.302236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.234 [2024-12-11 10:06:36.302270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.234 qpair failed and we were unable to recover it. 00:28:27.234 [2024-12-11 10:06:36.302400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.234 [2024-12-11 10:06:36.302433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.234 qpair failed and we were unable to recover it. 00:28:27.234 [2024-12-11 10:06:36.302610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.234 [2024-12-11 10:06:36.302642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.234 qpair failed and we were unable to recover it. 00:28:27.234 [2024-12-11 10:06:36.302842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.234 [2024-12-11 10:06:36.302875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.234 qpair failed and we were unable to recover it. 00:28:27.234 [2024-12-11 10:06:36.303138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.234 [2024-12-11 10:06:36.303172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.234 qpair failed and we were unable to recover it. 
00:28:27.234 [2024-12-11 10:06:36.303324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.234 [2024-12-11 10:06:36.303359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.234 qpair failed and we were unable to recover it. 00:28:27.234 [2024-12-11 10:06:36.303477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.234 [2024-12-11 10:06:36.303510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.234 qpair failed and we were unable to recover it. 00:28:27.234 [2024-12-11 10:06:36.303647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.234 [2024-12-11 10:06:36.303679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.234 qpair failed and we were unable to recover it. 00:28:27.234 [2024-12-11 10:06:36.303916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.234 [2024-12-11 10:06:36.303949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.234 qpair failed and we were unable to recover it. 00:28:27.234 [2024-12-11 10:06:36.304190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.234 [2024-12-11 10:06:36.304233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.234 qpair failed and we were unable to recover it. 00:28:27.234 [2024-12-11 10:06:36.304474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.234 [2024-12-11 10:06:36.304506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.234 qpair failed and we were unable to recover it. 00:28:27.234 [2024-12-11 10:06:36.304745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.234 [2024-12-11 10:06:36.304777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.234 qpair failed and we were unable to recover it. 00:28:27.234 [2024-12-11 10:06:36.304982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.234 [2024-12-11 10:06:36.305014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.234 qpair failed and we were unable to recover it. 00:28:27.234 [2024-12-11 10:06:36.305132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.234 [2024-12-11 10:06:36.305165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.234 qpair failed and we were unable to recover it. 00:28:27.234 [2024-12-11 10:06:36.305436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.234 [2024-12-11 10:06:36.305470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.234 qpair failed and we were unable to recover it. 
00:28:27.234 [2024-12-11 10:06:36.305715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.234 [2024-12-11 10:06:36.305748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.234 qpair failed and we were unable to recover it. 00:28:27.234 [2024-12-11 10:06:36.305927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.234 [2024-12-11 10:06:36.305959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.234 qpair failed and we were unable to recover it. 00:28:27.234 [2024-12-11 10:06:36.306148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.234 [2024-12-11 10:06:36.306181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.234 qpair failed and we were unable to recover it. 00:28:27.234 [2024-12-11 10:06:36.306381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.234 [2024-12-11 10:06:36.306415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.234 qpair failed and we were unable to recover it. 00:28:27.234 [2024-12-11 10:06:36.306588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.234 [2024-12-11 10:06:36.306625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.234 qpair failed and we were unable to recover it. 00:28:27.234 [2024-12-11 10:06:36.306742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.234 [2024-12-11 10:06:36.306776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.234 qpair failed and we were unable to recover it. 00:28:27.234 [2024-12-11 10:06:36.306907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.234 [2024-12-11 10:06:36.306941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.234 qpair failed and we were unable to recover it. 00:28:27.234 [2024-12-11 10:06:36.307205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.234 [2024-12-11 10:06:36.307258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.234 qpair failed and we were unable to recover it. 00:28:27.234 [2024-12-11 10:06:36.307388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.234 [2024-12-11 10:06:36.307420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.234 qpair failed and we were unable to recover it. 00:28:27.234 [2024-12-11 10:06:36.307619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.234 [2024-12-11 10:06:36.307652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.234 qpair failed and we were unable to recover it. 
00:28:27.234 [2024-12-11 10:06:36.307888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.234 [2024-12-11 10:06:36.307921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.234 qpair failed and we were unable to recover it. 00:28:27.235 [2024-12-11 10:06:36.308049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.235 [2024-12-11 10:06:36.308082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.235 qpair failed and we were unable to recover it. 00:28:27.235 [2024-12-11 10:06:36.308206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.235 [2024-12-11 10:06:36.308250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.235 qpair failed and we were unable to recover it. 00:28:27.235 [2024-12-11 10:06:36.308492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.235 [2024-12-11 10:06:36.308525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.235 qpair failed and we were unable to recover it. 00:28:27.235 [2024-12-11 10:06:36.308715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.235 [2024-12-11 10:06:36.308747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.235 qpair failed and we were unable to recover it. 00:28:27.235 [2024-12-11 10:06:36.308986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.235 [2024-12-11 10:06:36.309019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.235 qpair failed and we were unable to recover it. 00:28:27.235 [2024-12-11 10:06:36.309268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.235 [2024-12-11 10:06:36.309302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.235 qpair failed and we were unable to recover it. 00:28:27.235 [2024-12-11 10:06:36.309420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.235 [2024-12-11 10:06:36.309453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.235 qpair failed and we were unable to recover it. 00:28:27.235 [2024-12-11 10:06:36.309645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.235 [2024-12-11 10:06:36.309678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.235 qpair failed and we were unable to recover it. 00:28:27.235 [2024-12-11 10:06:36.309792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.235 [2024-12-11 10:06:36.309825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.235 qpair failed and we were unable to recover it. 
00:28:27.235 [2024-12-11 10:06:36.310011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.235 [2024-12-11 10:06:36.310043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.235 qpair failed and we were unable to recover it.
00:28:27.235 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats 160 more times for tqpair=0xfe4500, timestamps 10:06:36.310162 through 10:06:36.344530 ...]
00:28:27.239 [2024-12-11 10:06:36.344756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.239 [2024-12-11 10:06:36.344841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.239 qpair failed and we were unable to recover it.
00:28:27.239 [... the same triplet repeats 29 more times for tqpair=0x7fb638000b90, timestamps 10:06:36.344980 through 10:06:36.351184 ...]
00:28:27.240 [2024-12-11 10:06:36.351387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.240 [2024-12-11 10:06:36.351423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.240 qpair failed and we were unable to recover it.
00:28:27.240 [... the same triplet repeats 18 more times for tqpair=0xfe4500, timestamps 10:06:36.351610 through 10:06:36.355156 ...]
00:28:27.240 [2024-12-11 10:06:36.355350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.240 [2024-12-11 10:06:36.355384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.240 qpair failed and we were unable to recover it. 00:28:27.240 [2024-12-11 10:06:36.355520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.240 [2024-12-11 10:06:36.355554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.240 qpair failed and we were unable to recover it. 00:28:27.240 [2024-12-11 10:06:36.355752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.240 [2024-12-11 10:06:36.355785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.240 qpair failed and we were unable to recover it. 00:28:27.240 [2024-12-11 10:06:36.355905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.240 [2024-12-11 10:06:36.355938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.240 qpair failed and we were unable to recover it. 00:28:27.240 [2024-12-11 10:06:36.356126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.240 [2024-12-11 10:06:36.356159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.240 qpair failed and we were unable to recover it. 00:28:27.240 [2024-12-11 10:06:36.356449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.240 [2024-12-11 10:06:36.356483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.240 qpair failed and we were unable to recover it. 00:28:27.240 [2024-12-11 10:06:36.356670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.240 [2024-12-11 10:06:36.356703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.240 qpair failed and we were unable to recover it. 00:28:27.240 [2024-12-11 10:06:36.356907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.240 [2024-12-11 10:06:36.356939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.240 qpair failed and we were unable to recover it. 00:28:27.240 [2024-12-11 10:06:36.357213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.240 [2024-12-11 10:06:36.357258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.240 qpair failed and we were unable to recover it. 00:28:27.240 [2024-12-11 10:06:36.357431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.241 [2024-12-11 10:06:36.357463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.241 qpair failed and we were unable to recover it. 
00:28:27.241 [2024-12-11 10:06:36.357647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.241 [2024-12-11 10:06:36.357680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.241 qpair failed and we were unable to recover it. 00:28:27.241 [2024-12-11 10:06:36.357902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.241 [2024-12-11 10:06:36.357936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.241 qpair failed and we were unable to recover it. 00:28:27.241 [2024-12-11 10:06:36.358119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.241 [2024-12-11 10:06:36.358152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.241 qpair failed and we were unable to recover it. 00:28:27.241 [2024-12-11 10:06:36.358264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.241 [2024-12-11 10:06:36.358296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.241 qpair failed and we were unable to recover it. 00:28:27.241 [2024-12-11 10:06:36.358536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.241 [2024-12-11 10:06:36.358568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.241 qpair failed and we were unable to recover it. 00:28:27.241 [2024-12-11 10:06:36.358758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.241 [2024-12-11 10:06:36.358791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.241 qpair failed and we were unable to recover it. 00:28:27.241 [2024-12-11 10:06:36.359030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.241 [2024-12-11 10:06:36.359063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.241 qpair failed and we were unable to recover it. 00:28:27.241 [2024-12-11 10:06:36.359270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.241 [2024-12-11 10:06:36.359304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.241 qpair failed and we were unable to recover it. 00:28:27.241 [2024-12-11 10:06:36.359433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.241 [2024-12-11 10:06:36.359465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.241 qpair failed and we were unable to recover it. 00:28:27.241 [2024-12-11 10:06:36.359730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.241 [2024-12-11 10:06:36.359764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.241 qpair failed and we were unable to recover it. 
00:28:27.241 [2024-12-11 10:06:36.359934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.241 [2024-12-11 10:06:36.359965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.241 qpair failed and we were unable to recover it. 00:28:27.241 [2024-12-11 10:06:36.360074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.241 [2024-12-11 10:06:36.360106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.241 qpair failed and we were unable to recover it. 00:28:27.241 [2024-12-11 10:06:36.360213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.241 [2024-12-11 10:06:36.360252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.241 qpair failed and we were unable to recover it. 00:28:27.241 [2024-12-11 10:06:36.360506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.241 [2024-12-11 10:06:36.360539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.241 qpair failed and we were unable to recover it. 00:28:27.241 [2024-12-11 10:06:36.360749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.241 [2024-12-11 10:06:36.360782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.241 qpair failed and we were unable to recover it. 00:28:27.241 [2024-12-11 10:06:36.360889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.241 [2024-12-11 10:06:36.360922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.241 qpair failed and we were unable to recover it. 00:28:27.241 [2024-12-11 10:06:36.361099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.241 [2024-12-11 10:06:36.361132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.241 qpair failed and we were unable to recover it. 00:28:27.241 [2024-12-11 10:06:36.361397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.241 [2024-12-11 10:06:36.361430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.241 qpair failed and we were unable to recover it. 00:28:27.241 [2024-12-11 10:06:36.361535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.241 [2024-12-11 10:06:36.361568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.241 qpair failed and we were unable to recover it. 00:28:27.241 [2024-12-11 10:06:36.361674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.241 [2024-12-11 10:06:36.361707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.241 qpair failed and we were unable to recover it. 
00:28:27.241 [2024-12-11 10:06:36.361962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.241 [2024-12-11 10:06:36.361994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.241 qpair failed and we were unable to recover it. 00:28:27.241 [2024-12-11 10:06:36.362173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.241 [2024-12-11 10:06:36.362205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.241 qpair failed and we were unable to recover it. 00:28:27.241 [2024-12-11 10:06:36.362346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.241 [2024-12-11 10:06:36.362381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.241 qpair failed and we were unable to recover it. 00:28:27.241 [2024-12-11 10:06:36.362583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.241 [2024-12-11 10:06:36.362615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.241 qpair failed and we were unable to recover it. 00:28:27.241 [2024-12-11 10:06:36.362717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.241 [2024-12-11 10:06:36.362750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.241 qpair failed and we were unable to recover it. 00:28:27.241 [2024-12-11 10:06:36.362932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.241 [2024-12-11 10:06:36.362965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.241 qpair failed and we were unable to recover it. 00:28:27.241 [2024-12-11 10:06:36.363137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.241 [2024-12-11 10:06:36.363169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.241 qpair failed and we were unable to recover it. 00:28:27.241 [2024-12-11 10:06:36.363354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.241 [2024-12-11 10:06:36.363388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.241 qpair failed and we were unable to recover it. 00:28:27.241 [2024-12-11 10:06:36.363593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.241 [2024-12-11 10:06:36.363626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.241 qpair failed and we were unable to recover it. 00:28:27.241 [2024-12-11 10:06:36.363840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.241 [2024-12-11 10:06:36.363873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.241 qpair failed and we were unable to recover it. 
00:28:27.241 [2024-12-11 10:06:36.364066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.241 [2024-12-11 10:06:36.364099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.241 qpair failed and we were unable to recover it. 00:28:27.241 [2024-12-11 10:06:36.364231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.241 [2024-12-11 10:06:36.364266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.241 qpair failed and we were unable to recover it. 00:28:27.241 [2024-12-11 10:06:36.364454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.241 [2024-12-11 10:06:36.364488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.241 qpair failed and we were unable to recover it. 00:28:27.241 [2024-12-11 10:06:36.364667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.241 [2024-12-11 10:06:36.364701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.241 qpair failed and we were unable to recover it. 00:28:27.241 [2024-12-11 10:06:36.364821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.241 [2024-12-11 10:06:36.364853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.241 qpair failed and we were unable to recover it. 00:28:27.241 [2024-12-11 10:06:36.365059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.241 [2024-12-11 10:06:36.365092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.241 qpair failed and we were unable to recover it. 00:28:27.241 [2024-12-11 10:06:36.365232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.241 [2024-12-11 10:06:36.365267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.241 qpair failed and we were unable to recover it. 00:28:27.241 [2024-12-11 10:06:36.365461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.241 [2024-12-11 10:06:36.365494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.241 qpair failed and we were unable to recover it. 00:28:27.241 [2024-12-11 10:06:36.365622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.241 [2024-12-11 10:06:36.365655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.241 qpair failed and we were unable to recover it. 00:28:27.241 [2024-12-11 10:06:36.365905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.242 [2024-12-11 10:06:36.365937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.242 qpair failed and we were unable to recover it. 
00:28:27.242 [2024-12-11 10:06:36.366068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.242 [2024-12-11 10:06:36.366100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.242 qpair failed and we were unable to recover it. 00:28:27.242 [2024-12-11 10:06:36.366305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.242 [2024-12-11 10:06:36.366339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.242 qpair failed and we were unable to recover it. 00:28:27.242 [2024-12-11 10:06:36.366518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.242 [2024-12-11 10:06:36.366551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.242 qpair failed and we were unable to recover it. 00:28:27.242 [2024-12-11 10:06:36.366725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.242 [2024-12-11 10:06:36.366757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.242 qpair failed and we were unable to recover it. 00:28:27.242 [2024-12-11 10:06:36.366951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.242 [2024-12-11 10:06:36.366983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.242 qpair failed and we were unable to recover it. 00:28:27.242 [2024-12-11 10:06:36.367162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.242 [2024-12-11 10:06:36.367196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.242 qpair failed and we were unable to recover it. 00:28:27.242 [2024-12-11 10:06:36.367325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.242 [2024-12-11 10:06:36.367357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.242 qpair failed and we were unable to recover it. 00:28:27.242 [2024-12-11 10:06:36.367546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.242 [2024-12-11 10:06:36.367579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.242 qpair failed and we were unable to recover it. 00:28:27.242 [2024-12-11 10:06:36.367806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.242 [2024-12-11 10:06:36.367837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.242 qpair failed and we were unable to recover it. 00:28:27.242 [2024-12-11 10:06:36.368104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.242 [2024-12-11 10:06:36.368142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.242 qpair failed and we were unable to recover it. 
00:28:27.242 [2024-12-11 10:06:36.368335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.242 [2024-12-11 10:06:36.368368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.242 qpair failed and we were unable to recover it. 00:28:27.242 [2024-12-11 10:06:36.368634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.242 [2024-12-11 10:06:36.368666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.242 qpair failed and we were unable to recover it. 00:28:27.242 [2024-12-11 10:06:36.368852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.242 [2024-12-11 10:06:36.368884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.242 qpair failed and we were unable to recover it. 00:28:27.242 [2024-12-11 10:06:36.369053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.242 [2024-12-11 10:06:36.369086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.242 qpair failed and we were unable to recover it. 00:28:27.242 [2024-12-11 10:06:36.369358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.242 [2024-12-11 10:06:36.369391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.242 qpair failed and we were unable to recover it. 00:28:27.242 [2024-12-11 10:06:36.369580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.242 [2024-12-11 10:06:36.369611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.242 qpair failed and we were unable to recover it. 00:28:27.242 [2024-12-11 10:06:36.369787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.242 [2024-12-11 10:06:36.369820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.242 qpair failed and we were unable to recover it. 00:28:27.242 [2024-12-11 10:06:36.369992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.242 [2024-12-11 10:06:36.370025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.242 qpair failed and we were unable to recover it. 00:28:27.242 [2024-12-11 10:06:36.370133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.242 [2024-12-11 10:06:36.370165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.242 qpair failed and we were unable to recover it. 00:28:27.242 [2024-12-11 10:06:36.370355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.242 [2024-12-11 10:06:36.370388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.242 qpair failed and we were unable to recover it. 
00:28:27.242 [2024-12-11 10:06:36.370562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.242 [2024-12-11 10:06:36.370595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.242 qpair failed and we were unable to recover it. 00:28:27.242 [2024-12-11 10:06:36.370738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.242 [2024-12-11 10:06:36.370770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.242 qpair failed and we were unable to recover it. 00:28:27.242 [2024-12-11 10:06:36.370961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.242 [2024-12-11 10:06:36.370994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.242 qpair failed and we were unable to recover it. 00:28:27.242 [2024-12-11 10:06:36.371181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.242 [2024-12-11 10:06:36.371214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.242 qpair failed and we were unable to recover it. 00:28:27.242 [2024-12-11 10:06:36.371407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.242 [2024-12-11 10:06:36.371440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.242 qpair failed and we were unable to recover it. 00:28:27.242 [2024-12-11 10:06:36.371549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.242 [2024-12-11 10:06:36.371581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.242 qpair failed and we were unable to recover it. 00:28:27.242 [2024-12-11 10:06:36.371709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.242 [2024-12-11 10:06:36.371743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.242 qpair failed and we were unable to recover it. 00:28:27.242 [2024-12-11 10:06:36.371942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.242 [2024-12-11 10:06:36.371976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.242 qpair failed and we were unable to recover it. 00:28:27.242 [2024-12-11 10:06:36.372160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.242 [2024-12-11 10:06:36.372192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.242 qpair failed and we were unable to recover it. 00:28:27.242 [2024-12-11 10:06:36.372323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.242 [2024-12-11 10:06:36.372356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.242 qpair failed and we were unable to recover it. 
00:28:27.242 [2024-12-11 10:06:36.372595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.242 [2024-12-11 10:06:36.372629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.242 qpair failed and we were unable to recover it. 00:28:27.242 [2024-12-11 10:06:36.372869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.242 [2024-12-11 10:06:36.372901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.242 qpair failed and we were unable to recover it. 00:28:27.242 [2024-12-11 10:06:36.373012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.242 [2024-12-11 10:06:36.373045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.242 qpair failed and we were unable to recover it. 00:28:27.242 [2024-12-11 10:06:36.373244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.242 [2024-12-11 10:06:36.373278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.242 qpair failed and we were unable to recover it. 00:28:27.242 [2024-12-11 10:06:36.373405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.242 [2024-12-11 10:06:36.373438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.242 qpair failed and we were unable to recover it. 00:28:27.242 [2024-12-11 10:06:36.373558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.242 [2024-12-11 10:06:36.373591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.242 qpair failed and we were unable to recover it. 00:28:27.242 [2024-12-11 10:06:36.373769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.242 [2024-12-11 10:06:36.373801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.242 qpair failed and we were unable to recover it. 00:28:27.242 [2024-12-11 10:06:36.373925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.242 [2024-12-11 10:06:36.373957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.242 qpair failed and we were unable to recover it. 00:28:27.242 [2024-12-11 10:06:36.374200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.242 [2024-12-11 10:06:36.374258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.243 qpair failed and we were unable to recover it. 00:28:27.243 [2024-12-11 10:06:36.374392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.243 [2024-12-11 10:06:36.374425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.243 qpair failed and we were unable to recover it. 
00:28:27.243 [2024-12-11 10:06:36.374545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.243 [2024-12-11 10:06:36.374577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.243 qpair failed and we were unable to recover it. 00:28:27.243 [2024-12-11 10:06:36.374778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.243 [2024-12-11 10:06:36.374811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.243 qpair failed and we were unable to recover it. 00:28:27.243 [2024-12-11 10:06:36.374941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.243 [2024-12-11 10:06:36.374975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.243 qpair failed and we were unable to recover it. 00:28:27.243 [2024-12-11 10:06:36.375244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.243 [2024-12-11 10:06:36.375278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.243 qpair failed and we were unable to recover it. 00:28:27.243 [2024-12-11 10:06:36.375523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.243 [2024-12-11 10:06:36.375555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.243 qpair failed and we were unable to recover it. 00:28:27.243 [2024-12-11 10:06:36.375766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.243 [2024-12-11 10:06:36.375799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.243 qpair failed and we were unable to recover it. 00:28:27.243 [2024-12-11 10:06:36.376055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.243 [2024-12-11 10:06:36.376088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.243 qpair failed and we were unable to recover it. 00:28:27.243 [2024-12-11 10:06:36.376266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.243 [2024-12-11 10:06:36.376299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.243 qpair failed and we were unable to recover it. 00:28:27.243 [2024-12-11 10:06:36.376441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.243 [2024-12-11 10:06:36.376473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.243 qpair failed and we were unable to recover it. 00:28:27.243 [2024-12-11 10:06:36.376660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.243 [2024-12-11 10:06:36.376693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.243 qpair failed and we were unable to recover it. 
00:28:27.243 [2024-12-11 10:06:36.376818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.243 [2024-12-11 10:06:36.376855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.243 qpair failed and we were unable to recover it. 00:28:27.243 [2024-12-11 10:06:36.376990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.243 [2024-12-11 10:06:36.377022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.243 qpair failed and we were unable to recover it. 00:28:27.243 [2024-12-11 10:06:36.377194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.243 [2024-12-11 10:06:36.377238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.243 qpair failed and we were unable to recover it. 00:28:27.243 [2024-12-11 10:06:36.377346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.243 [2024-12-11 10:06:36.377376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.243 qpair failed and we were unable to recover it. 00:28:27.243 [2024-12-11 10:06:36.377616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.243 [2024-12-11 10:06:36.377648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.243 qpair failed and we were unable to recover it. 00:28:27.243 [2024-12-11 10:06:36.377775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.243 [2024-12-11 10:06:36.377808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.243 qpair failed and we were unable to recover it. 00:28:27.243 [2024-12-11 10:06:36.377990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.243 [2024-12-11 10:06:36.378023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.243 qpair failed and we were unable to recover it. 00:28:27.243 [2024-12-11 10:06:36.378289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.243 [2024-12-11 10:06:36.378322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.243 qpair failed and we were unable to recover it. 00:28:27.243 [2024-12-11 10:06:36.378428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.243 [2024-12-11 10:06:36.378461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.243 qpair failed and we were unable to recover it. 00:28:27.243 [2024-12-11 10:06:36.378653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.243 [2024-12-11 10:06:36.378685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.243 qpair failed and we were unable to recover it. 
00:28:27.243 [2024-12-11 10:06:36.378871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.243 [2024-12-11 10:06:36.378903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.243 qpair failed and we were unable to recover it. 00:28:27.243 [2024-12-11 10:06:36.379073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.243 [2024-12-11 10:06:36.379106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.243 qpair failed and we were unable to recover it. 00:28:27.243 [2024-12-11 10:06:36.379295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.243 [2024-12-11 10:06:36.379330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.243 qpair failed and we were unable to recover it. 00:28:27.243 [2024-12-11 10:06:36.379455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.243 [2024-12-11 10:06:36.379487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.243 qpair failed and we were unable to recover it. 00:28:27.243 [2024-12-11 10:06:36.379752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.243 [2024-12-11 10:06:36.379785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.243 qpair failed and we were unable to recover it. 00:28:27.243 [2024-12-11 10:06:36.380021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.243 [2024-12-11 10:06:36.380053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.243 qpair failed and we were unable to recover it. 00:28:27.243 [2024-12-11 10:06:36.380192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.243 [2024-12-11 10:06:36.380231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.243 qpair failed and we were unable to recover it. 00:28:27.243 [2024-12-11 10:06:36.380350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.243 [2024-12-11 10:06:36.380382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.243 qpair failed and we were unable to recover it. 00:28:27.243 [2024-12-11 10:06:36.380508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.243 [2024-12-11 10:06:36.380541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.243 qpair failed and we were unable to recover it. 00:28:27.243 [2024-12-11 10:06:36.380781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.243 [2024-12-11 10:06:36.380815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.243 qpair failed and we were unable to recover it. 
00:28:27.243 [2024-12-11 10:06:36.380993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.243 [2024-12-11 10:06:36.381026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.243 qpair failed and we were unable to recover it. 00:28:27.243 [2024-12-11 10:06:36.381149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.243 [2024-12-11 10:06:36.381181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.243 qpair failed and we were unable to recover it. 00:28:27.243 [2024-12-11 10:06:36.381373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.243 [2024-12-11 10:06:36.381407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.243 qpair failed and we were unable to recover it. 00:28:27.243 [2024-12-11 10:06:36.381582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.243 [2024-12-11 10:06:36.381614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.243 qpair failed and we were unable to recover it. 00:28:27.243 [2024-12-11 10:06:36.381724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.243 [2024-12-11 10:06:36.381756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.243 qpair failed and we were unable to recover it. 00:28:27.243 [2024-12-11 10:06:36.381889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.243 [2024-12-11 10:06:36.381922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.243 qpair failed and we were unable to recover it. 00:28:27.243 [2024-12-11 10:06:36.382093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.243 [2024-12-11 10:06:36.382126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.243 qpair failed and we were unable to recover it. 00:28:27.243 [2024-12-11 10:06:36.382300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.243 [2024-12-11 10:06:36.382340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.243 qpair failed and we were unable to recover it. 00:28:27.243 [2024-12-11 10:06:36.382528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.243 [2024-12-11 10:06:36.382561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.244 qpair failed and we were unable to recover it. 00:28:27.244 [2024-12-11 10:06:36.382733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.244 [2024-12-11 10:06:36.382766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.244 qpair failed and we were unable to recover it. 
00:28:27.244 [2024-12-11 10:06:36.382889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.244 [2024-12-11 10:06:36.382922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.244 qpair failed and we were unable to recover it. 00:28:27.244 [2024-12-11 10:06:36.383164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.244 [2024-12-11 10:06:36.383197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.244 qpair failed and we were unable to recover it. 00:28:27.244 [2024-12-11 10:06:36.383325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.244 [2024-12-11 10:06:36.383358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.244 qpair failed and we were unable to recover it. 00:28:27.244 [2024-12-11 10:06:36.383599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.244 [2024-12-11 10:06:36.383632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.244 qpair failed and we were unable to recover it. 00:28:27.244 [2024-12-11 10:06:36.383813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.244 [2024-12-11 10:06:36.383845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.244 qpair failed and we were unable to recover it. 00:28:27.244 [2024-12-11 10:06:36.384030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.244 [2024-12-11 10:06:36.384063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.244 qpair failed and we were unable to recover it. 00:28:27.244 [2024-12-11 10:06:36.384179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.244 [2024-12-11 10:06:36.384213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.244 qpair failed and we were unable to recover it. 00:28:27.244 [2024-12-11 10:06:36.384413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.244 [2024-12-11 10:06:36.384446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.244 qpair failed and we were unable to recover it. 00:28:27.244 [2024-12-11 10:06:36.384656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.244 [2024-12-11 10:06:36.384688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.244 qpair failed and we were unable to recover it. 00:28:27.244 [2024-12-11 10:06:36.384862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.244 [2024-12-11 10:06:36.384895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.244 qpair failed and we were unable to recover it. 
00:28:27.249 [2024-12-11 10:06:36.427128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.249 [2024-12-11 10:06:36.427159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.249 qpair failed and we were unable to recover it. 00:28:27.249 [2024-12-11 10:06:36.427299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.249 [2024-12-11 10:06:36.427332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.249 qpair failed and we were unable to recover it. 00:28:27.249 [2024-12-11 10:06:36.427505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.249 [2024-12-11 10:06:36.427537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.249 qpair failed and we were unable to recover it. 00:28:27.249 [2024-12-11 10:06:36.427708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.249 [2024-12-11 10:06:36.427740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.249 qpair failed and we were unable to recover it. 00:28:27.249 [2024-12-11 10:06:36.427977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.249 [2024-12-11 10:06:36.428011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.249 qpair failed and we were unable to recover it. 00:28:27.249 [2024-12-11 10:06:36.428211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.249 [2024-12-11 10:06:36.428255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.249 qpair failed and we were unable to recover it. 00:28:27.249 [2024-12-11 10:06:36.428371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.249 [2024-12-11 10:06:36.428403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.249 qpair failed and we were unable to recover it. 00:28:27.249 [2024-12-11 10:06:36.428640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.249 [2024-12-11 10:06:36.428672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.249 qpair failed and we were unable to recover it. 00:28:27.249 [2024-12-11 10:06:36.428799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.249 [2024-12-11 10:06:36.428831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.249 qpair failed and we were unable to recover it. 00:28:27.249 [2024-12-11 10:06:36.429037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.249 [2024-12-11 10:06:36.429069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.249 qpair failed and we were unable to recover it. 
00:28:27.249 [2024-12-11 10:06:36.429256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.249 [2024-12-11 10:06:36.429289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.249 qpair failed and we were unable to recover it. 00:28:27.249 [2024-12-11 10:06:36.429405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.249 [2024-12-11 10:06:36.429438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.249 qpair failed and we were unable to recover it. 00:28:27.249 [2024-12-11 10:06:36.429645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.249 [2024-12-11 10:06:36.429678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.249 qpair failed and we were unable to recover it. 00:28:27.249 [2024-12-11 10:06:36.429798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.249 [2024-12-11 10:06:36.429831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.249 qpair failed and we were unable to recover it. 00:28:27.249 [2024-12-11 10:06:36.430010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.249 [2024-12-11 10:06:36.430042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.249 qpair failed and we were unable to recover it. 00:28:27.249 [2024-12-11 10:06:36.430212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.249 [2024-12-11 10:06:36.430254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.249 qpair failed and we were unable to recover it. 00:28:27.249 [2024-12-11 10:06:36.430453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.249 [2024-12-11 10:06:36.430485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.249 qpair failed and we were unable to recover it. 00:28:27.249 [2024-12-11 10:06:36.430667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.249 [2024-12-11 10:06:36.430700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.249 qpair failed and we were unable to recover it. 00:28:27.249 [2024-12-11 10:06:36.430954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.250 [2024-12-11 10:06:36.430987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.250 qpair failed and we were unable to recover it. 00:28:27.250 [2024-12-11 10:06:36.431183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.250 [2024-12-11 10:06:36.431225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.250 qpair failed and we were unable to recover it. 
00:28:27.250 [2024-12-11 10:06:36.431471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.250 [2024-12-11 10:06:36.431505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.250 qpair failed and we were unable to recover it. 00:28:27.250 [2024-12-11 10:06:36.431708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.250 [2024-12-11 10:06:36.431741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.250 qpair failed and we were unable to recover it. 00:28:27.250 [2024-12-11 10:06:36.431948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.250 [2024-12-11 10:06:36.431979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.250 qpair failed and we were unable to recover it. 00:28:27.250 [2024-12-11 10:06:36.432150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.250 [2024-12-11 10:06:36.432183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.250 qpair failed and we were unable to recover it. 00:28:27.250 [2024-12-11 10:06:36.432346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.250 [2024-12-11 10:06:36.432380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.250 qpair failed and we were unable to recover it. 00:28:27.250 [2024-12-11 10:06:36.432641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.250 [2024-12-11 10:06:36.432672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.250 qpair failed and we were unable to recover it. 00:28:27.250 [2024-12-11 10:06:36.432915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.250 [2024-12-11 10:06:36.432947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.250 qpair failed and we were unable to recover it. 00:28:27.250 [2024-12-11 10:06:36.433069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.250 [2024-12-11 10:06:36.433102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.250 qpair failed and we were unable to recover it. 00:28:27.250 [2024-12-11 10:06:36.433339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.250 [2024-12-11 10:06:36.433374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.250 qpair failed and we were unable to recover it. 00:28:27.250 [2024-12-11 10:06:36.433499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.250 [2024-12-11 10:06:36.433531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.250 qpair failed and we were unable to recover it. 
00:28:27.250 [2024-12-11 10:06:36.433713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.250 [2024-12-11 10:06:36.433745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.250 qpair failed and we were unable to recover it. 00:28:27.250 [2024-12-11 10:06:36.433927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.250 [2024-12-11 10:06:36.433960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.250 qpair failed and we were unable to recover it. 00:28:27.250 [2024-12-11 10:06:36.434198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.250 [2024-12-11 10:06:36.434250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.250 qpair failed and we were unable to recover it. 00:28:27.250 [2024-12-11 10:06:36.434438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.250 [2024-12-11 10:06:36.434471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.250 qpair failed and we were unable to recover it. 00:28:27.250 [2024-12-11 10:06:36.434583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.250 [2024-12-11 10:06:36.434615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.250 qpair failed and we were unable to recover it. 00:28:27.250 [2024-12-11 10:06:36.434739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.250 [2024-12-11 10:06:36.434771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.250 qpair failed and we were unable to recover it. 00:28:27.250 [2024-12-11 10:06:36.434898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.250 [2024-12-11 10:06:36.434931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.250 qpair failed and we were unable to recover it. 00:28:27.250 [2024-12-11 10:06:36.435056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.250 [2024-12-11 10:06:36.435088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.250 qpair failed and we were unable to recover it. 00:28:27.250 [2024-12-11 10:06:36.435193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.250 [2024-12-11 10:06:36.435232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.250 qpair failed and we were unable to recover it. 00:28:27.250 [2024-12-11 10:06:36.435495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.250 [2024-12-11 10:06:36.435528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.250 qpair failed and we were unable to recover it. 
00:28:27.250 [2024-12-11 10:06:36.435706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.250 [2024-12-11 10:06:36.435739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.250 qpair failed and we were unable to recover it. 00:28:27.250 [2024-12-11 10:06:36.435854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.250 [2024-12-11 10:06:36.435888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.250 qpair failed and we were unable to recover it. 00:28:27.250 [2024-12-11 10:06:36.436020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.250 [2024-12-11 10:06:36.436053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.250 qpair failed and we were unable to recover it. 00:28:27.250 [2024-12-11 10:06:36.436251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.250 [2024-12-11 10:06:36.436285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.250 qpair failed and we were unable to recover it. 00:28:27.250 [2024-12-11 10:06:36.436401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.250 [2024-12-11 10:06:36.436433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.250 qpair failed and we were unable to recover it. 00:28:27.250 [2024-12-11 10:06:36.436619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.250 [2024-12-11 10:06:36.436651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.250 qpair failed and we were unable to recover it. 00:28:27.250 [2024-12-11 10:06:36.436833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.250 [2024-12-11 10:06:36.436865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.250 qpair failed and we were unable to recover it. 00:28:27.250 [2024-12-11 10:06:36.437000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.250 [2024-12-11 10:06:36.437032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.250 qpair failed and we were unable to recover it. 00:28:27.250 [2024-12-11 10:06:36.437198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.250 [2024-12-11 10:06:36.437242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.250 qpair failed and we were unable to recover it. 00:28:27.250 [2024-12-11 10:06:36.437371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.250 [2024-12-11 10:06:36.437404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.250 qpair failed and we were unable to recover it. 
00:28:27.250 [2024-12-11 10:06:36.437606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.250 [2024-12-11 10:06:36.437637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.250 qpair failed and we were unable to recover it. 00:28:27.250 [2024-12-11 10:06:36.437765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.250 [2024-12-11 10:06:36.437798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.250 qpair failed and we were unable to recover it. 00:28:27.250 [2024-12-11 10:06:36.437985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.250 [2024-12-11 10:06:36.438018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.250 qpair failed and we were unable to recover it. 00:28:27.250 [2024-12-11 10:06:36.438284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.250 [2024-12-11 10:06:36.438317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.250 qpair failed and we were unable to recover it. 00:28:27.250 [2024-12-11 10:06:36.438435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.250 [2024-12-11 10:06:36.438467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.250 qpair failed and we were unable to recover it. 00:28:27.250 [2024-12-11 10:06:36.438586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.250 [2024-12-11 10:06:36.438618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.250 qpair failed and we were unable to recover it. 00:28:27.250 [2024-12-11 10:06:36.438742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.250 [2024-12-11 10:06:36.438774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.250 qpair failed and we were unable to recover it. 00:28:27.250 [2024-12-11 10:06:36.438953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.250 [2024-12-11 10:06:36.438985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.250 qpair failed and we were unable to recover it. 00:28:27.251 [2024-12-11 10:06:36.439171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-12-11 10:06:36.439205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-12-11 10:06:36.439383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-12-11 10:06:36.439416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 
00:28:27.251 [2024-12-11 10:06:36.439536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-12-11 10:06:36.439568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-12-11 10:06:36.439740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-12-11 10:06:36.439773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-12-11 10:06:36.439972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-12-11 10:06:36.440004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-12-11 10:06:36.440183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-12-11 10:06:36.440215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-12-11 10:06:36.440428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-12-11 10:06:36.440462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-12-11 10:06:36.440750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-12-11 10:06:36.440783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-12-11 10:06:36.440912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-12-11 10:06:36.440944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-12-11 10:06:36.441064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-12-11 10:06:36.441096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-12-11 10:06:36.441285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-12-11 10:06:36.441319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-12-11 10:06:36.441568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-12-11 10:06:36.441600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 
00:28:27.251 [2024-12-11 10:06:36.441856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-12-11 10:06:36.441888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-12-11 10:06:36.442099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-12-11 10:06:36.442131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-12-11 10:06:36.442255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-12-11 10:06:36.442288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-12-11 10:06:36.442472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-12-11 10:06:36.442505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-12-11 10:06:36.442733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-12-11 10:06:36.442765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-12-11 10:06:36.442953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-12-11 10:06:36.442987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-12-11 10:06:36.443094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-12-11 10:06:36.443126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-12-11 10:06:36.443295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-12-11 10:06:36.443328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-12-11 10:06:36.443447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-12-11 10:06:36.443480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-12-11 10:06:36.443652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-12-11 10:06:36.443683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 
00:28:27.251 [2024-12-11 10:06:36.443852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-12-11 10:06:36.443885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-12-11 10:06:36.444022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-12-11 10:06:36.444055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-12-11 10:06:36.444188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-12-11 10:06:36.444249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-12-11 10:06:36.444372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-12-11 10:06:36.444404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-12-11 10:06:36.444595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-12-11 10:06:36.444627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-12-11 10:06:36.444762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-12-11 10:06:36.444794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-12-11 10:06:36.444984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-12-11 10:06:36.445017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-12-11 10:06:36.445136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-12-11 10:06:36.445170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-12-11 10:06:36.445354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-12-11 10:06:36.445388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-12-11 10:06:36.445506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-12-11 10:06:36.445539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 
00:28:27.251 [2024-12-11 10:06:36.445660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-12-11 10:06:36.445693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-12-11 10:06:36.445833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-12-11 10:06:36.445865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-12-11 10:06:36.446048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-12-11 10:06:36.446081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-12-11 10:06:36.446189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-12-11 10:06:36.446231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-12-11 10:06:36.446366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-12-11 10:06:36.446398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-12-11 10:06:36.446661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-12-11 10:06:36.446693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-12-11 10:06:36.446954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-12-11 10:06:36.446987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-12-11 10:06:36.447155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-12-11 10:06:36.447188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-12-11 10:06:36.447406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-12-11 10:06:36.447438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-12-11 10:06:36.447568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-12-11 10:06:36.447600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 
00:28:27.252 [2024-12-11 10:06:36.447774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-12-11 10:06:36.447812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-12-11 10:06:36.447995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-12-11 10:06:36.448028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-12-11 10:06:36.448199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-12-11 10:06:36.448242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-12-11 10:06:36.448416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-12-11 10:06:36.448449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-12-11 10:06:36.448567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-12-11 10:06:36.448600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-12-11 10:06:36.448859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-12-11 10:06:36.448891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-12-11 10:06:36.449143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-12-11 10:06:36.449176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-12-11 10:06:36.449310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-12-11 10:06:36.449343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-12-11 10:06:36.449520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-12-11 10:06:36.449552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-12-11 10:06:36.449668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-12-11 10:06:36.449701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 
00:28:27.252 [2024-12-11 10:06:36.449890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-12-11 10:06:36.449922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-12-11 10:06:36.450127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-12-11 10:06:36.450160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-12-11 10:06:36.450286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-12-11 10:06:36.450318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-12-11 10:06:36.450581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-12-11 10:06:36.450615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-12-11 10:06:36.450789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-12-11 10:06:36.450822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-12-11 10:06:36.450958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-12-11 10:06:36.450991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-12-11 10:06:36.451163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-12-11 10:06:36.451196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-12-11 10:06:36.451333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-12-11 10:06:36.451366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-12-11 10:06:36.451467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-12-11 10:06:36.451500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-12-11 10:06:36.451623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-12-11 10:06:36.451656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 
00:28:27.252 [2024-12-11 10:06:36.451828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-12-11 10:06:36.451859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-12-11 10:06:36.452053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-12-11 10:06:36.452086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-12-11 10:06:36.452324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-12-11 10:06:36.452358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-12-11 10:06:36.452463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-12-11 10:06:36.452495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-12-11 10:06:36.452678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-12-11 10:06:36.452712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-12-11 10:06:36.452850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-12-11 10:06:36.452882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-12-11 10:06:36.452993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-12-11 10:06:36.453026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-12-11 10:06:36.453207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-12-11 10:06:36.453249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-12-11 10:06:36.453363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-12-11 10:06:36.453395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-12-11 10:06:36.453565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-12-11 10:06:36.453598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 
00:28:27.252 [2024-12-11 10:06:36.453781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-12-11 10:06:36.453815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-12-11 10:06:36.453992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-12-11 10:06:36.454024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-12-11 10:06:36.454201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-12-11 10:06:36.454242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-12-11 10:06:36.454471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-12-11 10:06:36.454504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-12-11 10:06:36.454695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-12-11 10:06:36.454728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-12-11 10:06:36.454906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-12-11 10:06:36.454938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-12-11 10:06:36.455070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-12-11 10:06:36.455102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-12-11 10:06:36.455301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.253 [2024-12-11 10:06:36.455334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.253 qpair failed and we were unable to recover it. 00:28:27.253 [2024-12-11 10:06:36.455528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.253 [2024-12-11 10:06:36.455560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.253 qpair failed and we were unable to recover it. 00:28:27.253 [2024-12-11 10:06:36.455740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.253 [2024-12-11 10:06:36.455772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.253 qpair failed and we were unable to recover it. 
00:28:27.253 [2024-12-11 10:06:36.456012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.253 [2024-12-11 10:06:36.456044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.253 qpair failed and we were unable to recover it. 00:28:27.253 [2024-12-11 10:06:36.456282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.253 [2024-12-11 10:06:36.456321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.253 qpair failed and we were unable to recover it. 00:28:27.253 [2024-12-11 10:06:36.456448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.253 [2024-12-11 10:06:36.456480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.253 qpair failed and we were unable to recover it. 00:28:27.253 [2024-12-11 10:06:36.456666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.253 [2024-12-11 10:06:36.456699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.253 qpair failed and we were unable to recover it. 00:28:27.253 [2024-12-11 10:06:36.456978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.253 [2024-12-11 10:06:36.457011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.253 qpair failed and we were unable to recover it. 00:28:27.253 [2024-12-11 10:06:36.457146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.253 [2024-12-11 10:06:36.457179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.253 qpair failed and we were unable to recover it. 00:28:27.253 [2024-12-11 10:06:36.457295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.253 [2024-12-11 10:06:36.457329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.253 qpair failed and we were unable to recover it. 00:28:27.253 [2024-12-11 10:06:36.457509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.253 [2024-12-11 10:06:36.457541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.253 qpair failed and we were unable to recover it. 00:28:27.253 [2024-12-11 10:06:36.457665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.253 [2024-12-11 10:06:36.457698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.253 qpair failed and we were unable to recover it. 00:28:27.253 [2024-12-11 10:06:36.457814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.253 [2024-12-11 10:06:36.457846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.253 qpair failed and we were unable to recover it. 
00:28:27.253 [2024-12-11 10:06:36.458025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.253 [2024-12-11 10:06:36.458058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.253 qpair failed and we were unable to recover it.
00:28:27.253 [2024-12-11 10:06:36.458275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.253 [2024-12-11 10:06:36.458309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.253 qpair failed and we were unable to recover it.
00:28:27.253 [2024-12-11 10:06:36.458519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.253 [2024-12-11 10:06:36.458552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.253 qpair failed and we were unable to recover it.
00:28:27.253 [2024-12-11 10:06:36.458746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.253 [2024-12-11 10:06:36.458779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.253 qpair failed and we were unable to recover it.
00:28:27.253 [2024-12-11 10:06:36.458973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.253 [2024-12-11 10:06:36.459005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.253 qpair failed and we were unable to recover it.
00:28:27.253 [2024-12-11 10:06:36.459187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.253 [2024-12-11 10:06:36.459228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.253 qpair failed and we were unable to recover it.
00:28:27.253 [2024-12-11 10:06:36.459350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.253 [2024-12-11 10:06:36.459382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.253 qpair failed and we were unable to recover it.
00:28:27.253 [2024-12-11 10:06:36.459646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.253 [2024-12-11 10:06:36.459678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.253 qpair failed and we were unable to recover it.
00:28:27.253 [2024-12-11 10:06:36.459858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.253 [2024-12-11 10:06:36.459889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.253 qpair failed and we were unable to recover it.
00:28:27.253 [2024-12-11 10:06:36.460016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.253 [2024-12-11 10:06:36.460048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.253 qpair failed and we were unable to recover it.
00:28:27.253 [2024-12-11 10:06:36.460257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.253 [2024-12-11 10:06:36.460290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.253 qpair failed and we were unable to recover it.
00:28:27.253 [2024-12-11 10:06:36.460417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.253 [2024-12-11 10:06:36.460450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.253 qpair failed and we were unable to recover it.
00:28:27.253 [2024-12-11 10:06:36.460564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.253 [2024-12-11 10:06:36.460595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.253 qpair failed and we were unable to recover it.
00:28:27.253 [2024-12-11 10:06:36.460827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.253 [2024-12-11 10:06:36.460861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.253 qpair failed and we were unable to recover it.
00:28:27.253 [2024-12-11 10:06:36.460983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.253 [2024-12-11 10:06:36.461016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.253 qpair failed and we were unable to recover it.
00:28:27.253 [2024-12-11 10:06:36.461132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.253 [2024-12-11 10:06:36.461164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.253 qpair failed and we were unable to recover it.
00:28:27.253 [2024-12-11 10:06:36.461345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.253 [2024-12-11 10:06:36.461379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.253 qpair failed and we were unable to recover it.
00:28:27.253 [2024-12-11 10:06:36.461502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.253 [2024-12-11 10:06:36.461535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.253 qpair failed and we were unable to recover it.
00:28:27.253 [2024-12-11 10:06:36.461707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.253 [2024-12-11 10:06:36.461747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.253 qpair failed and we were unable to recover it.
00:28:27.253 [2024-12-11 10:06:36.461940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.253 [2024-12-11 10:06:36.461973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.253 qpair failed and we were unable to recover it.
00:28:27.253 [2024-12-11 10:06:36.462101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.253 [2024-12-11 10:06:36.462133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.253 qpair failed and we were unable to recover it.
00:28:27.253 [2024-12-11 10:06:36.462259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.253 [2024-12-11 10:06:36.462293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.253 qpair failed and we were unable to recover it.
00:28:27.253 [2024-12-11 10:06:36.462549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.254 [2024-12-11 10:06:36.462580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.254 qpair failed and we were unable to recover it.
00:28:27.254 [2024-12-11 10:06:36.462751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.254 [2024-12-11 10:06:36.462783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.254 qpair failed and we were unable to recover it.
00:28:27.254 [2024-12-11 10:06:36.462966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.254 [2024-12-11 10:06:36.462999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.254 qpair failed and we were unable to recover it.
00:28:27.254 [2024-12-11 10:06:36.463166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.254 [2024-12-11 10:06:36.463198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.254 qpair failed and we were unable to recover it.
00:28:27.254 [2024-12-11 10:06:36.463315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.254 [2024-12-11 10:06:36.463348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.254 qpair failed and we were unable to recover it.
00:28:27.254 [2024-12-11 10:06:36.463469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.254 [2024-12-11 10:06:36.463501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.254 qpair failed and we were unable to recover it.
00:28:27.254 [2024-12-11 10:06:36.463682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.254 [2024-12-11 10:06:36.463714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.254 qpair failed and we were unable to recover it.
00:28:27.254 [2024-12-11 10:06:36.463851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.254 [2024-12-11 10:06:36.463885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.254 qpair failed and we were unable to recover it.
00:28:27.254 [2024-12-11 10:06:36.464009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.254 [2024-12-11 10:06:36.464041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.254 qpair failed and we were unable to recover it.
00:28:27.254 [2024-12-11 10:06:36.464167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.254 [2024-12-11 10:06:36.464199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.254 qpair failed and we were unable to recover it.
00:28:27.254 [2024-12-11 10:06:36.464398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.254 [2024-12-11 10:06:36.464468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.254 qpair failed and we were unable to recover it.
00:28:27.254 [2024-12-11 10:06:36.464672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.254 [2024-12-11 10:06:36.464710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.254 qpair failed and we were unable to recover it.
00:28:27.254 [2024-12-11 10:06:36.464846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.254 [2024-12-11 10:06:36.464880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.254 qpair failed and we were unable to recover it.
00:28:27.254 [2024-12-11 10:06:36.465122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.254 [2024-12-11 10:06:36.465155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.254 qpair failed and we were unable to recover it.
00:28:27.254 [2024-12-11 10:06:36.465361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.254 [2024-12-11 10:06:36.465395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.254 qpair failed and we were unable to recover it.
00:28:27.254 [2024-12-11 10:06:36.465567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.254 [2024-12-11 10:06:36.465597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.254 qpair failed and we were unable to recover it.
00:28:27.254 [2024-12-11 10:06:36.465808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.254 [2024-12-11 10:06:36.465839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.254 qpair failed and we were unable to recover it.
00:28:27.254 [2024-12-11 10:06:36.465969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.254 [2024-12-11 10:06:36.466002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.254 qpair failed and we were unable to recover it.
00:28:27.254 [2024-12-11 10:06:36.466174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.254 [2024-12-11 10:06:36.466207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.254 qpair failed and we were unable to recover it.
00:28:27.254 [2024-12-11 10:06:36.466385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.254 [2024-12-11 10:06:36.466419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.254 qpair failed and we were unable to recover it.
00:28:27.254 [2024-12-11 10:06:36.466612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.254 [2024-12-11 10:06:36.466644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.254 qpair failed and we were unable to recover it.
00:28:27.254 [2024-12-11 10:06:36.466833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.254 [2024-12-11 10:06:36.466865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.254 qpair failed and we were unable to recover it.
00:28:27.254 [2024-12-11 10:06:36.466989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.254 [2024-12-11 10:06:36.467021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.254 qpair failed and we were unable to recover it.
00:28:27.254 [2024-12-11 10:06:36.467273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.254 [2024-12-11 10:06:36.467317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.254 qpair failed and we were unable to recover it.
00:28:27.254 [2024-12-11 10:06:36.467556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.254 [2024-12-11 10:06:36.467588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.254 qpair failed and we were unable to recover it.
00:28:27.254 [2024-12-11 10:06:36.467777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.254 [2024-12-11 10:06:36.467809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.254 qpair failed and we were unable to recover it.
00:28:27.254 [2024-12-11 10:06:36.467987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.254 [2024-12-11 10:06:36.468020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.254 qpair failed and we were unable to recover it.
00:28:27.254 [2024-12-11 10:06:36.468209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.254 [2024-12-11 10:06:36.468252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.254 qpair failed and we were unable to recover it.
00:28:27.254 [2024-12-11 10:06:36.468448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.254 [2024-12-11 10:06:36.468481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.254 qpair failed and we were unable to recover it.
00:28:27.254 [2024-12-11 10:06:36.468613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.254 [2024-12-11 10:06:36.468645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.254 qpair failed and we were unable to recover it.
00:28:27.254 [2024-12-11 10:06:36.468836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.254 [2024-12-11 10:06:36.468868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.254 qpair failed and we were unable to recover it.
00:28:27.254 [2024-12-11 10:06:36.469145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.254 [2024-12-11 10:06:36.469177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.254 qpair failed and we were unable to recover it.
00:28:27.254 [2024-12-11 10:06:36.469391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.254 [2024-12-11 10:06:36.469425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.254 qpair failed and we were unable to recover it.
00:28:27.254 [2024-12-11 10:06:36.469552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.254 [2024-12-11 10:06:36.469585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.254 qpair failed and we were unable to recover it.
00:28:27.254 [2024-12-11 10:06:36.469767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.254 [2024-12-11 10:06:36.469800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.254 qpair failed and we were unable to recover it.
00:28:27.254 [2024-12-11 10:06:36.469902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.254 [2024-12-11 10:06:36.469934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.254 qpair failed and we were unable to recover it.
00:28:27.254 [2024-12-11 10:06:36.470208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.254 [2024-12-11 10:06:36.470251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.254 qpair failed and we were unable to recover it.
00:28:27.254 [2024-12-11 10:06:36.470401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.254 [2024-12-11 10:06:36.470434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.254 qpair failed and we were unable to recover it.
00:28:27.254 [2024-12-11 10:06:36.470561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.254 [2024-12-11 10:06:36.470594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.254 qpair failed and we were unable to recover it.
00:28:27.254 [2024-12-11 10:06:36.470785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.254 [2024-12-11 10:06:36.470818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.255 qpair failed and we were unable to recover it.
00:28:27.255 [2024-12-11 10:06:36.470995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.255 [2024-12-11 10:06:36.471027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.255 qpair failed and we were unable to recover it.
00:28:27.255 [2024-12-11 10:06:36.471210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.255 [2024-12-11 10:06:36.471252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.255 qpair failed and we were unable to recover it.
00:28:27.255 [2024-12-11 10:06:36.471456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.255 [2024-12-11 10:06:36.471489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.255 qpair failed and we were unable to recover it.
00:28:27.255 [2024-12-11 10:06:36.471628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.255 [2024-12-11 10:06:36.471661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.255 qpair failed and we were unable to recover it.
00:28:27.255 [2024-12-11 10:06:36.471898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.255 [2024-12-11 10:06:36.471930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.255 qpair failed and we were unable to recover it.
00:28:27.255 [2024-12-11 10:06:36.472123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.255 [2024-12-11 10:06:36.472156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.255 qpair failed and we were unable to recover it.
00:28:27.255 [2024-12-11 10:06:36.472348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.255 [2024-12-11 10:06:36.472380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.255 qpair failed and we were unable to recover it.
00:28:27.255 [2024-12-11 10:06:36.472565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.255 [2024-12-11 10:06:36.472597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.255 qpair failed and we were unable to recover it.
00:28:27.255 [2024-12-11 10:06:36.472730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.255 [2024-12-11 10:06:36.472763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.255 qpair failed and we were unable to recover it.
00:28:27.255 [2024-12-11 10:06:36.473001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.255 [2024-12-11 10:06:36.473034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.255 qpair failed and we were unable to recover it.
00:28:27.255 [2024-12-11 10:06:36.473146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.255 [2024-12-11 10:06:36.473180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.255 qpair failed and we were unable to recover it.
00:28:27.255 [2024-12-11 10:06:36.473390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.255 [2024-12-11 10:06:36.473424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.255 qpair failed and we were unable to recover it.
00:28:27.255 [2024-12-11 10:06:36.473609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.255 [2024-12-11 10:06:36.473640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.255 qpair failed and we were unable to recover it.
00:28:27.255 [2024-12-11 10:06:36.473821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.255 [2024-12-11 10:06:36.473853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.255 qpair failed and we were unable to recover it.
00:28:27.255 [2024-12-11 10:06:36.474035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.255 [2024-12-11 10:06:36.474067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.255 qpair failed and we were unable to recover it.
00:28:27.255 [2024-12-11 10:06:36.474181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.255 [2024-12-11 10:06:36.474215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.255 qpair failed and we were unable to recover it.
00:28:27.255 [2024-12-11 10:06:36.474413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.255 [2024-12-11 10:06:36.474446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.255 qpair failed and we were unable to recover it.
00:28:27.255 [2024-12-11 10:06:36.474568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.255 [2024-12-11 10:06:36.474601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.255 qpair failed and we were unable to recover it.
00:28:27.255 [2024-12-11 10:06:36.474781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.255 [2024-12-11 10:06:36.474816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.255 qpair failed and we were unable to recover it.
00:28:27.255 [2024-12-11 10:06:36.475059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.255 [2024-12-11 10:06:36.475092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.255 qpair failed and we were unable to recover it.
00:28:27.255 [2024-12-11 10:06:36.475336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.255 [2024-12-11 10:06:36.475371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.255 qpair failed and we were unable to recover it.
00:28:27.255 [2024-12-11 10:06:36.475502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.255 [2024-12-11 10:06:36.475536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.255 qpair failed and we were unable to recover it.
00:28:27.255 [2024-12-11 10:06:36.475824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.255 [2024-12-11 10:06:36.475857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.255 qpair failed and we were unable to recover it.
00:28:27.255 [2024-12-11 10:06:36.476054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.255 [2024-12-11 10:06:36.476093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.255 qpair failed and we were unable to recover it.
00:28:27.255 [2024-12-11 10:06:36.476285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.255 [2024-12-11 10:06:36.476318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.255 qpair failed and we were unable to recover it.
00:28:27.255 [2024-12-11 10:06:36.476499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.255 [2024-12-11 10:06:36.476531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.255 qpair failed and we were unable to recover it.
00:28:27.255 [2024-12-11 10:06:36.476713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.255 [2024-12-11 10:06:36.476744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.255 qpair failed and we were unable to recover it.
00:28:27.255 [2024-12-11 10:06:36.476852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.255 [2024-12-11 10:06:36.476883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.255 qpair failed and we were unable to recover it.
00:28:27.255 [2024-12-11 10:06:36.477062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.255 [2024-12-11 10:06:36.477095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.255 qpair failed and we were unable to recover it.
00:28:27.255 [2024-12-11 10:06:36.477297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.255 [2024-12-11 10:06:36.477331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.255 qpair failed and we were unable to recover it.
00:28:27.255 [2024-12-11 10:06:36.477454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.255 [2024-12-11 10:06:36.477487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.255 qpair failed and we were unable to recover it.
00:28:27.255 [2024-12-11 10:06:36.477658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.255 [2024-12-11 10:06:36.477692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.255 qpair failed and we were unable to recover it.
00:28:27.255 [2024-12-11 10:06:36.477908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.255 [2024-12-11 10:06:36.477941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.255 qpair failed and we were unable to recover it.
00:28:27.255 [2024-12-11 10:06:36.478129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.255 [2024-12-11 10:06:36.478161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.255 qpair failed and we were unable to recover it.
00:28:27.255 [2024-12-11 10:06:36.478273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.255 [2024-12-11 10:06:36.478305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.255 qpair failed and we were unable to recover it.
00:28:27.255 [2024-12-11 10:06:36.478442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.255 [2024-12-11 10:06:36.478475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.255 qpair failed and we were unable to recover it.
00:28:27.255 [2024-12-11 10:06:36.478677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.255 [2024-12-11 10:06:36.478710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.255 qpair failed and we were unable to recover it.
00:28:27.255 [2024-12-11 10:06:36.478980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.255 [2024-12-11 10:06:36.479013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.255 qpair failed and we were unable to recover it.
00:28:27.255 [2024-12-11 10:06:36.479255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.255 [2024-12-11 10:06:36.479289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.256 qpair failed and we were unable to recover it.
00:28:27.256 [2024-12-11 10:06:36.479413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.256 [2024-12-11 10:06:36.479445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.256 qpair failed and we were unable to recover it.
00:28:27.256 [2024-12-11 10:06:36.479632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.256 [2024-12-11 10:06:36.479665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.256 qpair failed and we were unable to recover it.
00:28:27.256 [2024-12-11 10:06:36.479914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.256 [2024-12-11 10:06:36.479947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.256 qpair failed and we were unable to recover it.
00:28:27.256 [2024-12-11 10:06:36.480138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.256 [2024-12-11 10:06:36.480170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.256 qpair failed and we were unable to recover it.
00:28:27.256 [2024-12-11 10:06:36.480360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.256 [2024-12-11 10:06:36.480393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.256 qpair failed and we were unable to recover it.
00:28:27.256 [2024-12-11 10:06:36.480499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.256 [2024-12-11 10:06:36.480529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.256 qpair failed and we were unable to recover it.
00:28:27.256 [2024-12-11 10:06:36.480649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.256 [2024-12-11 10:06:36.480681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.256 qpair failed and we were unable to recover it.
00:28:27.256 [2024-12-11 10:06:36.480948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.256 [2024-12-11 10:06:36.480980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.256 qpair failed and we were unable to recover it.
00:28:27.256 [2024-12-11 10:06:36.481154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.256 [2024-12-11 10:06:36.481185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.256 qpair failed and we were unable to recover it.
00:28:27.256 [2024-12-11 10:06:36.481328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.256 [2024-12-11 10:06:36.481370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.256 qpair failed and we were unable to recover it.
00:28:27.256 [2024-12-11 10:06:36.481624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.256 [2024-12-11 10:06:36.481660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.256 qpair failed and we were unable to recover it.
00:28:27.256 [2024-12-11 10:06:36.481826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.256 [2024-12-11 10:06:36.481898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.256 qpair failed and we were unable to recover it.
00:28:27.256 [2024-12-11 10:06:36.482147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.256 [2024-12-11 10:06:36.482185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.256 qpair failed and we were unable to recover it.
00:28:27.256 [2024-12-11 10:06:36.482316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.256 [2024-12-11 10:06:36.482352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.256 qpair failed and we were unable to recover it.
00:28:27.256 [2024-12-11 10:06:36.482551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.256 [2024-12-11 10:06:36.482585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.256 qpair failed and we were unable to recover it.
00:28:27.256 [2024-12-11 10:06:36.482783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.256 [2024-12-11 10:06:36.482816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.256 qpair failed and we were unable to recover it.
00:28:27.256 [2024-12-11 10:06:36.483056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.256 [2024-12-11 10:06:36.483089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.256 qpair failed and we were unable to recover it.
00:28:27.256 [2024-12-11 10:06:36.483360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.256 [2024-12-11 10:06:36.483395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.256 qpair failed and we were unable to recover it.
00:28:27.256 [2024-12-11 10:06:36.483688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.256 [2024-12-11 10:06:36.483721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.256 qpair failed and we were unable to recover it.
00:28:27.256 [2024-12-11 10:06:36.483957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.256 [2024-12-11 10:06:36.483989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.256 qpair failed and we were unable to recover it.
00:28:27.256 [2024-12-11 10:06:36.484174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.256 [2024-12-11 10:06:36.484208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.256 qpair failed and we were unable to recover it.
00:28:27.256 [2024-12-11 10:06:36.484404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.256 [2024-12-11 10:06:36.484438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.256 qpair failed and we were unable to recover it.
00:28:27.256 [2024-12-11 10:06:36.484653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.256 [2024-12-11 10:06:36.484688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.256 qpair failed and we were unable to recover it.
00:28:27.256 [2024-12-11 10:06:36.484875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.256 [2024-12-11 10:06:36.484909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.256 qpair failed and we were unable to recover it.
00:28:27.256 [2024-12-11 10:06:36.485117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.256 [2024-12-11 10:06:36.485160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.256 qpair failed and we were unable to recover it.
00:28:27.256 [2024-12-11 10:06:36.485296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.256 [2024-12-11 10:06:36.485331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.256 qpair failed and we were unable to recover it.
00:28:27.256 [2024-12-11 10:06:36.485514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.256 [2024-12-11 10:06:36.485547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.256 qpair failed and we were unable to recover it.
00:28:27.256 [2024-12-11 10:06:36.485679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.256 [2024-12-11 10:06:36.485713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.256 qpair failed and we were unable to recover it.
00:28:27.256 [2024-12-11 10:06:36.485951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.256 [2024-12-11 10:06:36.485985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.256 qpair failed and we were unable to recover it.
00:28:27.256 [2024-12-11 10:06:36.486240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.256 [2024-12-11 10:06:36.486274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.256 qpair failed and we were unable to recover it.
00:28:27.256 [2024-12-11 10:06:36.486449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.256 [2024-12-11 10:06:36.486481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.256 qpair failed and we were unable to recover it.
00:28:27.256 [2024-12-11 10:06:36.486597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.256 [2024-12-11 10:06:36.486631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.256 qpair failed and we were unable to recover it.
00:28:27.256 [2024-12-11 10:06:36.486813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.256 [2024-12-11 10:06:36.486846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.256 qpair failed and we were unable to recover it.
00:28:27.256 [2024-12-11 10:06:36.487037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.256 [2024-12-11 10:06:36.487070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.256 qpair failed and we were unable to recover it.
00:28:27.256 [2024-12-11 10:06:36.487311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.256 [2024-12-11 10:06:36.487345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.256 qpair failed and we were unable to recover it.
00:28:27.256 [2024-12-11 10:06:36.487537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.256 [2024-12-11 10:06:36.487570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.256 qpair failed and we were unable to recover it.
00:28:27.256 [2024-12-11 10:06:36.487759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.256 [2024-12-11 10:06:36.487791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.256 qpair failed and we were unable to recover it.
00:28:27.256 [2024-12-11 10:06:36.487925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.256 [2024-12-11 10:06:36.487958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.256 qpair failed and we were unable to recover it.
00:28:27.256 [2024-12-11 10:06:36.488148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.256 [2024-12-11 10:06:36.488182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.257 qpair failed and we were unable to recover it.
00:28:27.257 [2024-12-11 10:06:36.488310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.257 [2024-12-11 10:06:36.488343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.257 qpair failed and we were unable to recover it.
00:28:27.257 [2024-12-11 10:06:36.488513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.257 [2024-12-11 10:06:36.488546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.257 qpair failed and we were unable to recover it.
00:28:27.257 [2024-12-11 10:06:36.488661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.257 [2024-12-11 10:06:36.488693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.257 qpair failed and we were unable to recover it.
00:28:27.257 [2024-12-11 10:06:36.488963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.257 [2024-12-11 10:06:36.488996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.257 qpair failed and we were unable to recover it.
00:28:27.257 [2024-12-11 10:06:36.489181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.257 [2024-12-11 10:06:36.489214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.257 qpair failed and we were unable to recover it.
00:28:27.257 [2024-12-11 10:06:36.489414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.257 [2024-12-11 10:06:36.489447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.257 qpair failed and we were unable to recover it.
00:28:27.257 [2024-12-11 10:06:36.489633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.257 [2024-12-11 10:06:36.489666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.257 qpair failed and we were unable to recover it.
00:28:27.257 [2024-12-11 10:06:36.489848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.257 [2024-12-11 10:06:36.489880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.257 qpair failed and we were unable to recover it.
00:28:27.257 [2024-12-11 10:06:36.490057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.257 [2024-12-11 10:06:36.490090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.257 qpair failed and we were unable to recover it.
00:28:27.257 [2024-12-11 10:06:36.490272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.257 [2024-12-11 10:06:36.490305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.257 qpair failed and we were unable to recover it.
00:28:27.257 [2024-12-11 10:06:36.490478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.257 [2024-12-11 10:06:36.490510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.257 qpair failed and we were unable to recover it.
00:28:27.257 [2024-12-11 10:06:36.490684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.257 [2024-12-11 10:06:36.490718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.257 qpair failed and we were unable to recover it.
00:28:27.257 [2024-12-11 10:06:36.490958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.257 [2024-12-11 10:06:36.491030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:27.257 qpair failed and we were unable to recover it.
00:28:27.257 [2024-12-11 10:06:36.491277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.257 [2024-12-11 10:06:36.491348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.257 qpair failed and we were unable to recover it.
00:28:27.257 [2024-12-11 10:06:36.491509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.257 [2024-12-11 10:06:36.491546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.257 qpair failed and we were unable to recover it.
00:28:27.257 [2024-12-11 10:06:36.491674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.257 [2024-12-11 10:06:36.491708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.257 qpair failed and we were unable to recover it.
00:28:27.257 [2024-12-11 10:06:36.491885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.257 [2024-12-11 10:06:36.491917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.257 qpair failed and we were unable to recover it.
00:28:27.257 [2024-12-11 10:06:36.492051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.257 [2024-12-11 10:06:36.492084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.257 qpair failed and we were unable to recover it.
00:28:27.257 [2024-12-11 10:06:36.492215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.257 [2024-12-11 10:06:36.492262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.257 qpair failed and we were unable to recover it.
00:28:27.257 [2024-12-11 10:06:36.492401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.257 [2024-12-11 10:06:36.492434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.257 qpair failed and we were unable to recover it.
00:28:27.257 [2024-12-11 10:06:36.492607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.257 [2024-12-11 10:06:36.492640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.257 qpair failed and we were unable to recover it.
00:28:27.257 [2024-12-11 10:06:36.492830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.257 [2024-12-11 10:06:36.492864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.257 qpair failed and we were unable to recover it.
00:28:27.257 [2024-12-11 10:06:36.493041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.257 [2024-12-11 10:06:36.493074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.257 qpair failed and we were unable to recover it.
00:28:27.257 [2024-12-11 10:06:36.493259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.257 [2024-12-11 10:06:36.493293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.257 qpair failed and we were unable to recover it.
00:28:27.257 [2024-12-11 10:06:36.493424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.257 [2024-12-11 10:06:36.493457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.257 qpair failed and we were unable to recover it.
00:28:27.257 [2024-12-11 10:06:36.493642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.257 [2024-12-11 10:06:36.493675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.257 qpair failed and we were unable to recover it.
00:28:27.257 [2024-12-11 10:06:36.493874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.257 [2024-12-11 10:06:36.493908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.257 qpair failed and we were unable to recover it.
00:28:27.257 [2024-12-11 10:06:36.494017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.257 [2024-12-11 10:06:36.494050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.257 qpair failed and we were unable to recover it.
00:28:27.257 [2024-12-11 10:06:36.494239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.257 [2024-12-11 10:06:36.494274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.257 qpair failed and we were unable to recover it.
00:28:27.257 [2024-12-11 10:06:36.494416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.257 [2024-12-11 10:06:36.494449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.257 qpair failed and we were unable to recover it.
00:28:27.257 [2024-12-11 10:06:36.494633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.257 [2024-12-11 10:06:36.494667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.257 qpair failed and we were unable to recover it. 00:28:27.257 [2024-12-11 10:06:36.494839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.257 [2024-12-11 10:06:36.494872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.257 qpair failed and we were unable to recover it. 00:28:27.257 [2024-12-11 10:06:36.494999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.257 [2024-12-11 10:06:36.495031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.257 qpair failed and we were unable to recover it. 00:28:27.257 [2024-12-11 10:06:36.495232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.257 [2024-12-11 10:06:36.495266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.257 qpair failed and we were unable to recover it. 00:28:27.258 [2024-12-11 10:06:36.495526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-12-11 10:06:36.495559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-12-11 10:06:36.495748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-12-11 10:06:36.495781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-12-11 10:06:36.496021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-12-11 10:06:36.496054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-12-11 10:06:36.496236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-12-11 10:06:36.496270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-12-11 10:06:36.496450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-12-11 10:06:36.496483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-12-11 10:06:36.496677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-12-11 10:06:36.496716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 
00:28:27.258 [2024-12-11 10:06:36.496835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-12-11 10:06:36.496867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-12-11 10:06:36.496976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-12-11 10:06:36.497008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-12-11 10:06:36.497195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-12-11 10:06:36.497235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-12-11 10:06:36.497410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-12-11 10:06:36.497444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-12-11 10:06:36.497564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-12-11 10:06:36.497597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-12-11 10:06:36.497782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-12-11 10:06:36.497814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-12-11 10:06:36.498054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-12-11 10:06:36.498086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-12-11 10:06:36.498230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-12-11 10:06:36.498264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-12-11 10:06:36.498468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-12-11 10:06:36.498500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-12-11 10:06:36.498691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-12-11 10:06:36.498723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 
00:28:27.258 [2024-12-11 10:06:36.498842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-12-11 10:06:36.498875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-12-11 10:06:36.499081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-12-11 10:06:36.499113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-12-11 10:06:36.499303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-12-11 10:06:36.499337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-12-11 10:06:36.499524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-12-11 10:06:36.499557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-12-11 10:06:36.499800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-12-11 10:06:36.499833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-12-11 10:06:36.500021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-12-11 10:06:36.500053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-12-11 10:06:36.500166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-12-11 10:06:36.500198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-12-11 10:06:36.500470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-12-11 10:06:36.500503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-12-11 10:06:36.500619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-12-11 10:06:36.500651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-12-11 10:06:36.500833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-12-11 10:06:36.500865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 
00:28:27.258 [2024-12-11 10:06:36.501057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-12-11 10:06:36.501091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-12-11 10:06:36.501209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-12-11 10:06:36.501252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-12-11 10:06:36.501422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-12-11 10:06:36.501455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-12-11 10:06:36.501578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-12-11 10:06:36.501612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-12-11 10:06:36.501784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-12-11 10:06:36.501816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-12-11 10:06:36.501996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-12-11 10:06:36.502029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-12-11 10:06:36.502199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-12-11 10:06:36.502241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-12-11 10:06:36.502384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-12-11 10:06:36.502416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-12-11 10:06:36.502518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-12-11 10:06:36.502551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-12-11 10:06:36.502741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-12-11 10:06:36.502775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 
00:28:27.258 [2024-12-11 10:06:36.502962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-12-11 10:06:36.502994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-12-11 10:06:36.503178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-12-11 10:06:36.503211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-12-11 10:06:36.503333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-12-11 10:06:36.503366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-12-11 10:06:36.503566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-12-11 10:06:36.503598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 00:28:27.259 [2024-12-11 10:06:36.503773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-12-11 10:06:36.503806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 00:28:27.259 [2024-12-11 10:06:36.503981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-12-11 10:06:36.504014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 00:28:27.259 [2024-12-11 10:06:36.504186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-12-11 10:06:36.504237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 00:28:27.259 [2024-12-11 10:06:36.504382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-12-11 10:06:36.504416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 00:28:27.259 [2024-12-11 10:06:36.504628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-12-11 10:06:36.504662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 00:28:27.259 [2024-12-11 10:06:36.504903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-12-11 10:06:36.504935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 
00:28:27.259 [2024-12-11 10:06:36.505135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-12-11 10:06:36.505175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 00:28:27.259 [2024-12-11 10:06:36.505393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-12-11 10:06:36.505430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 00:28:27.259 [2024-12-11 10:06:36.505540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-12-11 10:06:36.505578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 00:28:27.259 [2024-12-11 10:06:36.505693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-12-11 10:06:36.505726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 00:28:27.259 [2024-12-11 10:06:36.505855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-12-11 10:06:36.505890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 00:28:27.259 [2024-12-11 10:06:36.506090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-12-11 10:06:36.506125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 00:28:27.259 [2024-12-11 10:06:36.506249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-12-11 10:06:36.506284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 00:28:27.259 [2024-12-11 10:06:36.506474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-12-11 10:06:36.506509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 00:28:27.259 [2024-12-11 10:06:36.506690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-12-11 10:06:36.506722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 00:28:27.259 [2024-12-11 10:06:36.506898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-12-11 10:06:36.506932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 
00:28:27.259 [2024-12-11 10:06:36.507079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-12-11 10:06:36.507113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 00:28:27.259 [2024-12-11 10:06:36.507240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-12-11 10:06:36.507276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 00:28:27.259 [2024-12-11 10:06:36.507541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-12-11 10:06:36.507575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 00:28:27.259 [2024-12-11 10:06:36.507823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-12-11 10:06:36.507862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 00:28:27.259 [2024-12-11 10:06:36.507982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-12-11 10:06:36.508016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 00:28:27.259 [2024-12-11 10:06:36.508135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-12-11 10:06:36.508170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 00:28:27.259 [2024-12-11 10:06:36.508369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-12-11 10:06:36.508405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 00:28:27.259 [2024-12-11 10:06:36.508597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-12-11 10:06:36.508630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 00:28:27.259 [2024-12-11 10:06:36.508749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-12-11 10:06:36.508782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 00:28:27.259 [2024-12-11 10:06:36.508978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-12-11 10:06:36.509010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 
00:28:27.259 [2024-12-11 10:06:36.509200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-12-11 10:06:36.509243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 00:28:27.259 [2024-12-11 10:06:36.509433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-12-11 10:06:36.509466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 00:28:27.259 [2024-12-11 10:06:36.509706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-12-11 10:06:36.509739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 00:28:27.259 [2024-12-11 10:06:36.509921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-12-11 10:06:36.509954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 00:28:27.259 [2024-12-11 10:06:36.510071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-12-11 10:06:36.510105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 00:28:27.259 [2024-12-11 10:06:36.510355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-12-11 10:06:36.510388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 00:28:27.259 [2024-12-11 10:06:36.510653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-12-11 10:06:36.510685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 00:28:27.259 [2024-12-11 10:06:36.510897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-12-11 10:06:36.510930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 00:28:27.259 [2024-12-11 10:06:36.511102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-12-11 10:06:36.511135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 00:28:27.259 [2024-12-11 10:06:36.511347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-12-11 10:06:36.511381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 
00:28:27.259 [2024-12-11 10:06:36.511515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-12-11 10:06:36.511548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 00:28:27.259 [2024-12-11 10:06:36.511720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-12-11 10:06:36.511753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 00:28:27.259 [2024-12-11 10:06:36.511877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-12-11 10:06:36.511911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 00:28:27.260 [2024-12-11 10:06:36.512103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.260 [2024-12-11 10:06:36.512136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.260 qpair failed and we were unable to recover it. 00:28:27.260 [2024-12-11 10:06:36.512271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.260 [2024-12-11 10:06:36.512306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.260 qpair failed and we were unable to recover it. 00:28:27.260 [2024-12-11 10:06:36.512414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.260 [2024-12-11 10:06:36.512446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.260 qpair failed and we were unable to recover it. 00:28:27.260 [2024-12-11 10:06:36.512555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.260 [2024-12-11 10:06:36.512588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.260 qpair failed and we were unable to recover it. 00:28:27.260 [2024-12-11 10:06:36.512770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.260 [2024-12-11 10:06:36.512803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.260 qpair failed and we were unable to recover it. 00:28:27.260 [2024-12-11 10:06:36.513041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.260 [2024-12-11 10:06:36.513074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.260 qpair failed and we were unable to recover it. 00:28:27.260 [2024-12-11 10:06:36.513195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.260 [2024-12-11 10:06:36.513237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.260 qpair failed and we were unable to recover it. 
00:28:27.260 [2024-12-11 10:06:36.513430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.260 [2024-12-11 10:06:36.513469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.260 qpair failed and we were unable to recover it. 00:28:27.260 [2024-12-11 10:06:36.513768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.260 [2024-12-11 10:06:36.513802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.260 qpair failed and we were unable to recover it. 00:28:27.260 [2024-12-11 10:06:36.513936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.260 [2024-12-11 10:06:36.513969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.260 qpair failed and we were unable to recover it. 00:28:27.260 [2024-12-11 10:06:36.514142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.260 [2024-12-11 10:06:36.514175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.260 qpair failed and we were unable to recover it. 00:28:27.260 [2024-12-11 10:06:36.514308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.260 [2024-12-11 10:06:36.514341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.260 qpair failed and we were unable to recover it. 00:28:27.260 [2024-12-11 10:06:36.514546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.260 [2024-12-11 10:06:36.514578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.260 qpair failed and we were unable to recover it. 00:28:27.260 [2024-12-11 10:06:36.514762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.260 [2024-12-11 10:06:36.514795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.260 qpair failed and we were unable to recover it. 00:28:27.260 [2024-12-11 10:06:36.514980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.260 [2024-12-11 10:06:36.515012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.260 qpair failed and we were unable to recover it. 00:28:27.260 [2024-12-11 10:06:36.515132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.260 [2024-12-11 10:06:36.515165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.260 qpair failed and we were unable to recover it. 00:28:27.260 [2024-12-11 10:06:36.515423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.260 [2024-12-11 10:06:36.515457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.260 qpair failed and we were unable to recover it. 
00:28:27.260 [2024-12-11 10:06:36.515650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.260 [2024-12-11 10:06:36.515682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.260 qpair failed and we were unable to recover it. 00:28:27.260 [2024-12-11 10:06:36.515855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.260 [2024-12-11 10:06:36.515888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.260 qpair failed and we were unable to recover it. 00:28:27.260 [2024-12-11 10:06:36.516001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.260 [2024-12-11 10:06:36.516033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.260 qpair failed and we were unable to recover it. 00:28:27.260 [2024-12-11 10:06:36.516298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.260 [2024-12-11 10:06:36.516331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.260 qpair failed and we were unable to recover it. 00:28:27.260 [2024-12-11 10:06:36.516531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.260 [2024-12-11 10:06:36.516565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.260 qpair failed and we were unable to recover it. 00:28:27.260 [2024-12-11 10:06:36.516748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.260 [2024-12-11 10:06:36.516780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.260 qpair failed and we were unable to recover it. 00:28:27.260 [2024-12-11 10:06:36.516894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.260 [2024-12-11 10:06:36.516927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.260 qpair failed and we were unable to recover it. 00:28:27.260 [2024-12-11 10:06:36.517170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.260 [2024-12-11 10:06:36.517203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.260 qpair failed and we were unable to recover it. 00:28:27.260 [2024-12-11 10:06:36.517404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.260 [2024-12-11 10:06:36.517438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.260 qpair failed and we were unable to recover it. 00:28:27.260 [2024-12-11 10:06:36.517624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.260 [2024-12-11 10:06:36.517657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.260 qpair failed and we were unable to recover it. 
00:28:27.260 [2024-12-11 10:06:36.517786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.260 [2024-12-11 10:06:36.517819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.260 qpair failed and we were unable to recover it. 00:28:27.260 [2024-12-11 10:06:36.518009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.260 [2024-12-11 10:06:36.518042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.260 qpair failed and we were unable to recover it. 00:28:27.260 [2024-12-11 10:06:36.518225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.260 [2024-12-11 10:06:36.518258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.260 qpair failed and we were unable to recover it. 00:28:27.260 [2024-12-11 10:06:36.518382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.260 [2024-12-11 10:06:36.518415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.260 qpair failed and we were unable to recover it. 00:28:27.260 [2024-12-11 10:06:36.518627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.260 [2024-12-11 10:06:36.518660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.260 qpair failed and we were unable to recover it. 00:28:27.260 [2024-12-11 10:06:36.518848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.260 [2024-12-11 10:06:36.518881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.260 qpair failed and we were unable to recover it. 00:28:27.260 [2024-12-11 10:06:36.519011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.260 [2024-12-11 10:06:36.519044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.260 qpair failed and we were unable to recover it. 00:28:27.260 [2024-12-11 10:06:36.519238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.260 [2024-12-11 10:06:36.519272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.260 qpair failed and we were unable to recover it. 00:28:27.260 [2024-12-11 10:06:36.519461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.260 [2024-12-11 10:06:36.519493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.260 qpair failed and we were unable to recover it. 00:28:27.260 [2024-12-11 10:06:36.519665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.260 [2024-12-11 10:06:36.519698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.260 qpair failed and we were unable to recover it. 
00:28:27.260 [2024-12-11 10:06:36.519870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.260 [2024-12-11 10:06:36.519904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.260 qpair failed and we were unable to recover it. 00:28:27.260 [2024-12-11 10:06:36.520027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.260 [2024-12-11 10:06:36.520059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.260 qpair failed and we were unable to recover it. 00:28:27.261 [2024-12-11 10:06:36.520164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.261 [2024-12-11 10:06:36.520198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.261 qpair failed and we were unable to recover it. 00:28:27.261 [2024-12-11 10:06:36.520346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.261 [2024-12-11 10:06:36.520381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.261 qpair failed and we were unable to recover it. 00:28:27.261 [2024-12-11 10:06:36.520624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.261 [2024-12-11 10:06:36.520657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.261 qpair failed and we were unable to recover it. 00:28:27.261 [2024-12-11 10:06:36.520828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.261 [2024-12-11 10:06:36.520861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.261 qpair failed and we were unable to recover it. 00:28:27.261 [2024-12-11 10:06:36.521080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.261 [2024-12-11 10:06:36.521114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.261 qpair failed and we were unable to recover it. 00:28:27.261 [2024-12-11 10:06:36.521247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.261 [2024-12-11 10:06:36.521281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.261 qpair failed and we were unable to recover it. 00:28:27.261 [2024-12-11 10:06:36.521423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.261 [2024-12-11 10:06:36.521456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.261 qpair failed and we were unable to recover it. 00:28:27.261 [2024-12-11 10:06:36.521583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.261 [2024-12-11 10:06:36.521617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.261 qpair failed and we were unable to recover it. 
00:28:27.261 [2024-12-11 10:06:36.521802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.261 [2024-12-11 10:06:36.521840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.261 qpair failed and we were unable to recover it. 00:28:27.261 [2024-12-11 10:06:36.522017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.261 [2024-12-11 10:06:36.522050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.261 qpair failed and we were unable to recover it. 00:28:27.261 [2024-12-11 10:06:36.522234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.261 [2024-12-11 10:06:36.522269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.261 qpair failed and we were unable to recover it. 00:28:27.261 [2024-12-11 10:06:36.522389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.261 [2024-12-11 10:06:36.522422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.261 qpair failed and we were unable to recover it. 00:28:27.261 [2024-12-11 10:06:36.522594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.261 [2024-12-11 10:06:36.522628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.261 qpair failed and we were unable to recover it. 00:28:27.261 [2024-12-11 10:06:36.522830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.261 [2024-12-11 10:06:36.522862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.261 qpair failed and we were unable to recover it. 00:28:27.261 [2024-12-11 10:06:36.523126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.261 [2024-12-11 10:06:36.523159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.261 qpair failed and we were unable to recover it. 00:28:27.261 [2024-12-11 10:06:36.523349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.261 [2024-12-11 10:06:36.523382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.261 qpair failed and we were unable to recover it. 00:28:27.261 [2024-12-11 10:06:36.523631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.261 [2024-12-11 10:06:36.523663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.261 qpair failed and we were unable to recover it. 00:28:27.261 [2024-12-11 10:06:36.523798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.261 [2024-12-11 10:06:36.523832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.261 qpair failed and we were unable to recover it. 
00:28:27.261 [2024-12-11 10:06:36.524092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.261 [2024-12-11 10:06:36.524126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.261 qpair failed and we were unable to recover it. 00:28:27.261 [2024-12-11 10:06:36.524257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.261 [2024-12-11 10:06:36.524290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.261 qpair failed and we were unable to recover it. 00:28:27.261 [2024-12-11 10:06:36.524459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.261 [2024-12-11 10:06:36.524491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.261 qpair failed and we were unable to recover it. 00:28:27.261 [2024-12-11 10:06:36.524680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.261 [2024-12-11 10:06:36.524713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.261 qpair failed and we were unable to recover it. 00:28:27.261 [2024-12-11 10:06:36.524910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.261 [2024-12-11 10:06:36.524944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.261 qpair failed and we were unable to recover it. 00:28:27.261 [2024-12-11 10:06:36.525117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.261 [2024-12-11 10:06:36.525150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.261 qpair failed and we were unable to recover it. 00:28:27.261 [2024-12-11 10:06:36.525390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.261 [2024-12-11 10:06:36.525425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.261 qpair failed and we were unable to recover it. 00:28:27.261 [2024-12-11 10:06:36.525637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.261 [2024-12-11 10:06:36.525669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.261 qpair failed and we were unable to recover it. 00:28:27.261 [2024-12-11 10:06:36.525793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.261 [2024-12-11 10:06:36.525827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.261 qpair failed and we were unable to recover it. 00:28:27.261 [2024-12-11 10:06:36.525997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.261 [2024-12-11 10:06:36.526030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.261 qpair failed and we were unable to recover it. 
00:28:27.261 [2024-12-11 10:06:36.526214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.261 [2024-12-11 10:06:36.526254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.261 qpair failed and we were unable to recover it. 00:28:27.261 [2024-12-11 10:06:36.526376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.261 [2024-12-11 10:06:36.526408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.261 qpair failed and we were unable to recover it. 00:28:27.261 [2024-12-11 10:06:36.526672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.261 [2024-12-11 10:06:36.526706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.261 qpair failed and we were unable to recover it. 00:28:27.261 [2024-12-11 10:06:36.526829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.261 [2024-12-11 10:06:36.526862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.261 qpair failed and we were unable to recover it. 00:28:27.261 [2024-12-11 10:06:36.527040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.261 [2024-12-11 10:06:36.527074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.261 qpair failed and we were unable to recover it. 00:28:27.261 [2024-12-11 10:06:36.527248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.261 [2024-12-11 10:06:36.527281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.261 qpair failed and we were unable to recover it. 00:28:27.261 [2024-12-11 10:06:36.527540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.261 [2024-12-11 10:06:36.527576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.261 qpair failed and we were unable to recover it. 00:28:27.261 [2024-12-11 10:06:36.527771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.261 [2024-12-11 10:06:36.527805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.261 qpair failed and we were unable to recover it. 00:28:27.261 [2024-12-11 10:06:36.527924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.261 [2024-12-11 10:06:36.527959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.261 qpair failed and we were unable to recover it. 00:28:27.261 [2024-12-11 10:06:36.528136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.261 [2024-12-11 10:06:36.528172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.261 qpair failed and we were unable to recover it. 
00:28:27.261 [2024-12-11 10:06:36.528421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.261 [2024-12-11 10:06:36.528492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.261 qpair failed and we were unable to recover it.
00:28:27.262 [... identical connect() failure (errno = 111) and unrecoverable-qpair error repeated continuously for tqpair=0x7fb62c000b90, 10:06:36.528726 through 10:06:36.569845 ...]
00:28:27.267 [2024-12-11 10:06:36.570021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff2460 is same with the state(6) to be set
00:28:27.267 [2024-12-11 10:06:36.570202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.267 [2024-12-11 10:06:36.570267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.267 qpair failed and we were unable to recover it.
00:28:27.267 [2024-12-11 10:06:36.570397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.267 [2024-12-11 10:06:36.570431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.267 qpair failed and we were unable to recover it.
00:28:27.267 [2024-12-11 10:06:36.570697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.267 [2024-12-11 10:06:36.570730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.267 qpair failed and we were unable to recover it. 00:28:27.267 [2024-12-11 10:06:36.570917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.267 [2024-12-11 10:06:36.570951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.267 qpair failed and we were unable to recover it. 00:28:27.267 [2024-12-11 10:06:36.571191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.267 [2024-12-11 10:06:36.571235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.267 qpair failed and we were unable to recover it. 00:28:27.267 [2024-12-11 10:06:36.571420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.267 [2024-12-11 10:06:36.571452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.267 qpair failed and we were unable to recover it. 00:28:27.267 [2024-12-11 10:06:36.571661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.267 [2024-12-11 10:06:36.571694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.267 qpair failed and we were unable to recover it. 00:28:27.267 [2024-12-11 10:06:36.571877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.267 [2024-12-11 10:06:36.571910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.267 qpair failed and we were unable to recover it. 00:28:27.267 [2024-12-11 10:06:36.572045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.267 [2024-12-11 10:06:36.572078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.267 qpair failed and we were unable to recover it. 00:28:27.267 [2024-12-11 10:06:36.572261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.267 [2024-12-11 10:06:36.572297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.267 qpair failed and we were unable to recover it. 00:28:27.267 [2024-12-11 10:06:36.572488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.267 [2024-12-11 10:06:36.572522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.267 qpair failed and we were unable to recover it. 00:28:27.267 [2024-12-11 10:06:36.572650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.267 [2024-12-11 10:06:36.572684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.267 qpair failed and we were unable to recover it. 
00:28:27.267 [2024-12-11 10:06:36.572813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.267 [2024-12-11 10:06:36.572846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.267 qpair failed and we were unable to recover it. 00:28:27.267 [2024-12-11 10:06:36.573039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.267 [2024-12-11 10:06:36.573084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.267 qpair failed and we were unable to recover it. 00:28:27.267 [2024-12-11 10:06:36.573358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.267 [2024-12-11 10:06:36.573395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.267 qpair failed and we were unable to recover it. 00:28:27.267 [2024-12-11 10:06:36.573568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.267 [2024-12-11 10:06:36.573601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.267 qpair failed and we were unable to recover it. 00:28:27.267 [2024-12-11 10:06:36.573858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.267 [2024-12-11 10:06:36.573892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.267 qpair failed and we were unable to recover it. 00:28:27.267 [2024-12-11 10:06:36.574079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.267 [2024-12-11 10:06:36.574112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.267 qpair failed and we were unable to recover it. 00:28:27.268 [2024-12-11 10:06:36.574249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.268 [2024-12-11 10:06:36.574284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.268 qpair failed and we were unable to recover it. 00:28:27.268 [2024-12-11 10:06:36.574406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.268 [2024-12-11 10:06:36.574437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.268 qpair failed and we were unable to recover it. 00:28:27.268 [2024-12-11 10:06:36.574558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.268 [2024-12-11 10:06:36.574591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.268 qpair failed and we were unable to recover it. 00:28:27.268 [2024-12-11 10:06:36.574856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.268 [2024-12-11 10:06:36.574889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.268 qpair failed and we were unable to recover it. 
00:28:27.268 [2024-12-11 10:06:36.575074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.268 [2024-12-11 10:06:36.575107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.268 qpair failed and we were unable to recover it. 00:28:27.268 [2024-12-11 10:06:36.575236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.268 [2024-12-11 10:06:36.575269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.268 qpair failed and we were unable to recover it. 00:28:27.268 [2024-12-11 10:06:36.575439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.268 [2024-12-11 10:06:36.575469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.268 qpair failed and we were unable to recover it. 00:28:27.268 [2024-12-11 10:06:36.575587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.268 [2024-12-11 10:06:36.575620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.268 qpair failed and we were unable to recover it. 00:28:27.268 [2024-12-11 10:06:36.575925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.268 [2024-12-11 10:06:36.575974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.268 qpair failed and we were unable to recover it. 00:28:27.268 [2024-12-11 10:06:36.576159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.268 [2024-12-11 10:06:36.576192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.268 qpair failed and we were unable to recover it. 00:28:27.268 [2024-12-11 10:06:36.576386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.268 [2024-12-11 10:06:36.576419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.268 qpair failed and we were unable to recover it. 00:28:27.268 [2024-12-11 10:06:36.576552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.268 [2024-12-11 10:06:36.576585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.268 qpair failed and we were unable to recover it. 00:28:27.268 [2024-12-11 10:06:36.576787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.268 [2024-12-11 10:06:36.576820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.268 qpair failed and we were unable to recover it. 00:28:27.268 [2024-12-11 10:06:36.576938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.268 [2024-12-11 10:06:36.576970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.268 qpair failed and we were unable to recover it. 
00:28:27.268 [2024-12-11 10:06:36.577143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.268 [2024-12-11 10:06:36.577176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.268 qpair failed and we were unable to recover it. 00:28:27.268 [2024-12-11 10:06:36.577371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.268 [2024-12-11 10:06:36.577406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.268 qpair failed and we were unable to recover it. 00:28:27.268 [2024-12-11 10:06:36.577580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.268 [2024-12-11 10:06:36.577613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.268 qpair failed and we were unable to recover it. 00:28:27.268 [2024-12-11 10:06:36.577748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.268 [2024-12-11 10:06:36.577780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.268 qpair failed and we were unable to recover it. 00:28:27.268 [2024-12-11 10:06:36.577891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.268 [2024-12-11 10:06:36.577923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.268 qpair failed and we were unable to recover it. 00:28:27.268 [2024-12-11 10:06:36.578181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.268 [2024-12-11 10:06:36.578213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.268 qpair failed and we were unable to recover it. 00:28:27.268 [2024-12-11 10:06:36.578481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.268 [2024-12-11 10:06:36.578514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.268 qpair failed and we were unable to recover it. 00:28:27.268 [2024-12-11 10:06:36.578695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.268 [2024-12-11 10:06:36.578727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.268 qpair failed and we were unable to recover it. 00:28:27.268 [2024-12-11 10:06:36.578860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.268 [2024-12-11 10:06:36.578894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.268 qpair failed and we were unable to recover it. 00:28:27.268 [2024-12-11 10:06:36.579111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.268 [2024-12-11 10:06:36.579142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.268 qpair failed and we were unable to recover it. 
00:28:27.268 [2024-12-11 10:06:36.579264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.268 [2024-12-11 10:06:36.579296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.268 qpair failed and we were unable to recover it. 00:28:27.268 [2024-12-11 10:06:36.579430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.268 [2024-12-11 10:06:36.579461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.268 qpair failed and we were unable to recover it. 00:28:27.268 [2024-12-11 10:06:36.579650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.268 [2024-12-11 10:06:36.579683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.268 qpair failed and we were unable to recover it. 00:28:27.268 [2024-12-11 10:06:36.579924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.268 [2024-12-11 10:06:36.579958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.268 qpair failed and we were unable to recover it. 00:28:27.268 [2024-12-11 10:06:36.580138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.268 [2024-12-11 10:06:36.580169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.268 qpair failed and we were unable to recover it. 00:28:27.269 [2024-12-11 10:06:36.580370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.269 [2024-12-11 10:06:36.580402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.269 qpair failed and we were unable to recover it. 00:28:27.269 [2024-12-11 10:06:36.580585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.269 [2024-12-11 10:06:36.580617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.269 qpair failed and we were unable to recover it. 00:28:27.269 [2024-12-11 10:06:36.580882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.269 [2024-12-11 10:06:36.580914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.269 qpair failed and we were unable to recover it. 00:28:27.269 [2024-12-11 10:06:36.581150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.269 [2024-12-11 10:06:36.581183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.269 qpair failed and we were unable to recover it. 00:28:27.269 [2024-12-11 10:06:36.581387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.269 [2024-12-11 10:06:36.581420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.269 qpair failed and we were unable to recover it. 
00:28:27.269 [2024-12-11 10:06:36.581560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.269 [2024-12-11 10:06:36.581593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.269 qpair failed and we were unable to recover it. 00:28:27.269 [2024-12-11 10:06:36.581771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.269 [2024-12-11 10:06:36.581804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.269 qpair failed and we were unable to recover it. 00:28:27.269 [2024-12-11 10:06:36.581944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.269 [2024-12-11 10:06:36.581975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.269 qpair failed and we were unable to recover it. 00:28:27.269 [2024-12-11 10:06:36.582161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.269 [2024-12-11 10:06:36.582194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.269 qpair failed and we were unable to recover it. 00:28:27.269 [2024-12-11 10:06:36.582390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.269 [2024-12-11 10:06:36.582423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.269 qpair failed and we were unable to recover it. 00:28:27.269 [2024-12-11 10:06:36.582558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.269 [2024-12-11 10:06:36.582590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.269 qpair failed and we were unable to recover it. 00:28:27.269 [2024-12-11 10:06:36.582791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.269 [2024-12-11 10:06:36.582824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.269 qpair failed and we were unable to recover it. 00:28:27.269 [2024-12-11 10:06:36.582953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.269 [2024-12-11 10:06:36.582984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.269 qpair failed and we were unable to recover it. 00:28:27.269 [2024-12-11 10:06:36.583087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.269 [2024-12-11 10:06:36.583120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.269 qpair failed and we were unable to recover it. 00:28:27.269 [2024-12-11 10:06:36.583395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.269 [2024-12-11 10:06:36.583429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.269 qpair failed and we were unable to recover it. 
00:28:27.269 [2024-12-11 10:06:36.583616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.269 [2024-12-11 10:06:36.583649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.269 qpair failed and we were unable to recover it. 00:28:27.269 [2024-12-11 10:06:36.583829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.269 [2024-12-11 10:06:36.583861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.269 qpair failed and we were unable to recover it. 00:28:27.269 [2024-12-11 10:06:36.584113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.269 [2024-12-11 10:06:36.584145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.269 qpair failed and we were unable to recover it. 00:28:27.269 [2024-12-11 10:06:36.584384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.269 [2024-12-11 10:06:36.584418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.269 qpair failed and we were unable to recover it. 00:28:27.269 [2024-12-11 10:06:36.584534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.269 [2024-12-11 10:06:36.584571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.269 qpair failed and we were unable to recover it. 00:28:27.269 [2024-12-11 10:06:36.584771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.269 [2024-12-11 10:06:36.584802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.269 qpair failed and we were unable to recover it. 00:28:27.269 [2024-12-11 10:06:36.584917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.269 [2024-12-11 10:06:36.584950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.269 qpair failed and we were unable to recover it. 00:28:27.269 [2024-12-11 10:06:36.585076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.269 [2024-12-11 10:06:36.585109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.269 qpair failed and we were unable to recover it. 00:28:27.269 [2024-12-11 10:06:36.585229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.269 [2024-12-11 10:06:36.585262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.269 qpair failed and we were unable to recover it. 00:28:27.269 [2024-12-11 10:06:36.585383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.269 [2024-12-11 10:06:36.585415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.269 qpair failed and we were unable to recover it. 
00:28:27.269 [2024-12-11 10:06:36.585596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.269 [2024-12-11 10:06:36.585627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.269 qpair failed and we were unable to recover it. 00:28:27.269 [2024-12-11 10:06:36.585865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.269 [2024-12-11 10:06:36.585897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.269 qpair failed and we were unable to recover it. 00:28:27.269 [2024-12-11 10:06:36.586087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.269 [2024-12-11 10:06:36.586119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.269 qpair failed and we were unable to recover it. 00:28:27.269 [2024-12-11 10:06:36.586294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.269 [2024-12-11 10:06:36.586326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.270 qpair failed and we were unable to recover it. 00:28:27.270 [2024-12-11 10:06:36.586495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.270 [2024-12-11 10:06:36.586528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.270 qpair failed and we were unable to recover it. 00:28:27.270 [2024-12-11 10:06:36.586713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.270 [2024-12-11 10:06:36.586746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.270 qpair failed and we were unable to recover it. 00:28:27.270 [2024-12-11 10:06:36.586960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.270 [2024-12-11 10:06:36.586993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.270 qpair failed and we were unable to recover it. 00:28:27.270 [2024-12-11 10:06:36.587192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.270 [2024-12-11 10:06:36.587236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.270 qpair failed and we were unable to recover it. 00:28:27.270 [2024-12-11 10:06:36.587432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.270 [2024-12-11 10:06:36.587465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.270 qpair failed and we were unable to recover it. 00:28:27.270 [2024-12-11 10:06:36.587641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.270 [2024-12-11 10:06:36.587672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.270 qpair failed and we were unable to recover it. 
00:28:27.270 [2024-12-11 10:06:36.587796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.270 [2024-12-11 10:06:36.587829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.270 qpair failed and we were unable to recover it. 00:28:27.270 [2024-12-11 10:06:36.588008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.270 [2024-12-11 10:06:36.588041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.270 qpair failed and we were unable to recover it. 00:28:27.270 [2024-12-11 10:06:36.588161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.270 [2024-12-11 10:06:36.588192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.270 qpair failed and we were unable to recover it. 00:28:27.270 [2024-12-11 10:06:36.588482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.270 [2024-12-11 10:06:36.588516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.270 qpair failed and we were unable to recover it. 00:28:27.270 [2024-12-11 10:06:36.588701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.270 [2024-12-11 10:06:36.588733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.270 qpair failed and we were unable to recover it. 00:28:27.270 [2024-12-11 10:06:36.588927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.270 [2024-12-11 10:06:36.588959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.270 qpair failed and we were unable to recover it. 00:28:27.270 [2024-12-11 10:06:36.589153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.270 [2024-12-11 10:06:36.589184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.270 qpair failed and we were unable to recover it. 00:28:27.270 [2024-12-11 10:06:36.589403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.270 [2024-12-11 10:06:36.589438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.270 qpair failed and we were unable to recover it. 00:28:27.270 [2024-12-11 10:06:36.589625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.270 [2024-12-11 10:06:36.589659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.270 qpair failed and we were unable to recover it. 00:28:27.270 [2024-12-11 10:06:36.589781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.270 [2024-12-11 10:06:36.589814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.270 qpair failed and we were unable to recover it. 
00:28:27.270 [2024-12-11 10:06:36.590005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.270 [2024-12-11 10:06:36.590037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.270 qpair failed and we were unable to recover it. 00:28:27.270 [2024-12-11 10:06:36.590300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.270 [2024-12-11 10:06:36.590339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.270 qpair failed and we were unable to recover it. 00:28:27.270 [2024-12-11 10:06:36.590579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.270 [2024-12-11 10:06:36.590612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.270 qpair failed and we were unable to recover it. 00:28:27.270 [2024-12-11 10:06:36.590812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.270 [2024-12-11 10:06:36.590846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.270 qpair failed and we were unable to recover it. 00:28:27.270 [2024-12-11 10:06:36.591050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.270 [2024-12-11 10:06:36.591083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.270 qpair failed and we were unable to recover it. 00:28:27.270 [2024-12-11 10:06:36.591198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.270 [2024-12-11 10:06:36.591238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.270 qpair failed and we were unable to recover it. 00:28:27.270 [2024-12-11 10:06:36.591433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.270 [2024-12-11 10:06:36.591466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.270 qpair failed and we were unable to recover it. 00:28:27.270 [2024-12-11 10:06:36.591653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.270 [2024-12-11 10:06:36.591686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.270 qpair failed and we were unable to recover it. 00:28:27.270 [2024-12-11 10:06:36.591858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.270 [2024-12-11 10:06:36.591891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.270 qpair failed and we were unable to recover it. 00:28:27.270 [2024-12-11 10:06:36.592077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.270 [2024-12-11 10:06:36.592110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.270 qpair failed and we were unable to recover it. 
00:28:27.270 [2024-12-11 10:06:36.592279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.270 [2024-12-11 10:06:36.592313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.270 qpair failed and we were unable to recover it. 00:28:27.270 [2024-12-11 10:06:36.592430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.270 [2024-12-11 10:06:36.592461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.270 qpair failed and we were unable to recover it. 00:28:27.270 [2024-12-11 10:06:36.592670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.270 [2024-12-11 10:06:36.592703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.270 qpair failed and we were unable to recover it. 00:28:27.270 [2024-12-11 10:06:36.592824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.270 [2024-12-11 10:06:36.592856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.270 qpair failed and we were unable to recover it. 00:28:27.270 [2024-12-11 10:06:36.593030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.270 [2024-12-11 10:06:36.593063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.270 qpair failed and we were unable to recover it. 00:28:27.270 [2024-12-11 10:06:36.593193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.270 [2024-12-11 10:06:36.593236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.270 qpair failed and we were unable to recover it. 00:28:27.270 [2024-12-11 10:06:36.593479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.270 [2024-12-11 10:06:36.593513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.270 qpair failed and we were unable to recover it. 00:28:27.270 [2024-12-11 10:06:36.593641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.270 [2024-12-11 10:06:36.593674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.270 qpair failed and we were unable to recover it. 00:28:27.270 [2024-12-11 10:06:36.593784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.270 [2024-12-11 10:06:36.593816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.270 qpair failed and we were unable to recover it. 00:28:27.270 [2024-12-11 10:06:36.593942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.270 [2024-12-11 10:06:36.593975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.270 qpair failed and we were unable to recover it. 
00:28:27.270 [2024-12-11 10:06:36.594163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.270 [2024-12-11 10:06:36.594197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.270 qpair failed and we were unable to recover it. 00:28:27.271 [2024-12-11 10:06:36.594399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.271 [2024-12-11 10:06:36.594433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.271 qpair failed and we were unable to recover it. 00:28:27.271 [2024-12-11 10:06:36.594612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.271 [2024-12-11 10:06:36.594644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.271 qpair failed and we were unable to recover it. 00:28:27.271 [2024-12-11 10:06:36.594850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.271 [2024-12-11 10:06:36.594883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.271 qpair failed and we were unable to recover it. 00:28:27.271 [2024-12-11 10:06:36.595003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.271 [2024-12-11 10:06:36.595036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.271 qpair failed and we were unable to recover it. 00:28:27.271 [2024-12-11 10:06:36.595147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.271 [2024-12-11 10:06:36.595180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.271 qpair failed and we were unable to recover it. 00:28:27.271 [2024-12-11 10:06:36.595314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.271 [2024-12-11 10:06:36.595347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.271 qpair failed and we were unable to recover it. 00:28:27.271 [2024-12-11 10:06:36.595537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.271 [2024-12-11 10:06:36.595569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.271 qpair failed and we were unable to recover it. 00:28:27.271 [2024-12-11 10:06:36.595774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.271 [2024-12-11 10:06:36.595806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.271 qpair failed and we were unable to recover it. 00:28:27.271 [2024-12-11 10:06:36.595984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.271 [2024-12-11 10:06:36.596018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.271 qpair failed and we were unable to recover it. 
00:28:27.271 [2024-12-11 10:06:36.596232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.271 [2024-12-11 10:06:36.596267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.271 qpair failed and we were unable to recover it. 00:28:27.271 [2024-12-11 10:06:36.596386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.271 [2024-12-11 10:06:36.596419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.271 qpair failed and we were unable to recover it. 00:28:27.271 [2024-12-11 10:06:36.596592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.271 [2024-12-11 10:06:36.596624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.271 qpair failed and we were unable to recover it. 00:28:27.271 [2024-12-11 10:06:36.596847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.271 [2024-12-11 10:06:36.596878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.271 qpair failed and we were unable to recover it. 00:28:27.271 [2024-12-11 10:06:36.597139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.271 [2024-12-11 10:06:36.597172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.271 qpair failed and we were unable to recover it. 00:28:27.271 [2024-12-11 10:06:36.597441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.271 [2024-12-11 10:06:36.597475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.271 qpair failed and we were unable to recover it. 00:28:27.271 [2024-12-11 10:06:36.597715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.271 [2024-12-11 10:06:36.597748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.271 qpair failed and we were unable to recover it. 00:28:27.271 [2024-12-11 10:06:36.597857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.271 [2024-12-11 10:06:36.597890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.271 qpair failed and we were unable to recover it. 00:28:27.271 [2024-12-11 10:06:36.598002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.271 [2024-12-11 10:06:36.598034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.271 qpair failed and we were unable to recover it. 00:28:27.271 [2024-12-11 10:06:36.598301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.271 [2024-12-11 10:06:36.598335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.271 qpair failed and we were unable to recover it. 
00:28:27.271 [2024-12-11 10:06:36.598522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.271 [2024-12-11 10:06:36.598554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:27.271 qpair failed and we were unable to recover it.
00:28:27.277 [... the same posix_sock_create / nvme_tcp_qpair_connect_sock error pair repeats back-to-back from 10:06:36.598662 through 10:06:36.643051, always for tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420, and every attempt ends with "qpair failed and we were unable to recover it."; roughly 200 duplicate repetitions elided ...]
00:28:27.277 [2024-12-11 10:06:36.643236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.277 [2024-12-11 10:06:36.643270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.277 qpair failed and we were unable to recover it. 00:28:27.277 [2024-12-11 10:06:36.643455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.277 [2024-12-11 10:06:36.643488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.277 qpair failed and we were unable to recover it. 00:28:27.277 [2024-12-11 10:06:36.643669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.277 [2024-12-11 10:06:36.643701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.277 qpair failed and we were unable to recover it. 00:28:27.277 [2024-12-11 10:06:36.643909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.277 [2024-12-11 10:06:36.643942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.277 qpair failed and we were unable to recover it. 00:28:27.277 [2024-12-11 10:06:36.644142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.277 [2024-12-11 10:06:36.644176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.277 qpair failed and we were unable to recover it. 00:28:27.277 [2024-12-11 10:06:36.644388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.277 [2024-12-11 10:06:36.644423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.277 qpair failed and we were unable to recover it. 00:28:27.277 [2024-12-11 10:06:36.644601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.277 [2024-12-11 10:06:36.644632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.277 qpair failed and we were unable to recover it. 00:28:27.277 [2024-12-11 10:06:36.644767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.277 [2024-12-11 10:06:36.644799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.277 qpair failed and we were unable to recover it. 00:28:27.277 [2024-12-11 10:06:36.644970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.277 [2024-12-11 10:06:36.645004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.277 qpair failed and we were unable to recover it. 00:28:27.277 [2024-12-11 10:06:36.645188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.277 [2024-12-11 10:06:36.645233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.277 qpair failed and we were unable to recover it. 
00:28:27.277 [2024-12-11 10:06:36.645477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.277 [2024-12-11 10:06:36.645511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.277 qpair failed and we were unable to recover it. 00:28:27.277 [2024-12-11 10:06:36.645770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.277 [2024-12-11 10:06:36.645804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.277 qpair failed and we were unable to recover it. 00:28:27.278 [2024-12-11 10:06:36.646027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.278 [2024-12-11 10:06:36.646060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.278 qpair failed and we were unable to recover it. 00:28:27.278 [2024-12-11 10:06:36.646322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.278 [2024-12-11 10:06:36.646355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.278 qpair failed and we were unable to recover it. 00:28:27.278 [2024-12-11 10:06:36.646529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.278 [2024-12-11 10:06:36.646563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.278 qpair failed and we were unable to recover it. 00:28:27.278 [2024-12-11 10:06:36.646681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.278 [2024-12-11 10:06:36.646713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.278 qpair failed and we were unable to recover it. 00:28:27.278 [2024-12-11 10:06:36.646885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.278 [2024-12-11 10:06:36.646918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.278 qpair failed and we were unable to recover it. 00:28:27.278 [2024-12-11 10:06:36.647118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.278 [2024-12-11 10:06:36.647151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.278 qpair failed and we were unable to recover it. 00:28:27.278 [2024-12-11 10:06:36.647333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.278 [2024-12-11 10:06:36.647367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.278 qpair failed and we were unable to recover it. 00:28:27.278 [2024-12-11 10:06:36.647541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.278 [2024-12-11 10:06:36.647574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.278 qpair failed and we were unable to recover it. 
00:28:27.278 [2024-12-11 10:06:36.647746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.278 [2024-12-11 10:06:36.647779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.278 qpair failed and we were unable to recover it. 00:28:27.278 [2024-12-11 10:06:36.648043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.278 [2024-12-11 10:06:36.648077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.278 qpair failed and we were unable to recover it. 00:28:27.278 [2024-12-11 10:06:36.648329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.278 [2024-12-11 10:06:36.648362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.278 qpair failed and we were unable to recover it. 00:28:27.278 [2024-12-11 10:06:36.648600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.278 [2024-12-11 10:06:36.648633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.278 qpair failed and we were unable to recover it. 00:28:27.278 [2024-12-11 10:06:36.648757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.278 [2024-12-11 10:06:36.648789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.278 qpair failed and we were unable to recover it. 00:28:27.278 [2024-12-11 10:06:36.648978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.278 [2024-12-11 10:06:36.649009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.278 qpair failed and we were unable to recover it. 00:28:27.278 [2024-12-11 10:06:36.649185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.278 [2024-12-11 10:06:36.649225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.278 qpair failed and we were unable to recover it. 00:28:27.278 [2024-12-11 10:06:36.649419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.278 [2024-12-11 10:06:36.649453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.278 qpair failed and we were unable to recover it. 00:28:27.278 [2024-12-11 10:06:36.649632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.278 [2024-12-11 10:06:36.649665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.278 qpair failed and we were unable to recover it. 00:28:27.278 [2024-12-11 10:06:36.649784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.278 [2024-12-11 10:06:36.649817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.278 qpair failed and we were unable to recover it. 
00:28:27.278 [2024-12-11 10:06:36.650112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.278 [2024-12-11 10:06:36.650145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.278 qpair failed and we were unable to recover it. 00:28:27.278 [2024-12-11 10:06:36.650383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.278 [2024-12-11 10:06:36.650423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.278 qpair failed and we were unable to recover it. 00:28:27.278 [2024-12-11 10:06:36.650617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.278 [2024-12-11 10:06:36.650650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.278 qpair failed and we were unable to recover it. 00:28:27.278 [2024-12-11 10:06:36.650773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.278 [2024-12-11 10:06:36.650805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.278 qpair failed and we were unable to recover it. 00:28:27.278 [2024-12-11 10:06:36.650923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.278 [2024-12-11 10:06:36.650956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.278 qpair failed and we were unable to recover it. 00:28:27.278 [2024-12-11 10:06:36.651226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.278 [2024-12-11 10:06:36.651260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.278 qpair failed and we were unable to recover it. 00:28:27.278 [2024-12-11 10:06:36.651473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.278 [2024-12-11 10:06:36.651507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.278 qpair failed and we were unable to recover it. 00:28:27.278 [2024-12-11 10:06:36.651682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.278 [2024-12-11 10:06:36.651714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.278 qpair failed and we were unable to recover it. 00:28:27.278 [2024-12-11 10:06:36.651928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.278 [2024-12-11 10:06:36.651961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.278 qpair failed and we were unable to recover it. 00:28:27.278 [2024-12-11 10:06:36.652082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.278 [2024-12-11 10:06:36.652115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.278 qpair failed and we were unable to recover it. 
00:28:27.278 [2024-12-11 10:06:36.652301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.278 [2024-12-11 10:06:36.652334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.278 qpair failed and we were unable to recover it. 00:28:27.278 [2024-12-11 10:06:36.652520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.278 [2024-12-11 10:06:36.652553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.278 qpair failed and we were unable to recover it. 00:28:27.278 [2024-12-11 10:06:36.652735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.278 [2024-12-11 10:06:36.652768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.278 qpair failed and we were unable to recover it. 00:28:27.278 [2024-12-11 10:06:36.652871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.278 [2024-12-11 10:06:36.652902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.278 qpair failed and we were unable to recover it. 00:28:27.278 [2024-12-11 10:06:36.653068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.278 [2024-12-11 10:06:36.653101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.278 qpair failed and we were unable to recover it. 00:28:27.278 [2024-12-11 10:06:36.653309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.278 [2024-12-11 10:06:36.653342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.278 qpair failed and we were unable to recover it. 00:28:27.278 [2024-12-11 10:06:36.653544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.278 [2024-12-11 10:06:36.653577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.278 qpair failed and we were unable to recover it. 00:28:27.278 [2024-12-11 10:06:36.653706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.278 [2024-12-11 10:06:36.653739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.278 qpair failed and we were unable to recover it. 00:28:27.278 [2024-12-11 10:06:36.653924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.278 [2024-12-11 10:06:36.653956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.278 qpair failed and we were unable to recover it. 00:28:27.278 [2024-12-11 10:06:36.654124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.278 [2024-12-11 10:06:36.654156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.278 qpair failed and we were unable to recover it. 
00:28:27.278 [2024-12-11 10:06:36.654358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.279 [2024-12-11 10:06:36.654391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.279 qpair failed and we were unable to recover it. 00:28:27.279 [2024-12-11 10:06:36.654591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.279 [2024-12-11 10:06:36.654624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.279 qpair failed and we were unable to recover it. 00:28:27.279 [2024-12-11 10:06:36.654806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.279 [2024-12-11 10:06:36.654840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.279 qpair failed and we were unable to recover it. 00:28:27.279 [2024-12-11 10:06:36.655018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.279 [2024-12-11 10:06:36.655050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.279 qpair failed and we were unable to recover it. 00:28:27.279 [2024-12-11 10:06:36.655238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.279 [2024-12-11 10:06:36.655273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.279 qpair failed and we were unable to recover it. 00:28:27.279 [2024-12-11 10:06:36.655452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.279 [2024-12-11 10:06:36.655485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.279 qpair failed and we were unable to recover it. 00:28:27.279 [2024-12-11 10:06:36.655657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.279 [2024-12-11 10:06:36.655689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.279 qpair failed and we were unable to recover it. 00:28:27.279 [2024-12-11 10:06:36.655927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.279 [2024-12-11 10:06:36.655960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.279 qpair failed and we were unable to recover it. 00:28:27.279 [2024-12-11 10:06:36.656139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.279 [2024-12-11 10:06:36.656172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.279 qpair failed and we were unable to recover it. 00:28:27.279 [2024-12-11 10:06:36.656506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.279 [2024-12-11 10:06:36.656540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.279 qpair failed and we were unable to recover it. 
00:28:27.279 [2024-12-11 10:06:36.656677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.279 [2024-12-11 10:06:36.656710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.279 qpair failed and we were unable to recover it. 00:28:27.279 [2024-12-11 10:06:36.656898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.279 [2024-12-11 10:06:36.656931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.279 qpair failed and we were unable to recover it. 00:28:27.279 [2024-12-11 10:06:36.657123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.279 [2024-12-11 10:06:36.657157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.279 qpair failed and we were unable to recover it. 00:28:27.279 [2024-12-11 10:06:36.657340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.279 [2024-12-11 10:06:36.657374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.279 qpair failed and we were unable to recover it. 00:28:27.279 [2024-12-11 10:06:36.657569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.279 [2024-12-11 10:06:36.657602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.279 qpair failed and we were unable to recover it. 00:28:27.279 [2024-12-11 10:06:36.657838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.279 [2024-12-11 10:06:36.657870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.279 qpair failed and we were unable to recover it. 00:28:27.279 [2024-12-11 10:06:36.658062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.279 [2024-12-11 10:06:36.658093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.279 qpair failed and we were unable to recover it. 00:28:27.279 [2024-12-11 10:06:36.658235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.279 [2024-12-11 10:06:36.658268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.279 qpair failed and we were unable to recover it. 00:28:27.279 [2024-12-11 10:06:36.658480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.279 [2024-12-11 10:06:36.658511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.279 qpair failed and we were unable to recover it. 00:28:27.279 [2024-12-11 10:06:36.658694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.279 [2024-12-11 10:06:36.658727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.279 qpair failed and we were unable to recover it. 
00:28:27.279 [2024-12-11 10:06:36.658914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.279 [2024-12-11 10:06:36.658947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.279 qpair failed and we were unable to recover it. 00:28:27.279 [2024-12-11 10:06:36.659188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.279 [2024-12-11 10:06:36.659238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.279 qpair failed and we were unable to recover it. 00:28:27.279 [2024-12-11 10:06:36.659412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.279 [2024-12-11 10:06:36.659445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.279 qpair failed and we were unable to recover it. 00:28:27.279 [2024-12-11 10:06:36.659638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.279 [2024-12-11 10:06:36.659670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.279 qpair failed and we were unable to recover it. 00:28:27.279 [2024-12-11 10:06:36.659853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.279 [2024-12-11 10:06:36.659885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.279 qpair failed and we were unable to recover it. 00:28:27.279 [2024-12-11 10:06:36.659996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.279 [2024-12-11 10:06:36.660029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.279 qpair failed and we were unable to recover it. 00:28:27.279 [2024-12-11 10:06:36.660148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.279 [2024-12-11 10:06:36.660181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.279 qpair failed and we were unable to recover it. 00:28:27.279 [2024-12-11 10:06:36.660388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.279 [2024-12-11 10:06:36.660425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.279 qpair failed and we were unable to recover it. 00:28:27.279 [2024-12-11 10:06:36.660598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.279 [2024-12-11 10:06:36.660631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.279 qpair failed and we were unable to recover it. 00:28:27.279 [2024-12-11 10:06:36.660746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.279 [2024-12-11 10:06:36.660780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.279 qpair failed and we were unable to recover it. 
00:28:27.279 [2024-12-11 10:06:36.660952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.279 [2024-12-11 10:06:36.660985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.279 qpair failed and we were unable to recover it. 00:28:27.279 [2024-12-11 10:06:36.661174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.279 [2024-12-11 10:06:36.661206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.279 qpair failed and we were unable to recover it. 00:28:27.279 [2024-12-11 10:06:36.661461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.279 [2024-12-11 10:06:36.661494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.279 qpair failed and we were unable to recover it. 00:28:27.279 [2024-12-11 10:06:36.661749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.279 [2024-12-11 10:06:36.661781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.279 qpair failed and we were unable to recover it. 00:28:27.279 [2024-12-11 10:06:36.661917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.279 [2024-12-11 10:06:36.661949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.279 qpair failed and we were unable to recover it. 00:28:27.279 [2024-12-11 10:06:36.662206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.279 [2024-12-11 10:06:36.662250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.279 qpair failed and we were unable to recover it. 00:28:27.279 [2024-12-11 10:06:36.662505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.279 [2024-12-11 10:06:36.662538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.279 qpair failed and we were unable to recover it. 00:28:27.279 [2024-12-11 10:06:36.662722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.279 [2024-12-11 10:06:36.662755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.279 qpair failed and we were unable to recover it. 00:28:27.279 [2024-12-11 10:06:36.662999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.279 [2024-12-11 10:06:36.663033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.279 qpair failed and we were unable to recover it. 00:28:27.280 [2024-12-11 10:06:36.663238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.280 [2024-12-11 10:06:36.663272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.280 qpair failed and we were unable to recover it. 
00:28:27.280 [2024-12-11 10:06:36.663491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.280 [2024-12-11 10:06:36.663524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.280 qpair failed and we were unable to recover it. 00:28:27.280 [2024-12-11 10:06:36.663702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.280 [2024-12-11 10:06:36.663735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.280 qpair failed and we were unable to recover it. 00:28:27.280 [2024-12-11 10:06:36.663870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.280 [2024-12-11 10:06:36.663904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.280 qpair failed and we were unable to recover it. 00:28:27.280 [2024-12-11 10:06:36.664021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.280 [2024-12-11 10:06:36.664052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.280 qpair failed and we were unable to recover it. 00:28:27.280 [2024-12-11 10:06:36.664265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.280 [2024-12-11 10:06:36.664300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.280 qpair failed and we were unable to recover it. 00:28:27.280 [2024-12-11 10:06:36.664426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.280 [2024-12-11 10:06:36.664458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.280 qpair failed and we were unable to recover it. 00:28:27.280 [2024-12-11 10:06:36.664645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.280 [2024-12-11 10:06:36.664678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.280 qpair failed and we were unable to recover it. 00:28:27.280 [2024-12-11 10:06:36.664944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.280 [2024-12-11 10:06:36.664977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.280 qpair failed and we were unable to recover it. 00:28:27.280 [2024-12-11 10:06:36.665104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.280 [2024-12-11 10:06:36.665136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.280 qpair failed and we were unable to recover it. 00:28:27.280 [2024-12-11 10:06:36.665380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.280 [2024-12-11 10:06:36.665415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.280 qpair failed and we were unable to recover it. 
00:28:27.280 [2024-12-11 10:06:36.665547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.280 [2024-12-11 10:06:36.665578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.280 qpair failed and we were unable to recover it. 00:28:27.280 [2024-12-11 10:06:36.665757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.280 [2024-12-11 10:06:36.665790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.280 qpair failed and we were unable to recover it. 00:28:27.280 [2024-12-11 10:06:36.665924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.280 [2024-12-11 10:06:36.665956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.280 qpair failed and we were unable to recover it. 00:28:27.280 [2024-12-11 10:06:36.666141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.280 [2024-12-11 10:06:36.666173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.280 qpair failed and we were unable to recover it. 00:28:27.280 [2024-12-11 10:06:36.666371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.280 [2024-12-11 10:06:36.666403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.280 qpair failed and we were unable to recover it. 00:28:27.280 [2024-12-11 10:06:36.666671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.280 [2024-12-11 10:06:36.666704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.280 qpair failed and we were unable to recover it. 00:28:27.280 [2024-12-11 10:06:36.666944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.280 [2024-12-11 10:06:36.666978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.280 qpair failed and we were unable to recover it. 00:28:27.280 [2024-12-11 10:06:36.667110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.280 [2024-12-11 10:06:36.667143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.280 qpair failed and we were unable to recover it. 00:28:27.280 [2024-12-11 10:06:36.667259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.280 [2024-12-11 10:06:36.667293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.280 qpair failed and we were unable to recover it. 00:28:27.280 [2024-12-11 10:06:36.667463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.280 [2024-12-11 10:06:36.667495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.280 qpair failed and we were unable to recover it. 
00:28:27.280 [2024-12-11 10:06:36.667681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.280 [2024-12-11 10:06:36.667713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.280 qpair failed and we were unable to recover it. 00:28:27.280 [2024-12-11 10:06:36.667911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.280 [2024-12-11 10:06:36.667950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.280 qpair failed and we were unable to recover it. 00:28:27.280 [2024-12-11 10:06:36.668055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.280 [2024-12-11 10:06:36.668089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.280 qpair failed and we were unable to recover it. 00:28:27.280 [2024-12-11 10:06:36.668259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.280 [2024-12-11 10:06:36.668292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.280 qpair failed and we were unable to recover it. 00:28:27.280 [2024-12-11 10:06:36.668493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.280 [2024-12-11 10:06:36.668525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.280 qpair failed and we were unable to recover it. 00:28:27.280 [2024-12-11 10:06:36.668648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.280 [2024-12-11 10:06:36.668680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.280 qpair failed and we were unable to recover it. 00:28:27.280 [2024-12-11 10:06:36.668885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.280 [2024-12-11 10:06:36.668918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.280 qpair failed and we were unable to recover it. 00:28:27.280 [2024-12-11 10:06:36.669159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.280 [2024-12-11 10:06:36.669193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.280 qpair failed and we were unable to recover it. 00:28:27.280 [2024-12-11 10:06:36.669320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.280 [2024-12-11 10:06:36.669353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.280 qpair failed and we were unable to recover it. 00:28:27.280 [2024-12-11 10:06:36.669530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.280 [2024-12-11 10:06:36.669563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.280 qpair failed and we were unable to recover it. 
00:28:27.280 [2024-12-11 10:06:36.669813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.280 [2024-12-11 10:06:36.669846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.280 qpair failed and we were unable to recover it. 00:28:27.280 [2024-12-11 10:06:36.670023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.280 [2024-12-11 10:06:36.670054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.280 qpair failed and we were unable to recover it. 00:28:27.280 [2024-12-11 10:06:36.670179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.280 [2024-12-11 10:06:36.670212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.280 qpair failed and we were unable to recover it. 00:28:27.280 [2024-12-11 10:06:36.670417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.280 [2024-12-11 10:06:36.670450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.280 qpair failed and we were unable to recover it. 00:28:27.280 [2024-12-11 10:06:36.670635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.280 [2024-12-11 10:06:36.670668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.280 qpair failed and we were unable to recover it. 00:28:27.280 [2024-12-11 10:06:36.670793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.280 [2024-12-11 10:06:36.670826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.280 qpair failed and we were unable to recover it. 00:28:27.280 [2024-12-11 10:06:36.671069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.280 [2024-12-11 10:06:36.671101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.280 qpair failed and we were unable to recover it. 00:28:27.280 [2024-12-11 10:06:36.671236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.280 [2024-12-11 10:06:36.671271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.280 qpair failed and we were unable to recover it. 00:28:27.281 [2024-12-11 10:06:36.671529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.281 [2024-12-11 10:06:36.671563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.281 qpair failed and we were unable to recover it. 00:28:27.281 [2024-12-11 10:06:36.671801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.281 [2024-12-11 10:06:36.671834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.281 qpair failed and we were unable to recover it. 
00:28:27.281 [2024-12-11 10:06:36.672026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.281 [2024-12-11 10:06:36.672058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:27.281 qpair failed and we were unable to recover it.
00:28:27.281 [... the identical three-line failure (connect() failed, errno = 111 -> sock connection error -> qpair failed and we were unable to recover it.) repeats back-to-back for tqpair=0x7fb630000b90, only the timestamps advancing, from 10:06:36.672251 through 10:06:36.681362 ...]
00:28:27.282 [2024-12-11 10:06:36.681582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.282 [2024-12-11 10:06:36.681655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.282 qpair failed and we were unable to recover it.
00:28:27.282 [... the same failure then repeats for tqpair=0xfe4500, still against addr=10.0.0.2, port=4420, from 10:06:36.681859 through the final record below ...]
00:28:27.286 [2024-12-11 10:06:36.714316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.286 [2024-12-11 10:06:36.714349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.286 qpair failed and we were unable to recover it.
00:28:27.286 [2024-12-11 10:06:36.714527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.286 [2024-12-11 10:06:36.714559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.286 qpair failed and we were unable to recover it. 00:28:27.286 [2024-12-11 10:06:36.714672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.286 [2024-12-11 10:06:36.714715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.286 qpair failed and we were unable to recover it. 00:28:27.286 [2024-12-11 10:06:36.714953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.286 [2024-12-11 10:06:36.714985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.286 qpair failed and we were unable to recover it. 00:28:27.286 [2024-12-11 10:06:36.715095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.286 [2024-12-11 10:06:36.715128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.286 qpair failed and we were unable to recover it. 00:28:27.286 [2024-12-11 10:06:36.715239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.286 [2024-12-11 10:06:36.715271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.286 qpair failed and we were unable to recover it. 00:28:27.286 [2024-12-11 10:06:36.715489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.286 [2024-12-11 10:06:36.715521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.286 qpair failed and we were unable to recover it. 00:28:27.286 [2024-12-11 10:06:36.715644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.286 [2024-12-11 10:06:36.715675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.286 qpair failed and we were unable to recover it. 00:28:27.286 [2024-12-11 10:06:36.715860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.286 [2024-12-11 10:06:36.715892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.286 qpair failed and we were unable to recover it. 00:28:27.286 [2024-12-11 10:06:36.715995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.286 [2024-12-11 10:06:36.716028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.286 qpair failed and we were unable to recover it. 00:28:27.287 [2024-12-11 10:06:36.716150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.287 [2024-12-11 10:06:36.716182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.287 qpair failed and we were unable to recover it. 
00:28:27.287 [2024-12-11 10:06:36.716382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.287 [2024-12-11 10:06:36.716417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.287 qpair failed and we were unable to recover it. 00:28:27.287 [2024-12-11 10:06:36.716676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.287 [2024-12-11 10:06:36.716709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.287 qpair failed and we were unable to recover it. 00:28:27.287 [2024-12-11 10:06:36.716904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.287 [2024-12-11 10:06:36.716938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.287 qpair failed and we were unable to recover it. 00:28:27.287 [2024-12-11 10:06:36.717053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.287 [2024-12-11 10:06:36.717087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.287 qpair failed and we were unable to recover it. 00:28:27.287 [2024-12-11 10:06:36.717206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.287 [2024-12-11 10:06:36.717249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.287 qpair failed and we were unable to recover it. 00:28:27.287 [2024-12-11 10:06:36.717589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.287 [2024-12-11 10:06:36.717662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.287 qpair failed and we were unable to recover it. 00:28:27.287 [2024-12-11 10:06:36.717853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.287 [2024-12-11 10:06:36.717890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.287 qpair failed and we were unable to recover it. 00:28:27.287 [2024-12-11 10:06:36.718010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.287 [2024-12-11 10:06:36.718041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.287 qpair failed and we were unable to recover it. 00:28:27.287 [2024-12-11 10:06:36.718174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.287 [2024-12-11 10:06:36.718206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.287 qpair failed and we were unable to recover it. 00:28:27.287 [2024-12-11 10:06:36.718444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.287 [2024-12-11 10:06:36.718477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.287 qpair failed and we were unable to recover it. 
00:28:27.287 [2024-12-11 10:06:36.718654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.287 [2024-12-11 10:06:36.718687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.287 qpair failed and we were unable to recover it. 00:28:27.287 [2024-12-11 10:06:36.718885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.287 [2024-12-11 10:06:36.718919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.287 qpair failed and we were unable to recover it. 00:28:27.287 [2024-12-11 10:06:36.719110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.287 [2024-12-11 10:06:36.719143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.287 qpair failed and we were unable to recover it. 00:28:27.287 [2024-12-11 10:06:36.719270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.287 [2024-12-11 10:06:36.719303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.287 qpair failed and we were unable to recover it. 00:28:27.287 [2024-12-11 10:06:36.719489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.287 [2024-12-11 10:06:36.719522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.287 qpair failed and we were unable to recover it. 00:28:27.287 [2024-12-11 10:06:36.719644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.287 [2024-12-11 10:06:36.719677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.287 qpair failed and we were unable to recover it. 00:28:27.287 [2024-12-11 10:06:36.719874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.287 [2024-12-11 10:06:36.719907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.287 qpair failed and we were unable to recover it. 00:28:27.287 [2024-12-11 10:06:36.720024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.287 [2024-12-11 10:06:36.720057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.287 qpair failed and we were unable to recover it. 00:28:27.287 [2024-12-11 10:06:36.720241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.287 [2024-12-11 10:06:36.720286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.287 qpair failed and we were unable to recover it. 00:28:27.287 [2024-12-11 10:06:36.720398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.287 [2024-12-11 10:06:36.720431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.287 qpair failed and we were unable to recover it. 
00:28:27.287 [2024-12-11 10:06:36.720537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.287 [2024-12-11 10:06:36.720569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.287 qpair failed and we were unable to recover it. 00:28:27.287 [2024-12-11 10:06:36.720741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.287 [2024-12-11 10:06:36.720773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.287 qpair failed and we were unable to recover it. 00:28:27.287 [2024-12-11 10:06:36.720954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.287 [2024-12-11 10:06:36.720986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.287 qpair failed and we were unable to recover it. 00:28:27.287 [2024-12-11 10:06:36.721159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.287 [2024-12-11 10:06:36.721189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.287 qpair failed and we were unable to recover it. 00:28:27.287 [2024-12-11 10:06:36.721437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.287 [2024-12-11 10:06:36.721470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.287 qpair failed and we were unable to recover it. 00:28:27.287 [2024-12-11 10:06:36.721632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.287 [2024-12-11 10:06:36.721664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.287 qpair failed and we were unable to recover it. 00:28:27.287 [2024-12-11 10:06:36.721771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.287 [2024-12-11 10:06:36.721803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.287 qpair failed and we were unable to recover it. 00:28:27.287 [2024-12-11 10:06:36.722006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.287 [2024-12-11 10:06:36.722038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.287 qpair failed and we were unable to recover it. 00:28:27.287 [2024-12-11 10:06:36.722290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.287 [2024-12-11 10:06:36.722323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.287 qpair failed and we were unable to recover it. 00:28:27.287 [2024-12-11 10:06:36.722495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.287 [2024-12-11 10:06:36.722527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.287 qpair failed and we were unable to recover it. 
00:28:27.287 [2024-12-11 10:06:36.722701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.287 [2024-12-11 10:06:36.722733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.287 qpair failed and we were unable to recover it. 00:28:27.287 [2024-12-11 10:06:36.722865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.287 [2024-12-11 10:06:36.722898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.287 qpair failed and we were unable to recover it. 00:28:27.287 [2024-12-11 10:06:36.723094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.287 [2024-12-11 10:06:36.723126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.287 qpair failed and we were unable to recover it. 00:28:27.287 [2024-12-11 10:06:36.723255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.287 [2024-12-11 10:06:36.723288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.287 qpair failed and we were unable to recover it. 00:28:27.287 [2024-12-11 10:06:36.723548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.287 [2024-12-11 10:06:36.723580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.287 qpair failed and we were unable to recover it. 00:28:27.287 [2024-12-11 10:06:36.723684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.287 [2024-12-11 10:06:36.723717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.287 qpair failed and we were unable to recover it. 00:28:27.287 [2024-12-11 10:06:36.723828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.287 [2024-12-11 10:06:36.723859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.287 qpair failed and we were unable to recover it. 00:28:27.287 [2024-12-11 10:06:36.724000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.287 [2024-12-11 10:06:36.724033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.287 qpair failed and we were unable to recover it. 00:28:27.288 [2024-12-11 10:06:36.724273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.288 [2024-12-11 10:06:36.724307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.288 qpair failed and we were unable to recover it. 00:28:27.288 [2024-12-11 10:06:36.724432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.288 [2024-12-11 10:06:36.724464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.288 qpair failed and we were unable to recover it. 
00:28:27.288 [2024-12-11 10:06:36.724569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.288 [2024-12-11 10:06:36.724600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.288 qpair failed and we were unable to recover it. 00:28:27.288 [2024-12-11 10:06:36.724786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.288 [2024-12-11 10:06:36.724819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.288 qpair failed and we were unable to recover it. 00:28:27.288 [2024-12-11 10:06:36.724931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.288 [2024-12-11 10:06:36.724963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.288 qpair failed and we were unable to recover it. 00:28:27.288 [2024-12-11 10:06:36.725136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.288 [2024-12-11 10:06:36.725167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.288 qpair failed and we were unable to recover it. 00:28:27.288 [2024-12-11 10:06:36.725350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.288 [2024-12-11 10:06:36.725382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.288 qpair failed and we were unable to recover it. 00:28:27.288 [2024-12-11 10:06:36.725624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.288 [2024-12-11 10:06:36.725695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.288 qpair failed and we were unable to recover it. 00:28:27.288 [2024-12-11 10:06:36.725839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.288 [2024-12-11 10:06:36.725876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.288 qpair failed and we were unable to recover it. 00:28:27.288 [2024-12-11 10:06:36.726141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.288 [2024-12-11 10:06:36.726175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.288 qpair failed and we were unable to recover it. 00:28:27.288 [2024-12-11 10:06:36.726303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.288 [2024-12-11 10:06:36.726338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.288 qpair failed and we were unable to recover it. 00:28:27.288 [2024-12-11 10:06:36.726521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.288 [2024-12-11 10:06:36.726554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.288 qpair failed and we were unable to recover it. 
00:28:27.288 [2024-12-11 10:06:36.726667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.288 [2024-12-11 10:06:36.726700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.288 qpair failed and we were unable to recover it. 00:28:27.288 [2024-12-11 10:06:36.726808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.288 [2024-12-11 10:06:36.726841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.288 qpair failed and we were unable to recover it. 00:28:27.288 [2024-12-11 10:06:36.727082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.288 [2024-12-11 10:06:36.727114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.288 qpair failed and we were unable to recover it. 00:28:27.288 [2024-12-11 10:06:36.727389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.288 [2024-12-11 10:06:36.727424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.288 qpair failed and we were unable to recover it. 00:28:27.288 [2024-12-11 10:06:36.727552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.288 [2024-12-11 10:06:36.727585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.288 qpair failed and we were unable to recover it. 00:28:27.288 [2024-12-11 10:06:36.727782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.288 [2024-12-11 10:06:36.727816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.288 qpair failed and we were unable to recover it. 00:28:27.288 [2024-12-11 10:06:36.728053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.288 [2024-12-11 10:06:36.728087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.288 qpair failed and we were unable to recover it. 00:28:27.288 [2024-12-11 10:06:36.728196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.288 [2024-12-11 10:06:36.728238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.288 qpair failed and we were unable to recover it. 00:28:27.288 [2024-12-11 10:06:36.728362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.288 [2024-12-11 10:06:36.728404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.288 qpair failed and we were unable to recover it. 00:28:27.288 [2024-12-11 10:06:36.728586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.288 [2024-12-11 10:06:36.728619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.288 qpair failed and we were unable to recover it. 
00:28:27.288 [2024-12-11 10:06:36.728829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.288 [2024-12-11 10:06:36.728863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.288 qpair failed and we were unable to recover it. 00:28:27.288 [2024-12-11 10:06:36.729041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.288 [2024-12-11 10:06:36.729074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.288 qpair failed and we were unable to recover it. 00:28:27.288 [2024-12-11 10:06:36.729265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.288 [2024-12-11 10:06:36.729298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.288 qpair failed and we were unable to recover it. 00:28:27.288 [2024-12-11 10:06:36.729492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.288 [2024-12-11 10:06:36.729525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.288 qpair failed and we were unable to recover it. 00:28:27.288 [2024-12-11 10:06:36.729650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.288 [2024-12-11 10:06:36.729683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.288 qpair failed and we were unable to recover it. 00:28:27.288 [2024-12-11 10:06:36.729868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.288 [2024-12-11 10:06:36.729901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.288 qpair failed and we were unable to recover it. 00:28:27.288 [2024-12-11 10:06:36.730152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.288 [2024-12-11 10:06:36.730187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.288 qpair failed and we were unable to recover it. 00:28:27.288 [2024-12-11 10:06:36.730369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.288 [2024-12-11 10:06:36.730402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.288 qpair failed and we were unable to recover it. 00:28:27.288 [2024-12-11 10:06:36.730576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.288 [2024-12-11 10:06:36.730609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.288 qpair failed and we were unable to recover it. 00:28:27.288 [2024-12-11 10:06:36.730731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.288 [2024-12-11 10:06:36.730765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.288 qpair failed and we were unable to recover it. 
00:28:27.288 [2024-12-11 10:06:36.731033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.288 [2024-12-11 10:06:36.731067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.288 qpair failed and we were unable to recover it. 00:28:27.288 [2024-12-11 10:06:36.731194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.288 [2024-12-11 10:06:36.731237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.288 qpair failed and we were unable to recover it. 00:28:27.288 [2024-12-11 10:06:36.731362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.288 [2024-12-11 10:06:36.731396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.288 qpair failed and we were unable to recover it. 00:28:27.288 [2024-12-11 10:06:36.731577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.288 [2024-12-11 10:06:36.731610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.288 qpair failed and we were unable to recover it. 00:28:27.288 [2024-12-11 10:06:36.731737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.288 [2024-12-11 10:06:36.731769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.288 qpair failed and we were unable to recover it. 00:28:27.288 [2024-12-11 10:06:36.731959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.288 [2024-12-11 10:06:36.731993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.288 qpair failed and we were unable to recover it. 00:28:27.288 [2024-12-11 10:06:36.732171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.289 [2024-12-11 10:06:36.732205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.289 qpair failed and we were unable to recover it. 00:28:27.289 [2024-12-11 10:06:36.732531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.289 [2024-12-11 10:06:36.732564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.289 qpair failed and we were unable to recover it. 00:28:27.289 [2024-12-11 10:06:36.732758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.289 [2024-12-11 10:06:36.732790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.289 qpair failed and we were unable to recover it. 00:28:27.289 [2024-12-11 10:06:36.732976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.289 [2024-12-11 10:06:36.733009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.289 qpair failed and we were unable to recover it. 
00:28:27.289 [2024-12-11 10:06:36.733272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.289 [2024-12-11 10:06:36.733306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.289 qpair failed and we were unable to recover it. 00:28:27.289 [2024-12-11 10:06:36.733542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.289 [2024-12-11 10:06:36.733575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.289 qpair failed and we were unable to recover it. 00:28:27.289 [2024-12-11 10:06:36.733812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.289 [2024-12-11 10:06:36.733845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.289 qpair failed and we were unable to recover it. 00:28:27.289 [2024-12-11 10:06:36.734029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.289 [2024-12-11 10:06:36.734063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.289 qpair failed and we were unable to recover it. 00:28:27.289 [2024-12-11 10:06:36.734259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.289 [2024-12-11 10:06:36.734292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.289 qpair failed and we were unable to recover it. 00:28:27.289 [2024-12-11 10:06:36.734603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.289 [2024-12-11 10:06:36.734675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.289 qpair failed and we were unable to recover it. 00:28:27.289 [2024-12-11 10:06:36.734880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.289 [2024-12-11 10:06:36.734915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.289 qpair failed and we were unable to recover it. 00:28:27.289 [2024-12-11 10:06:36.735095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.289 [2024-12-11 10:06:36.735129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.289 qpair failed and we were unable to recover it. 00:28:27.289 [2024-12-11 10:06:36.735335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.289 [2024-12-11 10:06:36.735370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.289 qpair failed and we were unable to recover it. 00:28:27.289 [2024-12-11 10:06:36.735557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.289 [2024-12-11 10:06:36.735591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.289 qpair failed and we were unable to recover it. 
00:28:27.289 [2024-12-11 10:06:36.735854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.289 [2024-12-11 10:06:36.735886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.289 qpair failed and we were unable to recover it. 00:28:27.289 [2024-12-11 10:06:36.736077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.289 [2024-12-11 10:06:36.736109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.289 qpair failed and we were unable to recover it. 00:28:27.289 [2024-12-11 10:06:36.736317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.289 [2024-12-11 10:06:36.736352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.289 qpair failed and we were unable to recover it. 00:28:27.289 [2024-12-11 10:06:36.736608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.289 [2024-12-11 10:06:36.736641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.289 qpair failed and we were unable to recover it. 00:28:27.289 [2024-12-11 10:06:36.736857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.289 [2024-12-11 10:06:36.736889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.289 qpair failed and we were unable to recover it. 00:28:27.289 [2024-12-11 10:06:36.737107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.289 [2024-12-11 10:06:36.737140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.289 qpair failed and we were unable to recover it. 00:28:27.289 [2024-12-11 10:06:36.737342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.289 [2024-12-11 10:06:36.737377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.289 qpair failed and we were unable to recover it. 00:28:27.289 [2024-12-11 10:06:36.737554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.289 [2024-12-11 10:06:36.737586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.289 qpair failed and we were unable to recover it. 00:28:27.289 [2024-12-11 10:06:36.737784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.289 [2024-12-11 10:06:36.737826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.289 qpair failed and we were unable to recover it. 00:28:27.289 [2024-12-11 10:06:36.737952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.289 [2024-12-11 10:06:36.737985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.289 qpair failed and we were unable to recover it. 
00:28:27.289 [2024-12-11 10:06:36.738163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.289 [2024-12-11 10:06:36.738195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.289 qpair failed and we were unable to recover it. 00:28:27.289 [2024-12-11 10:06:36.738384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.289 [2024-12-11 10:06:36.738417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.289 qpair failed and we were unable to recover it. 00:28:27.289 [2024-12-11 10:06:36.738599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.289 [2024-12-11 10:06:36.738632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.289 qpair failed and we were unable to recover it. 00:28:27.289 [2024-12-11 10:06:36.738756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.289 [2024-12-11 10:06:36.738788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.289 qpair failed and we were unable to recover it. 00:28:27.289 [2024-12-11 10:06:36.739027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.289 [2024-12-11 10:06:36.739060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.289 qpair failed and we were unable to recover it. 00:28:27.289 [2024-12-11 10:06:36.739239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.289 [2024-12-11 10:06:36.739273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.289 qpair failed and we were unable to recover it. 00:28:27.289 [2024-12-11 10:06:36.739404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.289 [2024-12-11 10:06:36.739436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.289 qpair failed and we were unable to recover it. 00:28:27.289 [2024-12-11 10:06:36.739621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.289 [2024-12-11 10:06:36.739654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.289 qpair failed and we were unable to recover it. 00:28:27.289 [2024-12-11 10:06:36.739840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.289 [2024-12-11 10:06:36.739872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.289 qpair failed and we were unable to recover it. 00:28:27.289 [2024-12-11 10:06:36.740108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.289 [2024-12-11 10:06:36.740141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.289 qpair failed and we were unable to recover it. 
00:28:27.289 [2024-12-11 10:06:36.740379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.290 [2024-12-11 10:06:36.740412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.290 qpair failed and we were unable to recover it. 00:28:27.290 [2024-12-11 10:06:36.740681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.290 [2024-12-11 10:06:36.740714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.290 qpair failed and we were unable to recover it. 00:28:27.290 [2024-12-11 10:06:36.740849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.290 [2024-12-11 10:06:36.740882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.290 qpair failed and we were unable to recover it. 00:28:27.290 [2024-12-11 10:06:36.741073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.290 [2024-12-11 10:06:36.741105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.290 qpair failed and we were unable to recover it. 00:28:27.290 [2024-12-11 10:06:36.741351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.290 [2024-12-11 10:06:36.741385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.290 qpair failed and we were unable to recover it. 00:28:27.290 [2024-12-11 10:06:36.741596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.290 [2024-12-11 10:06:36.741629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.290 qpair failed and we were unable to recover it. 00:28:27.290 [2024-12-11 10:06:36.741801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.290 [2024-12-11 10:06:36.741834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.290 qpair failed and we were unable to recover it. 00:28:27.290 [2024-12-11 10:06:36.742114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.290 [2024-12-11 10:06:36.742146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.290 qpair failed and we were unable to recover it. 00:28:27.290 [2024-12-11 10:06:36.742350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.290 [2024-12-11 10:06:36.742384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.290 qpair failed and we were unable to recover it. 00:28:27.290 [2024-12-11 10:06:36.742560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.290 [2024-12-11 10:06:36.742593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.290 qpair failed and we were unable to recover it. 
00:28:27.290 [2024-12-11 10:06:36.742859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.290 [2024-12-11 10:06:36.742892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.290 qpair failed and we were unable to recover it. 00:28:27.290 [2024-12-11 10:06:36.743086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.290 [2024-12-11 10:06:36.743119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.290 qpair failed and we were unable to recover it. 00:28:27.290 [2024-12-11 10:06:36.743254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.290 [2024-12-11 10:06:36.743287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.290 qpair failed and we were unable to recover it. 00:28:27.290 [2024-12-11 10:06:36.743461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.290 [2024-12-11 10:06:36.743493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.290 qpair failed and we were unable to recover it. 00:28:27.290 [2024-12-11 10:06:36.743598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.290 [2024-12-11 10:06:36.743631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.290 qpair failed and we were unable to recover it. 00:28:27.290 [2024-12-11 10:06:36.743864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.290 [2024-12-11 10:06:36.743940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.290 qpair failed and we were unable to recover it. 00:28:27.290 [2024-12-11 10:06:36.744172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.290 [2024-12-11 10:06:36.744208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.290 qpair failed and we were unable to recover it. 00:28:27.290 [2024-12-11 10:06:36.744501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.290 [2024-12-11 10:06:36.744537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.290 qpair failed and we were unable to recover it. 00:28:27.290 [2024-12-11 10:06:36.744669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.290 [2024-12-11 10:06:36.744702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.290 qpair failed and we were unable to recover it. 00:28:27.290 [2024-12-11 10:06:36.744837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.290 [2024-12-11 10:06:36.744872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.290 qpair failed and we were unable to recover it. 
00:28:27.576 [2024-12-11 10:06:36.775947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.576 [2024-12-11 10:06:36.775981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.576 qpair failed and we were unable to recover it.
00:28:27.576 [2024-12-11 10:06:36.776153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.576 [2024-12-11 10:06:36.776186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.576 qpair failed and we were unable to recover it.
00:28:27.576 [2024-12-11 10:06:36.776358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.576 [2024-12-11 10:06:36.776442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.576 qpair failed and we were unable to recover it.
00:28:27.576 [2024-12-11 10:06:36.776756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.576 [2024-12-11 10:06:36.776827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:27.576 qpair failed and we were unable to recover it.
00:28:27.576 [2024-12-11 10:06:36.777046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.576 [2024-12-11 10:06:36.777084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:27.576 qpair failed and we were unable to recover it.
00:28:27.576 [2024-12-11 10:06:36.777208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.576 [2024-12-11 10:06:36.777254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:27.576 qpair failed and we were unable to recover it.
00:28:27.576 [2024-12-11 10:06:36.777501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.576 [2024-12-11 10:06:36.777536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:27.576 qpair failed and we were unable to recover it.
00:28:27.576 [2024-12-11 10:06:36.777722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.576 [2024-12-11 10:06:36.777756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:27.576 qpair failed and we were unable to recover it.
00:28:27.576 [2024-12-11 10:06:36.777892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.576 [2024-12-11 10:06:36.777924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:27.576 qpair failed and we were unable to recover it.
00:28:27.576 [2024-12-11 10:06:36.778041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.576 [2024-12-11 10:06:36.778074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:27.576 qpair failed and we were unable to recover it.
00:28:27.577 [2024-12-11 10:06:36.789232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.577 [2024-12-11 10:06:36.789266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.577 qpair failed and we were unable to recover it. 00:28:27.577 [2024-12-11 10:06:36.789456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.577 [2024-12-11 10:06:36.789489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.577 qpair failed and we were unable to recover it. 00:28:27.577 [2024-12-11 10:06:36.789683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.577 [2024-12-11 10:06:36.789714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.577 qpair failed and we were unable to recover it. 00:28:27.577 [2024-12-11 10:06:36.789836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.577 [2024-12-11 10:06:36.789869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.577 qpair failed and we were unable to recover it. 00:28:27.577 [2024-12-11 10:06:36.790145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.577 [2024-12-11 10:06:36.790177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.577 qpair failed and we were unable to recover it. 00:28:27.577 [2024-12-11 10:06:36.790297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.577 [2024-12-11 10:06:36.790331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.577 qpair failed and we were unable to recover it. 00:28:27.577 [2024-12-11 10:06:36.790572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.578 [2024-12-11 10:06:36.790604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.578 qpair failed and we were unable to recover it. 00:28:27.578 [2024-12-11 10:06:36.790797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.578 [2024-12-11 10:06:36.790830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.578 qpair failed and we were unable to recover it. 00:28:27.578 [2024-12-11 10:06:36.791029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.578 [2024-12-11 10:06:36.791062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.578 qpair failed and we were unable to recover it. 00:28:27.578 [2024-12-11 10:06:36.791311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.578 [2024-12-11 10:06:36.791345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.578 qpair failed and we were unable to recover it. 
00:28:27.578 [2024-12-11 10:06:36.791623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.578 [2024-12-11 10:06:36.791655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.578 qpair failed and we were unable to recover it. 00:28:27.578 [2024-12-11 10:06:36.791846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.578 [2024-12-11 10:06:36.791878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.578 qpair failed and we were unable to recover it. 00:28:27.578 [2024-12-11 10:06:36.792012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.578 [2024-12-11 10:06:36.792045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.578 qpair failed and we were unable to recover it. 00:28:27.578 [2024-12-11 10:06:36.792311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.578 [2024-12-11 10:06:36.792345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.578 qpair failed and we were unable to recover it. 00:28:27.578 [2024-12-11 10:06:36.792605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.578 [2024-12-11 10:06:36.792638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.578 qpair failed and we were unable to recover it. 00:28:27.578 [2024-12-11 10:06:36.792877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.578 [2024-12-11 10:06:36.792910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.578 qpair failed and we were unable to recover it. 00:28:27.578 [2024-12-11 10:06:36.793123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.578 [2024-12-11 10:06:36.793156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.578 qpair failed and we were unable to recover it. 00:28:27.578 [2024-12-11 10:06:36.793283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.578 [2024-12-11 10:06:36.793316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.578 qpair failed and we were unable to recover it. 00:28:27.578 [2024-12-11 10:06:36.793441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.578 [2024-12-11 10:06:36.793474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.578 qpair failed and we were unable to recover it. 00:28:27.578 [2024-12-11 10:06:36.793674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.578 [2024-12-11 10:06:36.793706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.578 qpair failed and we were unable to recover it. 
00:28:27.578 [2024-12-11 10:06:36.793887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.578 [2024-12-11 10:06:36.793919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.578 qpair failed and we were unable to recover it. 00:28:27.578 [2024-12-11 10:06:36.794135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.578 [2024-12-11 10:06:36.794168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.578 qpair failed and we were unable to recover it. 00:28:27.578 [2024-12-11 10:06:36.794363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.578 [2024-12-11 10:06:36.794403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.578 qpair failed and we were unable to recover it. 00:28:27.578 [2024-12-11 10:06:36.794667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.578 [2024-12-11 10:06:36.794699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.578 qpair failed and we were unable to recover it. 00:28:27.578 [2024-12-11 10:06:36.794999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.578 [2024-12-11 10:06:36.795031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.578 qpair failed and we were unable to recover it. 00:28:27.578 [2024-12-11 10:06:36.795237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.578 [2024-12-11 10:06:36.795272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.578 qpair failed and we were unable to recover it. 00:28:27.578 [2024-12-11 10:06:36.795517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.578 [2024-12-11 10:06:36.795549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.578 qpair failed and we were unable to recover it. 00:28:27.578 [2024-12-11 10:06:36.795668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.578 [2024-12-11 10:06:36.795701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.578 qpair failed and we were unable to recover it. 00:28:27.578 [2024-12-11 10:06:36.795885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.578 [2024-12-11 10:06:36.795918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.578 qpair failed and we were unable to recover it. 00:28:27.578 [2024-12-11 10:06:36.796050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.578 [2024-12-11 10:06:36.796083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.578 qpair failed and we were unable to recover it. 
00:28:27.578 [2024-12-11 10:06:36.796264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.578 [2024-12-11 10:06:36.796297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.578 qpair failed and we were unable to recover it. 00:28:27.578 [2024-12-11 10:06:36.796468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.578 [2024-12-11 10:06:36.796500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.578 qpair failed and we were unable to recover it. 00:28:27.578 [2024-12-11 10:06:36.796613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.578 [2024-12-11 10:06:36.796645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.578 qpair failed and we were unable to recover it. 00:28:27.578 [2024-12-11 10:06:36.796762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.578 [2024-12-11 10:06:36.796795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.578 qpair failed and we were unable to recover it. 00:28:27.578 [2024-12-11 10:06:36.797031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.578 [2024-12-11 10:06:36.797063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.578 qpair failed and we were unable to recover it. 00:28:27.578 [2024-12-11 10:06:36.797245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.578 [2024-12-11 10:06:36.797279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.578 qpair failed and we were unable to recover it. 00:28:27.578 [2024-12-11 10:06:36.797470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.578 [2024-12-11 10:06:36.797503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.578 qpair failed and we were unable to recover it. 00:28:27.578 [2024-12-11 10:06:36.797704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.578 [2024-12-11 10:06:36.797737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.578 qpair failed and we were unable to recover it. 00:28:27.578 [2024-12-11 10:06:36.797909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.578 [2024-12-11 10:06:36.797941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.578 qpair failed and we were unable to recover it. 00:28:27.578 [2024-12-11 10:06:36.798056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.578 [2024-12-11 10:06:36.798089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.578 qpair failed and we were unable to recover it. 
00:28:27.578 [2024-12-11 10:06:36.798190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.578 [2024-12-11 10:06:36.798232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.578 qpair failed and we were unable to recover it. 00:28:27.578 [2024-12-11 10:06:36.798428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.578 [2024-12-11 10:06:36.798461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.578 qpair failed and we were unable to recover it. 00:28:27.578 [2024-12-11 10:06:36.798705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.578 [2024-12-11 10:06:36.798738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.578 qpair failed and we were unable to recover it. 00:28:27.578 [2024-12-11 10:06:36.798979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.578 [2024-12-11 10:06:36.799011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.578 qpair failed and we were unable to recover it. 00:28:27.578 [2024-12-11 10:06:36.799131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.578 [2024-12-11 10:06:36.799163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.578 qpair failed and we were unable to recover it. 00:28:27.578 [2024-12-11 10:06:36.799344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.578 [2024-12-11 10:06:36.799376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.579 qpair failed and we were unable to recover it. 00:28:27.579 [2024-12-11 10:06:36.799549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.579 [2024-12-11 10:06:36.799581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.579 qpair failed and we were unable to recover it. 00:28:27.579 [2024-12-11 10:06:36.799768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.579 [2024-12-11 10:06:36.799800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.579 qpair failed and we were unable to recover it. 00:28:27.579 [2024-12-11 10:06:36.799938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.579 [2024-12-11 10:06:36.799971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.579 qpair failed and we were unable to recover it. 00:28:27.579 [2024-12-11 10:06:36.800108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.579 [2024-12-11 10:06:36.800141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.579 qpair failed and we were unable to recover it. 
00:28:27.579 [2024-12-11 10:06:36.800272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.579 [2024-12-11 10:06:36.800306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.579 qpair failed and we were unable to recover it. 00:28:27.579 [2024-12-11 10:06:36.800416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.579 [2024-12-11 10:06:36.800448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.579 qpair failed and we were unable to recover it. 00:28:27.579 [2024-12-11 10:06:36.800626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.579 [2024-12-11 10:06:36.800658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.579 qpair failed and we were unable to recover it. 00:28:27.579 [2024-12-11 10:06:36.800827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.579 [2024-12-11 10:06:36.800859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.579 qpair failed and we were unable to recover it. 00:28:27.579 [2024-12-11 10:06:36.801095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.579 [2024-12-11 10:06:36.801127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.579 qpair failed and we were unable to recover it. 00:28:27.579 [2024-12-11 10:06:36.801242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.579 [2024-12-11 10:06:36.801275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.579 qpair failed and we were unable to recover it. 00:28:27.579 [2024-12-11 10:06:36.801444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.579 [2024-12-11 10:06:36.801476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.579 qpair failed and we were unable to recover it. 00:28:27.579 [2024-12-11 10:06:36.801661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.579 [2024-12-11 10:06:36.801693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.579 qpair failed and we were unable to recover it. 00:28:27.579 [2024-12-11 10:06:36.801886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.579 [2024-12-11 10:06:36.801918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.579 qpair failed and we were unable to recover it. 00:28:27.579 [2024-12-11 10:06:36.802099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.579 [2024-12-11 10:06:36.802130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.579 qpair failed and we were unable to recover it. 
00:28:27.579 [2024-12-11 10:06:36.802390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.579 [2024-12-11 10:06:36.802423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.579 qpair failed and we were unable to recover it. 00:28:27.579 [2024-12-11 10:06:36.802683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.579 [2024-12-11 10:06:36.802715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.579 qpair failed and we were unable to recover it. 00:28:27.579 [2024-12-11 10:06:36.802907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.579 [2024-12-11 10:06:36.802944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.579 qpair failed and we were unable to recover it. 00:28:27.579 [2024-12-11 10:06:36.803151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.579 [2024-12-11 10:06:36.803183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.579 qpair failed and we were unable to recover it. 00:28:27.579 [2024-12-11 10:06:36.803446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.579 [2024-12-11 10:06:36.803482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.579 qpair failed and we were unable to recover it. 00:28:27.579 [2024-12-11 10:06:36.803621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.579 [2024-12-11 10:06:36.803653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.579 qpair failed and we were unable to recover it. 00:28:27.579 [2024-12-11 10:06:36.803834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.579 [2024-12-11 10:06:36.803866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.579 qpair failed and we were unable to recover it. 00:28:27.579 [2024-12-11 10:06:36.804108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.579 [2024-12-11 10:06:36.804140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.579 qpair failed and we were unable to recover it. 00:28:27.579 [2024-12-11 10:06:36.804394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.579 [2024-12-11 10:06:36.804427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.579 qpair failed and we were unable to recover it. 00:28:27.579 [2024-12-11 10:06:36.804550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.579 [2024-12-11 10:06:36.804583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.579 qpair failed and we were unable to recover it. 
00:28:27.579 [2024-12-11 10:06:36.804771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.579 [2024-12-11 10:06:36.804803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.579 qpair failed and we were unable to recover it. 00:28:27.579 [2024-12-11 10:06:36.805014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.579 [2024-12-11 10:06:36.805046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.579 qpair failed and we were unable to recover it. 00:28:27.579 [2024-12-11 10:06:36.805238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.579 [2024-12-11 10:06:36.805271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.579 qpair failed and we were unable to recover it. 00:28:27.579 [2024-12-11 10:06:36.805443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.579 [2024-12-11 10:06:36.805475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.579 qpair failed and we were unable to recover it. 00:28:27.579 [2024-12-11 10:06:36.805610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.579 [2024-12-11 10:06:36.805642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.579 qpair failed and we were unable to recover it. 00:28:27.579 [2024-12-11 10:06:36.805834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.579 [2024-12-11 10:06:36.805866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.579 qpair failed and we were unable to recover it. 00:28:27.579 [2024-12-11 10:06:36.806051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.579 [2024-12-11 10:06:36.806084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.579 qpair failed and we were unable to recover it. 00:28:27.579 [2024-12-11 10:06:36.806274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.579 [2024-12-11 10:06:36.806307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.579 qpair failed and we were unable to recover it. 00:28:27.579 [2024-12-11 10:06:36.806415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.579 [2024-12-11 10:06:36.806447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.579 qpair failed and we were unable to recover it. 00:28:27.579 [2024-12-11 10:06:36.806650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.580 [2024-12-11 10:06:36.806682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.580 qpair failed and we were unable to recover it. 
00:28:27.580 [2024-12-11 10:06:36.806874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.580 [2024-12-11 10:06:36.806905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.580 qpair failed and we were unable to recover it. 00:28:27.580 [2024-12-11 10:06:36.807081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.580 [2024-12-11 10:06:36.807112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.580 qpair failed and we were unable to recover it. 00:28:27.580 [2024-12-11 10:06:36.807258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.580 [2024-12-11 10:06:36.807291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.580 qpair failed and we were unable to recover it. 00:28:27.580 [2024-12-11 10:06:36.807478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.580 [2024-12-11 10:06:36.807509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.580 qpair failed and we were unable to recover it. 00:28:27.580 [2024-12-11 10:06:36.807688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.580 [2024-12-11 10:06:36.807720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.580 qpair failed and we were unable to recover it. 00:28:27.580 [2024-12-11 10:06:36.808026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.580 [2024-12-11 10:06:36.808058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.580 qpair failed and we were unable to recover it. 00:28:27.580 [2024-12-11 10:06:36.808263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.580 [2024-12-11 10:06:36.808297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.580 qpair failed and we were unable to recover it. 00:28:27.580 [2024-12-11 10:06:36.808401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.580 [2024-12-11 10:06:36.808433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.580 qpair failed and we were unable to recover it. 00:28:27.580 [2024-12-11 10:06:36.808616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.580 [2024-12-11 10:06:36.808649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.580 qpair failed and we were unable to recover it. 00:28:27.580 [2024-12-11 10:06:36.808774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.580 [2024-12-11 10:06:36.808807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.580 qpair failed and we were unable to recover it. 
00:28:27.580 [2024-12-11 10:06:36.808934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.580 [2024-12-11 10:06:36.808966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.580 qpair failed and we were unable to recover it. 00:28:27.580 [2024-12-11 10:06:36.809159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.580 [2024-12-11 10:06:36.809191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.580 qpair failed and we were unable to recover it. 00:28:27.580 [2024-12-11 10:06:36.809506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.580 [2024-12-11 10:06:36.809539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.580 qpair failed and we were unable to recover it. 00:28:27.580 [2024-12-11 10:06:36.809719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.580 [2024-12-11 10:06:36.809752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.580 qpair failed and we were unable to recover it. 00:28:27.580 [2024-12-11 10:06:36.809936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.580 [2024-12-11 10:06:36.809968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.580 qpair failed and we were unable to recover it. 00:28:27.580 [2024-12-11 10:06:36.810145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.580 [2024-12-11 10:06:36.810177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.580 qpair failed and we were unable to recover it. 00:28:27.580 [2024-12-11 10:06:36.810424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.580 [2024-12-11 10:06:36.810458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.580 qpair failed and we were unable to recover it. 00:28:27.580 [2024-12-11 10:06:36.810643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.580 [2024-12-11 10:06:36.810676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.580 qpair failed and we were unable to recover it. 00:28:27.580 [2024-12-11 10:06:36.810793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.580 [2024-12-11 10:06:36.810825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.580 qpair failed and we were unable to recover it. 00:28:27.580 [2024-12-11 10:06:36.810939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.580 [2024-12-11 10:06:36.810972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.580 qpair failed and we were unable to recover it. 
00:28:27.580 [2024-12-11 10:06:36.811144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.580 [2024-12-11 10:06:36.811176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.580 qpair failed and we were unable to recover it. 00:28:27.580 [2024-12-11 10:06:36.811391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.580 [2024-12-11 10:06:36.811425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.580 qpair failed and we were unable to recover it. 00:28:27.580 [2024-12-11 10:06:36.811682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.580 [2024-12-11 10:06:36.811720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.580 qpair failed and we were unable to recover it. 00:28:27.580 [2024-12-11 10:06:36.811963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.580 [2024-12-11 10:06:36.811995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.580 qpair failed and we were unable to recover it. 00:28:27.580 [2024-12-11 10:06:36.812286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.580 [2024-12-11 10:06:36.812320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.580 qpair failed and we were unable to recover it. 00:28:27.580 [2024-12-11 10:06:36.812453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.580 [2024-12-11 10:06:36.812486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.580 qpair failed and we were unable to recover it. 00:28:27.580 [2024-12-11 10:06:36.812619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.580 [2024-12-11 10:06:36.812651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.580 qpair failed and we were unable to recover it. 00:28:27.580 [2024-12-11 10:06:36.812861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.580 [2024-12-11 10:06:36.812894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.580 qpair failed and we were unable to recover it. 00:28:27.580 [2024-12-11 10:06:36.813073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.580 [2024-12-11 10:06:36.813105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.580 qpair failed and we were unable to recover it. 00:28:27.580 [2024-12-11 10:06:36.813295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.580 [2024-12-11 10:06:36.813327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.580 qpair failed and we were unable to recover it. 
00:28:27.580 [2024-12-11 10:06:36.813444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.580 [2024-12-11 10:06:36.813476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.580 qpair failed and we were unable to recover it. 00:28:27.580 [2024-12-11 10:06:36.813587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.580 [2024-12-11 10:06:36.813619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.580 qpair failed and we were unable to recover it. 00:28:27.580 [2024-12-11 10:06:36.813799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.580 [2024-12-11 10:06:36.813831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.580 qpair failed and we were unable to recover it. 00:28:27.580 [2024-12-11 10:06:36.813948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.580 [2024-12-11 10:06:36.813981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.580 qpair failed and we were unable to recover it. 00:28:27.580 [2024-12-11 10:06:36.814229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.580 [2024-12-11 10:06:36.814262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.580 qpair failed and we were unable to recover it. 00:28:27.580 [2024-12-11 10:06:36.814402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.580 [2024-12-11 10:06:36.814433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.580 qpair failed and we were unable to recover it. 00:28:27.580 [2024-12-11 10:06:36.814570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.581 [2024-12-11 10:06:36.814602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.581 qpair failed and we were unable to recover it. 00:28:27.581 [2024-12-11 10:06:36.814798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.581 [2024-12-11 10:06:36.814831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.581 qpair failed and we were unable to recover it. 00:28:27.581 [2024-12-11 10:06:36.814953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.581 [2024-12-11 10:06:36.814984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.581 qpair failed and we were unable to recover it. 00:28:27.581 [2024-12-11 10:06:36.815187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.581 [2024-12-11 10:06:36.815226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.581 qpair failed and we were unable to recover it. 
00:28:27.581 [2024-12-11 10:06:36.815435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.581 [2024-12-11 10:06:36.815469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.581 qpair failed and we were unable to recover it. 00:28:27.581 [2024-12-11 10:06:36.815706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.581 [2024-12-11 10:06:36.815738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.581 qpair failed and we were unable to recover it. 00:28:27.581 [2024-12-11 10:06:36.815915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.581 [2024-12-11 10:06:36.815947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.581 qpair failed and we were unable to recover it. 00:28:27.581 [2024-12-11 10:06:36.816144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.581 [2024-12-11 10:06:36.816176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.581 qpair failed and we were unable to recover it. 00:28:27.581 [2024-12-11 10:06:36.816385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.581 [2024-12-11 10:06:36.816419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.581 qpair failed and we were unable to recover it. 00:28:27.581 [2024-12-11 10:06:36.816604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.581 [2024-12-11 10:06:36.816636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.581 qpair failed and we were unable to recover it. 00:28:27.581 [2024-12-11 10:06:36.816872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.581 [2024-12-11 10:06:36.816905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.581 qpair failed and we were unable to recover it. 00:28:27.581 [2024-12-11 10:06:36.817009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.581 [2024-12-11 10:06:36.817042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.581 qpair failed and we were unable to recover it. 00:28:27.581 [2024-12-11 10:06:36.817303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.581 [2024-12-11 10:06:36.817337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.581 qpair failed and we were unable to recover it. 00:28:27.581 [2024-12-11 10:06:36.817466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.581 [2024-12-11 10:06:36.817499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.581 qpair failed and we were unable to recover it. 
00:28:27.581 [2024-12-11 10:06:36.817761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.581 [2024-12-11 10:06:36.817793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.581 qpair failed and we were unable to recover it.
00:28:27.581 [... identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplets repeat for tqpair=0x7fb62c000b90 through 10:06:36.834765 ...]
00:28:27.583 [2024-12-11 10:06:36.835011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.583 [2024-12-11 10:06:36.835083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:27.583 qpair failed and we were unable to recover it.
00:28:27.583 [... the same triplet repeats for tqpair=0x7fb630000b90 through 10:06:36.843597 ...]
00:28:27.584 [2024-12-11 10:06:36.843909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.584 [2024-12-11 10:06:36.843978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.584 qpair failed and we were unable to recover it.
00:28:27.584 [... the same triplet repeats for tqpair=0xfe4500 ...]
00:28:27.586 [2024-12-11 10:06:36.864239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.586 [2024-12-11 10:06:36.864274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.586 qpair failed and we were unable to recover it.
00:28:27.586 [2024-12-11 10:06:36.864495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.586 [2024-12-11 10:06:36.864532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.586 qpair failed and we were unable to recover it. 00:28:27.586 [2024-12-11 10:06:36.864771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.586 [2024-12-11 10:06:36.864804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.586 qpair failed and we were unable to recover it. 00:28:27.586 [2024-12-11 10:06:36.864984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.586 [2024-12-11 10:06:36.865017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.586 qpair failed and we were unable to recover it. 00:28:27.587 [2024-12-11 10:06:36.865214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.587 [2024-12-11 10:06:36.865271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.587 qpair failed and we were unable to recover it. 00:28:27.587 [2024-12-11 10:06:36.865406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.587 [2024-12-11 10:06:36.865446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.587 qpair failed and we were unable to recover it. 00:28:27.587 [2024-12-11 10:06:36.865688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.587 [2024-12-11 10:06:36.865720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.587 qpair failed and we were unable to recover it. 00:28:27.587 [2024-12-11 10:06:36.865846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.587 [2024-12-11 10:06:36.865879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.587 qpair failed and we were unable to recover it. 00:28:27.587 [2024-12-11 10:06:36.866088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.587 [2024-12-11 10:06:36.866121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.587 qpair failed and we were unable to recover it. 00:28:27.587 [2024-12-11 10:06:36.866326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.587 [2024-12-11 10:06:36.866361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.587 qpair failed and we were unable to recover it. 00:28:27.587 [2024-12-11 10:06:36.866539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.587 [2024-12-11 10:06:36.866573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.587 qpair failed and we were unable to recover it. 
00:28:27.587 [2024-12-11 10:06:36.866781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.587 [2024-12-11 10:06:36.866815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.587 qpair failed and we were unable to recover it. 00:28:27.587 [2024-12-11 10:06:36.866949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.587 [2024-12-11 10:06:36.866983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.587 qpair failed and we were unable to recover it. 00:28:27.587 [2024-12-11 10:06:36.867175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.587 [2024-12-11 10:06:36.867208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.587 qpair failed and we were unable to recover it. 00:28:27.587 [2024-12-11 10:06:36.867479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.587 [2024-12-11 10:06:36.867513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.587 qpair failed and we were unable to recover it. 00:28:27.587 [2024-12-11 10:06:36.867778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.587 [2024-12-11 10:06:36.867810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.587 qpair failed and we were unable to recover it. 00:28:27.587 [2024-12-11 10:06:36.868085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.587 [2024-12-11 10:06:36.868118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.587 qpair failed and we were unable to recover it. 00:28:27.587 [2024-12-11 10:06:36.868307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.587 [2024-12-11 10:06:36.868342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.587 qpair failed and we were unable to recover it. 00:28:27.587 [2024-12-11 10:06:36.868530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.587 [2024-12-11 10:06:36.868570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.587 qpair failed and we were unable to recover it. 00:28:27.587 [2024-12-11 10:06:36.868753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.587 [2024-12-11 10:06:36.868826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.587 qpair failed and we were unable to recover it. 00:28:27.587 [2024-12-11 10:06:36.869021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.587 [2024-12-11 10:06:36.869057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.587 qpair failed and we were unable to recover it. 
[... the same connect() failed (errno = 111) / "qpair failed and we were unable to recover it." pair repeats for tqpair=0x7fb638000b90 from 10:06:36.869021 through 10:06:36.903513; 151 further occurrences omitted ...]
00:28:27.591 [2024-12-11 10:06:36.903631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.591 [2024-12-11 10:06:36.903663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.591 qpair failed and we were unable to recover it. 00:28:27.591 [2024-12-11 10:06:36.903846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.591 [2024-12-11 10:06:36.903881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.591 qpair failed and we were unable to recover it. 00:28:27.591 [2024-12-11 10:06:36.904013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.591 [2024-12-11 10:06:36.904047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.591 qpair failed and we were unable to recover it. 00:28:27.591 [2024-12-11 10:06:36.904266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.591 [2024-12-11 10:06:36.904302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.591 qpair failed and we were unable to recover it. 00:28:27.591 [2024-12-11 10:06:36.904424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.591 [2024-12-11 10:06:36.904458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.591 qpair failed and we were unable to recover it. 00:28:27.591 [2024-12-11 10:06:36.904635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.591 [2024-12-11 10:06:36.904668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.591 qpair failed and we were unable to recover it. 00:28:27.591 [2024-12-11 10:06:36.904841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.591 [2024-12-11 10:06:36.904874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.591 qpair failed and we were unable to recover it. 00:28:27.591 [2024-12-11 10:06:36.905056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.591 [2024-12-11 10:06:36.905089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.591 qpair failed and we were unable to recover it. 00:28:27.591 [2024-12-11 10:06:36.905273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.591 [2024-12-11 10:06:36.905307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.591 qpair failed and we were unable to recover it. 00:28:27.591 [2024-12-11 10:06:36.905477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.591 [2024-12-11 10:06:36.905510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.591 qpair failed and we were unable to recover it. 
00:28:27.591 [2024-12-11 10:06:36.905685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.591 [2024-12-11 10:06:36.905718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.591 qpair failed and we were unable to recover it. 00:28:27.591 [2024-12-11 10:06:36.905836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.591 [2024-12-11 10:06:36.905868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.592 qpair failed and we were unable to recover it. 00:28:27.592 [2024-12-11 10:06:36.905976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.592 [2024-12-11 10:06:36.906009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.592 qpair failed and we were unable to recover it. 00:28:27.592 [2024-12-11 10:06:36.906126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.592 [2024-12-11 10:06:36.906160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.592 qpair failed and we were unable to recover it. 00:28:27.592 [2024-12-11 10:06:36.906408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.592 [2024-12-11 10:06:36.906442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.592 qpair failed and we were unable to recover it. 00:28:27.592 [2024-12-11 10:06:36.906584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.592 [2024-12-11 10:06:36.906617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.592 qpair failed and we were unable to recover it. 00:28:27.592 [2024-12-11 10:06:36.906736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.592 [2024-12-11 10:06:36.906769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.592 qpair failed and we were unable to recover it. 00:28:27.592 [2024-12-11 10:06:36.906950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.592 [2024-12-11 10:06:36.906983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.592 qpair failed and we were unable to recover it. 00:28:27.592 [2024-12-11 10:06:36.907120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.592 [2024-12-11 10:06:36.907153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.592 qpair failed and we were unable to recover it. 00:28:27.592 [2024-12-11 10:06:36.907263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.592 [2024-12-11 10:06:36.907296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.592 qpair failed and we were unable to recover it. 
00:28:27.592 [2024-12-11 10:06:36.907415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.592 [2024-12-11 10:06:36.907448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.592 qpair failed and we were unable to recover it. 00:28:27.592 [2024-12-11 10:06:36.907625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.592 [2024-12-11 10:06:36.907663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.592 qpair failed and we were unable to recover it. 00:28:27.592 [2024-12-11 10:06:36.907803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.592 [2024-12-11 10:06:36.907836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.592 qpair failed and we were unable to recover it. 00:28:27.592 [2024-12-11 10:06:36.907938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.592 [2024-12-11 10:06:36.907970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.592 qpair failed and we were unable to recover it. 00:28:27.592 [2024-12-11 10:06:36.908080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.592 [2024-12-11 10:06:36.908113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.592 qpair failed and we were unable to recover it. 00:28:27.592 [2024-12-11 10:06:36.908237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.592 [2024-12-11 10:06:36.908272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.592 qpair failed and we were unable to recover it. 00:28:27.592 [2024-12-11 10:06:36.908464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.592 [2024-12-11 10:06:36.908496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.592 qpair failed and we were unable to recover it. 00:28:27.592 [2024-12-11 10:06:36.908609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.592 [2024-12-11 10:06:36.908641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.592 qpair failed and we were unable to recover it. 00:28:27.592 [2024-12-11 10:06:36.908819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.592 [2024-12-11 10:06:36.908852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.592 qpair failed and we were unable to recover it. 00:28:27.592 [2024-12-11 10:06:36.909027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.592 [2024-12-11 10:06:36.909060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.592 qpair failed and we were unable to recover it. 
00:28:27.592 [2024-12-11 10:06:36.909167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.592 [2024-12-11 10:06:36.909200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.592 qpair failed and we were unable to recover it. 00:28:27.592 [2024-12-11 10:06:36.909399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.592 [2024-12-11 10:06:36.909432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.592 qpair failed and we were unable to recover it. 00:28:27.592 [2024-12-11 10:06:36.909602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.592 [2024-12-11 10:06:36.909634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.592 qpair failed and we were unable to recover it. 00:28:27.592 [2024-12-11 10:06:36.909806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.592 [2024-12-11 10:06:36.909838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.592 qpair failed and we were unable to recover it. 00:28:27.592 [2024-12-11 10:06:36.910099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.592 [2024-12-11 10:06:36.910132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.592 qpair failed and we were unable to recover it. 00:28:27.592 [2024-12-11 10:06:36.910259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.592 [2024-12-11 10:06:36.910293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.592 qpair failed and we were unable to recover it. 00:28:27.592 [2024-12-11 10:06:36.910479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.592 [2024-12-11 10:06:36.910512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.592 qpair failed and we were unable to recover it. 00:28:27.592 [2024-12-11 10:06:36.910685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.592 [2024-12-11 10:06:36.910718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.592 qpair failed and we were unable to recover it. 00:28:27.592 [2024-12-11 10:06:36.910843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.592 [2024-12-11 10:06:36.910875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.592 qpair failed and we were unable to recover it. 00:28:27.592 [2024-12-11 10:06:36.910988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.592 [2024-12-11 10:06:36.911021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.592 qpair failed and we were unable to recover it. 
00:28:27.592 [2024-12-11 10:06:36.911133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.592 [2024-12-11 10:06:36.911165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.592 qpair failed and we were unable to recover it. 00:28:27.592 [2024-12-11 10:06:36.911286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.592 [2024-12-11 10:06:36.911319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.592 qpair failed and we were unable to recover it. 00:28:27.592 [2024-12-11 10:06:36.911517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.592 [2024-12-11 10:06:36.911551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.592 qpair failed and we were unable to recover it. 00:28:27.592 [2024-12-11 10:06:36.911675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.592 [2024-12-11 10:06:36.911708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.592 qpair failed and we were unable to recover it. 00:28:27.592 [2024-12-11 10:06:36.911895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.592 [2024-12-11 10:06:36.911928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.592 qpair failed and we were unable to recover it. 00:28:27.592 [2024-12-11 10:06:36.912180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.592 [2024-12-11 10:06:36.912213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.592 qpair failed and we were unable to recover it. 00:28:27.592 [2024-12-11 10:06:36.912407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.592 [2024-12-11 10:06:36.912440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.592 qpair failed and we were unable to recover it. 00:28:27.592 [2024-12-11 10:06:36.912614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.592 [2024-12-11 10:06:36.912648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.592 qpair failed and we were unable to recover it. 00:28:27.592 [2024-12-11 10:06:36.912832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.592 [2024-12-11 10:06:36.912865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.592 qpair failed and we were unable to recover it. 00:28:27.592 [2024-12-11 10:06:36.912982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.592 [2024-12-11 10:06:36.913015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.592 qpair failed and we were unable to recover it. 
00:28:27.592 [2024-12-11 10:06:36.913210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.593 [2024-12-11 10:06:36.913268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.593 qpair failed and we were unable to recover it. 00:28:27.593 [2024-12-11 10:06:36.913442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.593 [2024-12-11 10:06:36.913474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.593 qpair failed and we were unable to recover it. 00:28:27.593 [2024-12-11 10:06:36.913605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.593 [2024-12-11 10:06:36.913638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.593 qpair failed and we were unable to recover it. 00:28:27.593 [2024-12-11 10:06:36.913813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.593 [2024-12-11 10:06:36.913846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.593 qpair failed and we were unable to recover it. 00:28:27.593 [2024-12-11 10:06:36.914035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.593 [2024-12-11 10:06:36.914067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.593 qpair failed and we were unable to recover it. 00:28:27.593 [2024-12-11 10:06:36.914193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.593 [2024-12-11 10:06:36.914238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.593 qpair failed and we were unable to recover it. 00:28:27.593 [2024-12-11 10:06:36.914443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.593 [2024-12-11 10:06:36.914476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.593 qpair failed and we were unable to recover it. 00:28:27.593 [2024-12-11 10:06:36.914683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.593 [2024-12-11 10:06:36.914715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.593 qpair failed and we were unable to recover it. 00:28:27.593 [2024-12-11 10:06:36.914982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.593 [2024-12-11 10:06:36.915015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.593 qpair failed and we were unable to recover it. 00:28:27.593 [2024-12-11 10:06:36.915187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.593 [2024-12-11 10:06:36.915230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.593 qpair failed and we were unable to recover it. 
00:28:27.593 [2024-12-11 10:06:36.915365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.593 [2024-12-11 10:06:36.915397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.593 qpair failed and we were unable to recover it. 00:28:27.593 [2024-12-11 10:06:36.915534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.593 [2024-12-11 10:06:36.915567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.593 qpair failed and we were unable to recover it. 00:28:27.593 [2024-12-11 10:06:36.915701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.593 [2024-12-11 10:06:36.915735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.593 qpair failed and we were unable to recover it. 00:28:27.593 [2024-12-11 10:06:36.915909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.593 [2024-12-11 10:06:36.915942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.593 qpair failed and we were unable to recover it. 00:28:27.593 [2024-12-11 10:06:36.916069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.593 [2024-12-11 10:06:36.916103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.593 qpair failed and we were unable to recover it. 00:28:27.593 [2024-12-11 10:06:36.916287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.593 [2024-12-11 10:06:36.916321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.593 qpair failed and we were unable to recover it. 00:28:27.593 [2024-12-11 10:06:36.916461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.593 [2024-12-11 10:06:36.916494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.593 qpair failed and we were unable to recover it. 00:28:27.593 [2024-12-11 10:06:36.916672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.593 [2024-12-11 10:06:36.916705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.593 qpair failed and we were unable to recover it. 00:28:27.593 [2024-12-11 10:06:36.916832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.593 [2024-12-11 10:06:36.916865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.593 qpair failed and we were unable to recover it. 00:28:27.593 [2024-12-11 10:06:36.916979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.593 [2024-12-11 10:06:36.917012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.593 qpair failed and we were unable to recover it. 
00:28:27.593 [2024-12-11 10:06:36.917206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.593 [2024-12-11 10:06:36.917249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.593 qpair failed and we were unable to recover it. 00:28:27.593 [2024-12-11 10:06:36.917508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.593 [2024-12-11 10:06:36.917541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.593 qpair failed and we were unable to recover it. 00:28:27.593 [2024-12-11 10:06:36.917661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.593 [2024-12-11 10:06:36.917694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.593 qpair failed and we were unable to recover it. 00:28:27.593 [2024-12-11 10:06:36.917810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.593 [2024-12-11 10:06:36.917843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.593 qpair failed and we were unable to recover it. 00:28:27.593 [2024-12-11 10:06:36.918016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.593 [2024-12-11 10:06:36.918049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.593 qpair failed and we were unable to recover it. 00:28:27.593 [2024-12-11 10:06:36.918180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.593 [2024-12-11 10:06:36.918213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.593 qpair failed and we were unable to recover it. 00:28:27.593 [2024-12-11 10:06:36.918396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.593 [2024-12-11 10:06:36.918430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.593 qpair failed and we were unable to recover it. 00:28:27.593 [2024-12-11 10:06:36.918651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.593 [2024-12-11 10:06:36.918683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.593 qpair failed and we were unable to recover it. 00:28:27.593 [2024-12-11 10:06:36.918820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.593 [2024-12-11 10:06:36.918853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.593 qpair failed and we were unable to recover it. 00:28:27.593 [2024-12-11 10:06:36.919039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.593 [2024-12-11 10:06:36.919072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.593 qpair failed and we were unable to recover it. 
00:28:27.593 [2024-12-11 10:06:36.919188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.593 [2024-12-11 10:06:36.919228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.593 qpair failed and we were unable to recover it. 00:28:27.593 [2024-12-11 10:06:36.919422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.593 [2024-12-11 10:06:36.919454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.593 qpair failed and we were unable to recover it. 00:28:27.593 [2024-12-11 10:06:36.919649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.593 [2024-12-11 10:06:36.919682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.593 qpair failed and we were unable to recover it. 00:28:27.593 [2024-12-11 10:06:36.919945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.593 [2024-12-11 10:06:36.919978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.593 qpair failed and we were unable to recover it. 00:28:27.594 [2024-12-11 10:06:36.920099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.594 [2024-12-11 10:06:36.920133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.594 qpair failed and we were unable to recover it. 00:28:27.594 [2024-12-11 10:06:36.920312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.594 [2024-12-11 10:06:36.920347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.594 qpair failed and we were unable to recover it. 00:28:27.594 [2024-12-11 10:06:36.920452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.594 [2024-12-11 10:06:36.920484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.594 qpair failed and we were unable to recover it. 00:28:27.594 [2024-12-11 10:06:36.920669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.594 [2024-12-11 10:06:36.920703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.594 qpair failed and we were unable to recover it. 00:28:27.594 [2024-12-11 10:06:36.920809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.594 [2024-12-11 10:06:36.920848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.594 qpair failed and we were unable to recover it. 00:28:27.594 [2024-12-11 10:06:36.921084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.594 [2024-12-11 10:06:36.921116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.594 qpair failed and we were unable to recover it. 
00:28:27.594 [2024-12-11 10:06:36.921323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.594 [2024-12-11 10:06:36.921358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.594 qpair failed and we were unable to recover it. 00:28:27.594 [2024-12-11 10:06:36.921537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.594 [2024-12-11 10:06:36.921570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.594 qpair failed and we were unable to recover it. 00:28:27.594 [2024-12-11 10:06:36.921684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.594 [2024-12-11 10:06:36.921718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.594 qpair failed and we were unable to recover it. 00:28:27.594 [2024-12-11 10:06:36.921832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.594 [2024-12-11 10:06:36.921865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.594 qpair failed and we were unable to recover it. 00:28:27.594 [2024-12-11 10:06:36.922060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.594 [2024-12-11 10:06:36.922093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.594 qpair failed and we were unable to recover it. 00:28:27.594 [2024-12-11 10:06:36.922263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.594 [2024-12-11 10:06:36.922298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.594 qpair failed and we were unable to recover it. 00:28:27.594 [2024-12-11 10:06:36.922488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.594 [2024-12-11 10:06:36.922520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.594 qpair failed and we were unable to recover it. 00:28:27.594 [2024-12-11 10:06:36.922705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.594 [2024-12-11 10:06:36.922738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.594 qpair failed and we were unable to recover it. 00:28:27.594 [2024-12-11 10:06:36.922926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.594 [2024-12-11 10:06:36.922959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.594 qpair failed and we were unable to recover it. 00:28:27.594 [2024-12-11 10:06:36.923142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.594 [2024-12-11 10:06:36.923175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.594 qpair failed and we were unable to recover it. 
00:28:27.594 [2024-12-11 10:06:36.923304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.594 [2024-12-11 10:06:36.923339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.594 qpair failed and we were unable to recover it. 00:28:27.594 [2024-12-11 10:06:36.923443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.594 [2024-12-11 10:06:36.923475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.594 qpair failed and we were unable to recover it. 00:28:27.594 [2024-12-11 10:06:36.923729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.594 [2024-12-11 10:06:36.923763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.594 qpair failed and we were unable to recover it. 00:28:27.594 [2024-12-11 10:06:36.924029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.594 [2024-12-11 10:06:36.924062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.594 qpair failed and we were unable to recover it. 00:28:27.594 [2024-12-11 10:06:36.924267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.594 [2024-12-11 10:06:36.924301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.594 qpair failed and we were unable to recover it. 00:28:27.594 [2024-12-11 10:06:36.924541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.594 [2024-12-11 10:06:36.924574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.594 qpair failed and we were unable to recover it. 00:28:27.594 [2024-12-11 10:06:36.924813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.594 [2024-12-11 10:06:36.924846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.594 qpair failed and we were unable to recover it. 00:28:27.594 [2024-12-11 10:06:36.925026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.594 [2024-12-11 10:06:36.925059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.594 qpair failed and we were unable to recover it. 00:28:27.594 [2024-12-11 10:06:36.925324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.594 [2024-12-11 10:06:36.925362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.594 qpair failed and we were unable to recover it. 00:28:27.594 [2024-12-11 10:06:36.925548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.594 [2024-12-11 10:06:36.925580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.594 qpair failed and we were unable to recover it. 
00:28:27.594 [2024-12-11 10:06:36.925697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.594 [2024-12-11 10:06:36.925729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.594 qpair failed and we were unable to recover it. 00:28:27.594 [2024-12-11 10:06:36.925900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.594 [2024-12-11 10:06:36.925933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.594 qpair failed and we were unable to recover it. 00:28:27.594 [2024-12-11 10:06:36.926050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.594 [2024-12-11 10:06:36.926083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.594 qpair failed and we were unable to recover it. 00:28:27.594 [2024-12-11 10:06:36.926188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.594 [2024-12-11 10:06:36.926230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.594 qpair failed and we were unable to recover it. 00:28:27.594 [2024-12-11 10:06:36.926405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.594 [2024-12-11 10:06:36.926438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.594 qpair failed and we were unable to recover it. 00:28:27.594 [2024-12-11 10:06:36.926633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.594 [2024-12-11 10:06:36.926667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.594 qpair failed and we were unable to recover it. 00:28:27.594 [2024-12-11 10:06:36.926861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.594 [2024-12-11 10:06:36.926894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.594 qpair failed and we were unable to recover it. 00:28:27.594 [2024-12-11 10:06:36.927081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.594 [2024-12-11 10:06:36.927113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.594 qpair failed and we were unable to recover it. 00:28:27.594 [2024-12-11 10:06:36.927235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.594 [2024-12-11 10:06:36.927269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.594 qpair failed and we were unable to recover it. 00:28:27.594 [2024-12-11 10:06:36.927452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.594 [2024-12-11 10:06:36.927484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.594 qpair failed and we were unable to recover it. 
00:28:27.594 [2024-12-11 10:06:36.927592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.594 [2024-12-11 10:06:36.927624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.594 qpair failed and we were unable to recover it. 00:28:27.594 [2024-12-11 10:06:36.927859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.594 [2024-12-11 10:06:36.927893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.594 qpair failed and we were unable to recover it. 00:28:27.594 [2024-12-11 10:06:36.927996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.594 [2024-12-11 10:06:36.928028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.594 qpair failed and we were unable to recover it. 00:28:27.595 [2024-12-11 10:06:36.928211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.595 [2024-12-11 10:06:36.928257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.595 qpair failed and we were unable to recover it. 00:28:27.595 [2024-12-11 10:06:36.928477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.595 [2024-12-11 10:06:36.928510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.595 qpair failed and we were unable to recover it. 00:28:27.595 [2024-12-11 10:06:36.928637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.595 [2024-12-11 10:06:36.928670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.595 qpair failed and we were unable to recover it. 00:28:27.595 [2024-12-11 10:06:36.928804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.595 [2024-12-11 10:06:36.928837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.595 qpair failed and we were unable to recover it. 00:28:27.595 [2024-12-11 10:06:36.929120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.595 [2024-12-11 10:06:36.929153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.595 qpair failed and we were unable to recover it. 00:28:27.595 [2024-12-11 10:06:36.929436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.595 [2024-12-11 10:06:36.929477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.595 qpair failed and we were unable to recover it. 00:28:27.595 [2024-12-11 10:06:36.929659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.595 [2024-12-11 10:06:36.929692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.595 qpair failed and we were unable to recover it. 
00:28:27.595 [2024-12-11 10:06:36.929820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.595 [2024-12-11 10:06:36.929853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.595 qpair failed and we were unable to recover it.
00:28:27.600 [... identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error pairs for tqpair=0x7fb638000b90 (addr=10.0.0.2, port=4420) repeat continuously from 2024-12-11 10:06:36.929820 through 10:06:36.975334, each ending with "qpair failed and we were unable to recover it." ...]
00:28:27.600 [2024-12-11 10:06:36.975506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.600 [2024-12-11 10:06:36.975539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.600 qpair failed and we were unable to recover it. 00:28:27.600 [2024-12-11 10:06:36.975659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.600 [2024-12-11 10:06:36.975691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.600 qpair failed and we were unable to recover it. 00:28:27.600 [2024-12-11 10:06:36.975870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.600 [2024-12-11 10:06:36.975903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.600 qpair failed and we were unable to recover it. 00:28:27.600 [2024-12-11 10:06:36.976019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.600 [2024-12-11 10:06:36.976052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.600 qpair failed and we were unable to recover it. 00:28:27.600 [2024-12-11 10:06:36.976234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.600 [2024-12-11 10:06:36.976269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.600 qpair failed and we were unable to recover it. 00:28:27.600 [2024-12-11 10:06:36.976541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.600 [2024-12-11 10:06:36.976574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.600 qpair failed and we were unable to recover it. 00:28:27.600 [2024-12-11 10:06:36.976704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.600 [2024-12-11 10:06:36.976736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.600 qpair failed and we were unable to recover it. 00:28:27.600 [2024-12-11 10:06:36.976855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.600 [2024-12-11 10:06:36.976887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.600 qpair failed and we were unable to recover it. 00:28:27.600 [2024-12-11 10:06:36.977028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.600 [2024-12-11 10:06:36.977061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.600 qpair failed and we were unable to recover it. 00:28:27.600 [2024-12-11 10:06:36.977306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.600 [2024-12-11 10:06:36.977344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.600 qpair failed and we were unable to recover it. 
00:28:27.600 [2024-12-11 10:06:36.977551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.600 [2024-12-11 10:06:36.977585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.600 qpair failed and we were unable to recover it. 00:28:27.601 [2024-12-11 10:06:36.977752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.601 [2024-12-11 10:06:36.977785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.601 qpair failed and we were unable to recover it. 00:28:27.601 [2024-12-11 10:06:36.977967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.601 [2024-12-11 10:06:36.977999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.601 qpair failed and we were unable to recover it. 00:28:27.601 [2024-12-11 10:06:36.978232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.601 [2024-12-11 10:06:36.978266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.601 qpair failed and we were unable to recover it. 00:28:27.601 [2024-12-11 10:06:36.978405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.601 [2024-12-11 10:06:36.978437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.601 qpair failed and we were unable to recover it. 00:28:27.601 [2024-12-11 10:06:36.978625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.601 [2024-12-11 10:06:36.978696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.601 qpair failed and we were unable to recover it. 00:28:27.601 [2024-12-11 10:06:36.978832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.601 [2024-12-11 10:06:36.978867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.601 qpair failed and we were unable to recover it. 00:28:27.601 [2024-12-11 10:06:36.979040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.601 [2024-12-11 10:06:36.979074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.601 qpair failed and we were unable to recover it. 00:28:27.601 [2024-12-11 10:06:36.979315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.601 [2024-12-11 10:06:36.979350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.601 qpair failed and we were unable to recover it. 00:28:27.601 [2024-12-11 10:06:36.979613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.601 [2024-12-11 10:06:36.979646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.601 qpair failed and we were unable to recover it. 
00:28:27.601 [2024-12-11 10:06:36.979766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.601 [2024-12-11 10:06:36.979800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.601 qpair failed and we were unable to recover it. 00:28:27.601 [2024-12-11 10:06:36.979937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.601 [2024-12-11 10:06:36.979970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.601 qpair failed and we were unable to recover it. 00:28:27.601 [2024-12-11 10:06:36.980092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.601 [2024-12-11 10:06:36.980125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.601 qpair failed and we were unable to recover it. 00:28:27.601 [2024-12-11 10:06:36.980243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.601 [2024-12-11 10:06:36.980278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.601 qpair failed and we were unable to recover it. 00:28:27.601 [2024-12-11 10:06:36.980389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.601 [2024-12-11 10:06:36.980422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.601 qpair failed and we were unable to recover it. 00:28:27.601 [2024-12-11 10:06:36.980669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.601 [2024-12-11 10:06:36.980701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.601 qpair failed and we were unable to recover it. 00:28:27.601 [2024-12-11 10:06:36.980893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.601 [2024-12-11 10:06:36.980926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.601 qpair failed and we were unable to recover it. 00:28:27.601 [2024-12-11 10:06:36.981163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.601 [2024-12-11 10:06:36.981196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.601 qpair failed and we were unable to recover it. 00:28:27.601 [2024-12-11 10:06:36.981319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.601 [2024-12-11 10:06:36.981350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.601 qpair failed and we were unable to recover it. 00:28:27.601 [2024-12-11 10:06:36.981477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.601 [2024-12-11 10:06:36.981510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.601 qpair failed and we were unable to recover it. 
00:28:27.601 [2024-12-11 10:06:36.981722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.601 [2024-12-11 10:06:36.981755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.601 qpair failed and we were unable to recover it. 00:28:27.601 [2024-12-11 10:06:36.981923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.601 [2024-12-11 10:06:36.981956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.601 qpair failed and we were unable to recover it. 00:28:27.601 [2024-12-11 10:06:36.982070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.601 [2024-12-11 10:06:36.982103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.601 qpair failed and we were unable to recover it. 00:28:27.601 [2024-12-11 10:06:36.982281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.601 [2024-12-11 10:06:36.982315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.601 qpair failed and we were unable to recover it. 00:28:27.601 [2024-12-11 10:06:36.982567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.601 [2024-12-11 10:06:36.982600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.601 qpair failed and we were unable to recover it. 00:28:27.601 [2024-12-11 10:06:36.982861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.601 [2024-12-11 10:06:36.982894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.601 qpair failed and we were unable to recover it. 00:28:27.601 [2024-12-11 10:06:36.983031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.601 [2024-12-11 10:06:36.983064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.601 qpair failed and we were unable to recover it. 00:28:27.601 [2024-12-11 10:06:36.983335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.601 [2024-12-11 10:06:36.983368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.601 qpair failed and we were unable to recover it. 00:28:27.601 [2024-12-11 10:06:36.983567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.601 [2024-12-11 10:06:36.983600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.601 qpair failed and we were unable to recover it. 00:28:27.601 [2024-12-11 10:06:36.983841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.601 [2024-12-11 10:06:36.983875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.601 qpair failed and we were unable to recover it. 
00:28:27.601 [2024-12-11 10:06:36.984000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.601 [2024-12-11 10:06:36.984034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.601 qpair failed and we were unable to recover it. 00:28:27.601 [2024-12-11 10:06:36.984205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.601 [2024-12-11 10:06:36.984248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.601 qpair failed and we were unable to recover it. 00:28:27.601 [2024-12-11 10:06:36.984374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.601 [2024-12-11 10:06:36.984417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.601 qpair failed and we were unable to recover it. 00:28:27.601 [2024-12-11 10:06:36.984603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.601 [2024-12-11 10:06:36.984635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.601 qpair failed and we were unable to recover it. 00:28:27.601 [2024-12-11 10:06:36.984813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.601 [2024-12-11 10:06:36.984847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.601 qpair failed and we were unable to recover it. 00:28:27.601 [2024-12-11 10:06:36.984983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.601 [2024-12-11 10:06:36.985016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.601 qpair failed and we were unable to recover it. 00:28:27.601 [2024-12-11 10:06:36.985204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.601 [2024-12-11 10:06:36.985246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.601 qpair failed and we were unable to recover it. 00:28:27.601 [2024-12-11 10:06:36.985506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.601 [2024-12-11 10:06:36.985539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.601 qpair failed and we were unable to recover it. 00:28:27.601 [2024-12-11 10:06:36.985657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.601 [2024-12-11 10:06:36.985690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.601 qpair failed and we were unable to recover it. 00:28:27.601 [2024-12-11 10:06:36.985910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.601 [2024-12-11 10:06:36.985942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.601 qpair failed and we were unable to recover it. 
00:28:27.601 [2024-12-11 10:06:36.986132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.602 [2024-12-11 10:06:36.986164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.602 qpair failed and we were unable to recover it. 00:28:27.602 [2024-12-11 10:06:36.986295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.602 [2024-12-11 10:06:36.986329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.602 qpair failed and we were unable to recover it. 00:28:27.602 [2024-12-11 10:06:36.986523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.602 [2024-12-11 10:06:36.986556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.602 qpair failed and we were unable to recover it. 00:28:27.602 [2024-12-11 10:06:36.986742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.602 [2024-12-11 10:06:36.986775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.602 qpair failed and we were unable to recover it. 00:28:27.602 [2024-12-11 10:06:36.986974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.602 [2024-12-11 10:06:36.987007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.602 qpair failed and we were unable to recover it. 00:28:27.602 [2024-12-11 10:06:36.987181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.602 [2024-12-11 10:06:36.987215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.602 qpair failed and we were unable to recover it. 00:28:27.602 [2024-12-11 10:06:36.987349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.602 [2024-12-11 10:06:36.987382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.602 qpair failed and we were unable to recover it. 00:28:27.602 [2024-12-11 10:06:36.987568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.602 [2024-12-11 10:06:36.987607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.602 qpair failed and we were unable to recover it. 00:28:27.602 [2024-12-11 10:06:36.987789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.602 [2024-12-11 10:06:36.987821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.602 qpair failed and we were unable to recover it. 00:28:27.602 [2024-12-11 10:06:36.988019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.602 [2024-12-11 10:06:36.988057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.602 qpair failed and we were unable to recover it. 
00:28:27.602 [2024-12-11 10:06:36.988241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.602 [2024-12-11 10:06:36.988276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.602 qpair failed and we were unable to recover it. 00:28:27.602 [2024-12-11 10:06:36.988410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.602 [2024-12-11 10:06:36.988443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.602 qpair failed and we were unable to recover it. 00:28:27.602 [2024-12-11 10:06:36.988648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.602 [2024-12-11 10:06:36.988683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.602 qpair failed and we were unable to recover it. 00:28:27.602 [2024-12-11 10:06:36.988875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.602 [2024-12-11 10:06:36.988907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.602 qpair failed and we were unable to recover it. 00:28:27.602 [2024-12-11 10:06:36.989107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.602 [2024-12-11 10:06:36.989141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.602 qpair failed and we were unable to recover it. 00:28:27.602 [2024-12-11 10:06:36.989323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.602 [2024-12-11 10:06:36.989357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.602 qpair failed and we were unable to recover it. 00:28:27.602 [2024-12-11 10:06:36.989474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.602 [2024-12-11 10:06:36.989506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.602 qpair failed and we were unable to recover it. 00:28:27.602 [2024-12-11 10:06:36.989700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.602 [2024-12-11 10:06:36.989732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.602 qpair failed and we were unable to recover it. 00:28:27.602 [2024-12-11 10:06:36.989929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.602 [2024-12-11 10:06:36.989960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.602 qpair failed and we were unable to recover it. 00:28:27.602 [2024-12-11 10:06:36.990133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.602 [2024-12-11 10:06:36.990172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.602 qpair failed and we were unable to recover it. 
00:28:27.602 [2024-12-11 10:06:36.990305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.602 [2024-12-11 10:06:36.990338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.602 qpair failed and we were unable to recover it. 00:28:27.602 [2024-12-11 10:06:36.990467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.602 [2024-12-11 10:06:36.990500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.602 qpair failed and we were unable to recover it. 00:28:27.602 [2024-12-11 10:06:36.990758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.602 [2024-12-11 10:06:36.990790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.602 qpair failed and we were unable to recover it. 00:28:27.602 [2024-12-11 10:06:36.991050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.602 [2024-12-11 10:06:36.991083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.602 qpair failed and we were unable to recover it. 00:28:27.602 [2024-12-11 10:06:36.991255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.602 [2024-12-11 10:06:36.991288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.602 qpair failed and we were unable to recover it. 00:28:27.602 [2024-12-11 10:06:36.991470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.602 [2024-12-11 10:06:36.991503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.602 qpair failed and we were unable to recover it. 00:28:27.602 [2024-12-11 10:06:36.991680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.602 [2024-12-11 10:06:36.991714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.602 qpair failed and we were unable to recover it. 00:28:27.602 [2024-12-11 10:06:36.991910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.602 [2024-12-11 10:06:36.991944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.602 qpair failed and we were unable to recover it. 00:28:27.602 [2024-12-11 10:06:36.992119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.602 [2024-12-11 10:06:36.992152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.602 qpair failed and we were unable to recover it. 00:28:27.602 [2024-12-11 10:06:36.992270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.602 [2024-12-11 10:06:36.992304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.602 qpair failed and we were unable to recover it. 
00:28:27.602 [2024-12-11 10:06:36.992425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.602 [2024-12-11 10:06:36.992459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.602 qpair failed and we were unable to recover it. 00:28:27.602 [2024-12-11 10:06:36.992646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.602 [2024-12-11 10:06:36.992682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.602 qpair failed and we were unable to recover it. 00:28:27.602 [2024-12-11 10:06:36.992811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.602 [2024-12-11 10:06:36.992844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.602 qpair failed and we were unable to recover it. 00:28:27.602 [2024-12-11 10:06:36.993060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.602 [2024-12-11 10:06:36.993094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.602 qpair failed and we were unable to recover it. 00:28:27.602 [2024-12-11 10:06:36.993306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.602 [2024-12-11 10:06:36.993341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.602 qpair failed and we were unable to recover it. 00:28:27.602 [2024-12-11 10:06:36.993479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.603 [2024-12-11 10:06:36.993511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.603 qpair failed and we were unable to recover it. 00:28:27.603 [2024-12-11 10:06:36.993699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.603 [2024-12-11 10:06:36.993731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.603 qpair failed and we were unable to recover it. 00:28:27.603 [2024-12-11 10:06:36.993968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.603 [2024-12-11 10:06:36.994002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.603 qpair failed and we were unable to recover it. 00:28:27.603 [2024-12-11 10:06:36.994188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.603 [2024-12-11 10:06:36.994228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.603 qpair failed and we were unable to recover it. 00:28:27.603 [2024-12-11 10:06:36.994412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.603 [2024-12-11 10:06:36.994444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.603 qpair failed and we were unable to recover it. 
00:28:27.603 [2024-12-11 10:06:36.994623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.603 [2024-12-11 10:06:36.994655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.603 qpair failed and we were unable to recover it. 00:28:27.603 [2024-12-11 10:06:36.994831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.603 [2024-12-11 10:06:36.994862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.603 qpair failed and we were unable to recover it. 00:28:27.603 [2024-12-11 10:06:36.995126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.603 [2024-12-11 10:06:36.995159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.603 qpair failed and we were unable to recover it. 00:28:27.603 [2024-12-11 10:06:36.995359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.603 [2024-12-11 10:06:36.995392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.603 qpair failed and we were unable to recover it. 00:28:27.603 [2024-12-11 10:06:36.995572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.603 [2024-12-11 10:06:36.995604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.603 qpair failed and we were unable to recover it. 00:28:27.603 [2024-12-11 10:06:36.995786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.603 [2024-12-11 10:06:36.995818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.603 qpair failed and we were unable to recover it. 00:28:27.603 [2024-12-11 10:06:36.996028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.603 [2024-12-11 10:06:36.996062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.603 qpair failed and we were unable to recover it. 00:28:27.603 [2024-12-11 10:06:36.996291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.603 [2024-12-11 10:06:36.996327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.603 qpair failed and we were unable to recover it. 00:28:27.603 [2024-12-11 10:06:36.996451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.603 [2024-12-11 10:06:36.996485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.603 qpair failed and we were unable to recover it. 00:28:27.603 [2024-12-11 10:06:36.996741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.603 [2024-12-11 10:06:36.996777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.603 qpair failed and we were unable to recover it. 
00:28:27.603 [2024-12-11 10:06:36.996976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.603 [2024-12-11 10:06:36.997017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.603 qpair failed and we were unable to recover it. 00:28:27.603 [2024-12-11 10:06:36.997255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.603 [2024-12-11 10:06:36.997291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.603 qpair failed and we were unable to recover it. 00:28:27.603 [2024-12-11 10:06:36.997464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.603 [2024-12-11 10:06:36.997499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.603 qpair failed and we were unable to recover it. 00:28:27.603 [2024-12-11 10:06:36.997776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.603 [2024-12-11 10:06:36.997815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.603 qpair failed and we were unable to recover it. 00:28:27.603 [2024-12-11 10:06:36.997953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.603 [2024-12-11 10:06:36.997987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.603 qpair failed and we were unable to recover it. 00:28:27.603 [2024-12-11 10:06:36.998177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.603 [2024-12-11 10:06:36.998212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.603 qpair failed and we were unable to recover it. 00:28:27.603 [2024-12-11 10:06:36.998359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.603 [2024-12-11 10:06:36.998393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.603 qpair failed and we were unable to recover it. 00:28:27.603 [2024-12-11 10:06:36.998584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.603 [2024-12-11 10:06:36.998618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.603 qpair failed and we were unable to recover it. 00:28:27.603 [2024-12-11 10:06:36.998862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.603 [2024-12-11 10:06:36.998897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.603 qpair failed and we were unable to recover it. 00:28:27.603 [2024-12-11 10:06:36.999079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.603 [2024-12-11 10:06:36.999115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.603 qpair failed and we were unable to recover it. 
00:28:27.603 [2024-12-11 10:06:36.999232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.603 [2024-12-11 10:06:36.999274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.603 qpair failed and we were unable to recover it. 00:28:27.603 [2024-12-11 10:06:36.999461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.603 [2024-12-11 10:06:36.999498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.603 qpair failed and we were unable to recover it. 00:28:27.603 [2024-12-11 10:06:36.999674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.603 [2024-12-11 10:06:36.999711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.603 qpair failed and we were unable to recover it. 00:28:27.603 [2024-12-11 10:06:36.999904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.603 [2024-12-11 10:06:36.999940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.603 qpair failed and we were unable to recover it. 00:28:27.603 [2024-12-11 10:06:37.000122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.603 [2024-12-11 10:06:37.000159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.603 qpair failed and we were unable to recover it. 00:28:27.603 [2024-12-11 10:06:37.000446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.603 [2024-12-11 10:06:37.000483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.603 qpair failed and we were unable to recover it. 00:28:27.603 [2024-12-11 10:06:37.000618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.603 [2024-12-11 10:06:37.000654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.603 qpair failed and we were unable to recover it. 00:28:27.603 [2024-12-11 10:06:37.000830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.603 [2024-12-11 10:06:37.000863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.603 qpair failed and we were unable to recover it. 00:28:27.603 [2024-12-11 10:06:37.001040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.603 [2024-12-11 10:06:37.001074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.603 qpair failed and we were unable to recover it. 00:28:27.603 [2024-12-11 10:06:37.001286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.603 [2024-12-11 10:06:37.001323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.603 qpair failed and we were unable to recover it. 
00:28:27.603 [2024-12-11 10:06:37.001466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.603 [2024-12-11 10:06:37.001505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.603 qpair failed and we were unable to recover it. 00:28:27.603 [2024-12-11 10:06:37.001704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.603 [2024-12-11 10:06:37.001739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.603 qpair failed and we were unable to recover it. 00:28:27.603 [2024-12-11 10:06:37.001982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.603 [2024-12-11 10:06:37.002022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.603 qpair failed and we were unable to recover it. 00:28:27.603 [2024-12-11 10:06:37.002152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.603 [2024-12-11 10:06:37.002186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.603 qpair failed and we were unable to recover it. 00:28:27.603 [2024-12-11 10:06:37.002468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.603 [2024-12-11 10:06:37.002512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.603 qpair failed and we were unable to recover it. 00:28:27.604 [2024-12-11 10:06:37.002722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.604 [2024-12-11 10:06:37.002756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.604 qpair failed and we were unable to recover it. 00:28:27.604 [2024-12-11 10:06:37.002940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.604 [2024-12-11 10:06:37.002977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.604 qpair failed and we were unable to recover it. 00:28:27.604 [2024-12-11 10:06:37.003165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.604 [2024-12-11 10:06:37.003204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.604 qpair failed and we were unable to recover it. 00:28:27.604 [2024-12-11 10:06:37.003423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.604 [2024-12-11 10:06:37.003458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.604 qpair failed and we were unable to recover it. 00:28:27.604 [2024-12-11 10:06:37.003707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.604 [2024-12-11 10:06:37.003751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.604 qpair failed and we were unable to recover it. 
00:28:27.604 [2024-12-11 10:06:37.003892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.604 [2024-12-11 10:06:37.003927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.604 qpair failed and we were unable to recover it.
00:28:27.604 [2024-12-11 10:06:37.005529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.604 [2024-12-11 10:06:37.005615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.604 qpair failed and we were unable to recover it.
00:28:27.604 [2024-12-11 10:06:37.010750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.604 [2024-12-11 10:06:37.010820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.604 qpair failed and we were unable to recover it.
00:28:27.608 [2024-12-11 10:06:37.038436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.608 [2024-12-11 10:06:37.038506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.608 qpair failed and we were unable to recover it.
00:28:27.609 [the identical connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it." triplet recurs continuously from 10:06:37.003892 through 10:06:37.049242, interleaving the four tqpair values above (0xfe4500, 0x7fb630000b90, 0x7fb638000b90, 0x7fb62c000b90); every attempt targets addr=10.0.0.2, port=4420 and none recovers]
00:28:27.609 [2024-12-11 10:06:37.049372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.609 [2024-12-11 10:06:37.049405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.609 qpair failed and we were unable to recover it. 00:28:27.609 [2024-12-11 10:06:37.049647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.609 [2024-12-11 10:06:37.049680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.609 qpair failed and we were unable to recover it. 00:28:27.609 [2024-12-11 10:06:37.049796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.609 [2024-12-11 10:06:37.049829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.609 qpair failed and we were unable to recover it. 00:28:27.610 [2024-12-11 10:06:37.050032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.610 [2024-12-11 10:06:37.050065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.610 qpair failed and we were unable to recover it. 00:28:27.610 [2024-12-11 10:06:37.050174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.610 [2024-12-11 10:06:37.050206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.610 qpair failed and we were unable to recover it. 00:28:27.610 [2024-12-11 10:06:37.050353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.610 [2024-12-11 10:06:37.050386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.610 qpair failed and we were unable to recover it. 00:28:27.610 [2024-12-11 10:06:37.050590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.610 [2024-12-11 10:06:37.050622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.610 qpair failed and we were unable to recover it. 00:28:27.610 [2024-12-11 10:06:37.050743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.610 [2024-12-11 10:06:37.050776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.610 qpair failed and we were unable to recover it. 00:28:27.610 [2024-12-11 10:06:37.050910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.610 [2024-12-11 10:06:37.050943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.610 qpair failed and we were unable to recover it. 00:28:27.610 [2024-12-11 10:06:37.051134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.610 [2024-12-11 10:06:37.051167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.610 qpair failed and we were unable to recover it. 
00:28:27.610 [2024-12-11 10:06:37.051315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.610 [2024-12-11 10:06:37.051349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.610 qpair failed and we were unable to recover it. 00:28:27.610 [2024-12-11 10:06:37.051534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.610 [2024-12-11 10:06:37.051567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.610 qpair failed and we were unable to recover it. 00:28:27.610 [2024-12-11 10:06:37.051674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.610 [2024-12-11 10:06:37.051705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.610 qpair failed and we were unable to recover it. 00:28:27.610 [2024-12-11 10:06:37.051824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.610 [2024-12-11 10:06:37.051857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.610 qpair failed and we were unable to recover it. 00:28:27.610 [2024-12-11 10:06:37.052039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.610 [2024-12-11 10:06:37.052072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.610 qpair failed and we were unable to recover it. 00:28:27.610 [2024-12-11 10:06:37.052242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.610 [2024-12-11 10:06:37.052276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.610 qpair failed and we were unable to recover it. 00:28:27.610 [2024-12-11 10:06:37.052393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.610 [2024-12-11 10:06:37.052425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.610 qpair failed and we were unable to recover it. 00:28:27.610 [2024-12-11 10:06:37.052529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.610 [2024-12-11 10:06:37.052562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.610 qpair failed and we were unable to recover it. 00:28:27.610 [2024-12-11 10:06:37.052747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.610 [2024-12-11 10:06:37.052780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.610 qpair failed and we were unable to recover it. 00:28:27.610 [2024-12-11 10:06:37.052983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.610 [2024-12-11 10:06:37.053016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.610 qpair failed and we were unable to recover it. 
00:28:27.610 [2024-12-11 10:06:37.053287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.610 [2024-12-11 10:06:37.053320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.610 qpair failed and we were unable to recover it. 00:28:27.610 [2024-12-11 10:06:37.053568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.610 [2024-12-11 10:06:37.053600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.610 qpair failed and we were unable to recover it. 00:28:27.610 [2024-12-11 10:06:37.053818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.610 [2024-12-11 10:06:37.053850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.610 qpair failed and we were unable to recover it. 00:28:27.610 [2024-12-11 10:06:37.053989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.610 [2024-12-11 10:06:37.054022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.610 qpair failed and we were unable to recover it. 00:28:27.610 [2024-12-11 10:06:37.054230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.610 [2024-12-11 10:06:37.054272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.610 qpair failed and we were unable to recover it. 00:28:27.610 [2024-12-11 10:06:37.054463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.610 [2024-12-11 10:06:37.054496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.610 qpair failed and we were unable to recover it. 00:28:27.610 [2024-12-11 10:06:37.054676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.610 [2024-12-11 10:06:37.054708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.610 qpair failed and we were unable to recover it. 00:28:27.610 [2024-12-11 10:06:37.054947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.610 [2024-12-11 10:06:37.054980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.610 qpair failed and we were unable to recover it. 00:28:27.610 [2024-12-11 10:06:37.055172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.610 [2024-12-11 10:06:37.055204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.610 qpair failed and we were unable to recover it. 00:28:27.610 [2024-12-11 10:06:37.055412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.610 [2024-12-11 10:06:37.055447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.610 qpair failed and we were unable to recover it. 
00:28:27.610 [2024-12-11 10:06:37.055619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.610 [2024-12-11 10:06:37.055653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.610 qpair failed and we were unable to recover it. 00:28:27.610 [2024-12-11 10:06:37.055783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.610 [2024-12-11 10:06:37.055816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.610 qpair failed and we were unable to recover it. 00:28:27.610 [2024-12-11 10:06:37.055928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.610 [2024-12-11 10:06:37.055963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.610 qpair failed and we were unable to recover it. 00:28:27.610 [2024-12-11 10:06:37.056171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.610 [2024-12-11 10:06:37.056204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.610 qpair failed and we were unable to recover it. 00:28:27.610 [2024-12-11 10:06:37.056385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.610 [2024-12-11 10:06:37.056418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.610 qpair failed and we were unable to recover it. 00:28:27.610 [2024-12-11 10:06:37.058321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.610 [2024-12-11 10:06:37.058380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.610 qpair failed and we were unable to recover it. 00:28:27.610 [2024-12-11 10:06:37.058624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.610 [2024-12-11 10:06:37.058658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.610 qpair failed and we were unable to recover it. 00:28:27.610 [2024-12-11 10:06:37.058928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.610 [2024-12-11 10:06:37.058962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.610 qpair failed and we were unable to recover it. 00:28:27.610 [2024-12-11 10:06:37.059213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.610 [2024-12-11 10:06:37.059263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.610 qpair failed and we were unable to recover it. 00:28:27.611 [2024-12-11 10:06:37.059385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.611 [2024-12-11 10:06:37.059417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.611 qpair failed and we were unable to recover it. 
00:28:27.611 [2024-12-11 10:06:37.059598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.611 [2024-12-11 10:06:37.059631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.611 qpair failed and we were unable to recover it. 00:28:27.611 [2024-12-11 10:06:37.059769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.611 [2024-12-11 10:06:37.059802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.611 qpair failed and we were unable to recover it. 00:28:27.611 [2024-12-11 10:06:37.060029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.611 [2024-12-11 10:06:37.060063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.611 qpair failed and we were unable to recover it. 00:28:27.611 [2024-12-11 10:06:37.060251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.611 [2024-12-11 10:06:37.060285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.611 qpair failed and we were unable to recover it. 00:28:27.611 [2024-12-11 10:06:37.060399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.611 [2024-12-11 10:06:37.060433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.611 qpair failed and we were unable to recover it. 00:28:27.611 [2024-12-11 10:06:37.060612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.611 [2024-12-11 10:06:37.060645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.611 qpair failed and we were unable to recover it. 00:28:27.611 [2024-12-11 10:06:37.060836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.611 [2024-12-11 10:06:37.060870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.611 qpair failed and we were unable to recover it. 00:28:27.611 [2024-12-11 10:06:37.061063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.611 [2024-12-11 10:06:37.061097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.611 qpair failed and we were unable to recover it. 00:28:27.611 [2024-12-11 10:06:37.061229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.611 [2024-12-11 10:06:37.061264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.611 qpair failed and we were unable to recover it. 00:28:27.611 [2024-12-11 10:06:37.061478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.611 [2024-12-11 10:06:37.061511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.611 qpair failed and we were unable to recover it. 
00:28:27.611 [2024-12-11 10:06:37.061618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.611 [2024-12-11 10:06:37.061652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.611 qpair failed and we were unable to recover it. 00:28:27.611 [2024-12-11 10:06:37.061850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.611 [2024-12-11 10:06:37.061884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.611 qpair failed and we were unable to recover it. 00:28:27.611 [2024-12-11 10:06:37.062071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.611 [2024-12-11 10:06:37.062105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.611 qpair failed and we were unable to recover it. 00:28:27.611 [2024-12-11 10:06:37.062292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.611 [2024-12-11 10:06:37.062326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.611 qpair failed and we were unable to recover it. 00:28:27.611 [2024-12-11 10:06:37.062445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.611 [2024-12-11 10:06:37.062478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.611 qpair failed and we were unable to recover it. 00:28:27.611 [2024-12-11 10:06:37.062593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.611 [2024-12-11 10:06:37.062627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.611 qpair failed and we were unable to recover it. 00:28:27.611 [2024-12-11 10:06:37.062741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.611 [2024-12-11 10:06:37.062774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.611 qpair failed and we were unable to recover it. 00:28:27.611 [2024-12-11 10:06:37.062947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.611 [2024-12-11 10:06:37.062981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.611 qpair failed and we were unable to recover it. 00:28:27.611 [2024-12-11 10:06:37.063116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.611 [2024-12-11 10:06:37.063149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.611 qpair failed and we were unable to recover it. 00:28:27.611 [2024-12-11 10:06:37.063272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.611 [2024-12-11 10:06:37.063307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.611 qpair failed and we were unable to recover it. 
00:28:27.611 [2024-12-11 10:06:37.063424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.611 [2024-12-11 10:06:37.063457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.611 qpair failed and we were unable to recover it. 00:28:27.611 [2024-12-11 10:06:37.063643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.611 [2024-12-11 10:06:37.063676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.611 qpair failed and we were unable to recover it. 00:28:27.611 [2024-12-11 10:06:37.063803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.611 [2024-12-11 10:06:37.063836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.611 qpair failed and we were unable to recover it. 00:28:27.611 [2024-12-11 10:06:37.064020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.611 [2024-12-11 10:06:37.064054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.611 qpair failed and we were unable to recover it. 00:28:27.611 [2024-12-11 10:06:37.064168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.611 [2024-12-11 10:06:37.064209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.611 qpair failed and we were unable to recover it. 00:28:27.611 [2024-12-11 10:06:37.064480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.611 [2024-12-11 10:06:37.064513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.611 qpair failed and we were unable to recover it. 00:28:27.611 [2024-12-11 10:06:37.064653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.611 [2024-12-11 10:06:37.064686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.611 qpair failed and we were unable to recover it. 00:28:27.611 [2024-12-11 10:06:37.064813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.611 [2024-12-11 10:06:37.064847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.611 qpair failed and we were unable to recover it. 00:28:27.611 [2024-12-11 10:06:37.065054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.611 [2024-12-11 10:06:37.065087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.611 qpair failed and we were unable to recover it. 00:28:27.611 [2024-12-11 10:06:37.065286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.611 [2024-12-11 10:06:37.065321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.611 qpair failed and we were unable to recover it. 
00:28:27.611 [2024-12-11 10:06:37.065452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.611 [2024-12-11 10:06:37.065485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.611 qpair failed and we were unable to recover it. 00:28:27.611 [2024-12-11 10:06:37.065606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.611 [2024-12-11 10:06:37.065639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.611 qpair failed and we were unable to recover it. 00:28:27.611 [2024-12-11 10:06:37.065834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.611 [2024-12-11 10:06:37.065868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.611 qpair failed and we were unable to recover it. 00:28:27.611 [2024-12-11 10:06:37.066075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.611 [2024-12-11 10:06:37.066108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.611 qpair failed and we were unable to recover it. 00:28:27.612 [2024-12-11 10:06:37.066211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.612 [2024-12-11 10:06:37.066259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.612 qpair failed and we were unable to recover it. 00:28:27.612 [2024-12-11 10:06:37.066390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.612 [2024-12-11 10:06:37.066422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.612 qpair failed and we were unable to recover it. 00:28:27.612 [2024-12-11 10:06:37.066557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.612 [2024-12-11 10:06:37.066590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.612 qpair failed and we were unable to recover it. 00:28:27.612 [2024-12-11 10:06:37.066698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.612 [2024-12-11 10:06:37.066731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.612 qpair failed and we were unable to recover it. 00:28:27.612 [2024-12-11 10:06:37.066861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.612 [2024-12-11 10:06:37.066894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.612 qpair failed and we were unable to recover it. 00:28:27.612 [2024-12-11 10:06:37.067103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.612 [2024-12-11 10:06:37.067136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.612 qpair failed and we were unable to recover it. 
00:28:27.612 [2024-12-11 10:06:37.067265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.612 [2024-12-11 10:06:37.067299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.612 qpair failed and we were unable to recover it. 00:28:27.612 [2024-12-11 10:06:37.067486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.612 [2024-12-11 10:06:37.067520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.612 qpair failed and we were unable to recover it. 00:28:27.612 [2024-12-11 10:06:37.067713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.612 [2024-12-11 10:06:37.067747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.612 qpair failed and we were unable to recover it. 00:28:27.612 [2024-12-11 10:06:37.067930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.612 [2024-12-11 10:06:37.067964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.612 qpair failed and we were unable to recover it. 00:28:27.612 [2024-12-11 10:06:37.068236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.612 [2024-12-11 10:06:37.068271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.612 qpair failed and we were unable to recover it. 00:28:27.612 [2024-12-11 10:06:37.068457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.612 [2024-12-11 10:06:37.068490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.612 qpair failed and we were unable to recover it. 00:28:27.612 [2024-12-11 10:06:37.068596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.612 [2024-12-11 10:06:37.068631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.612 qpair failed and we were unable to recover it. 00:28:27.612 [2024-12-11 10:06:37.068813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.612 [2024-12-11 10:06:37.068848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.612 qpair failed and we were unable to recover it. 00:28:27.612 [2024-12-11 10:06:37.069075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.612 [2024-12-11 10:06:37.069108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.612 qpair failed and we were unable to recover it. 00:28:27.612 [2024-12-11 10:06:37.069288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.612 [2024-12-11 10:06:37.069323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.612 qpair failed and we were unable to recover it. 
00:28:27.612 [2024-12-11 10:06:37.069446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.612 [2024-12-11 10:06:37.069480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.612 qpair failed and we were unable to recover it. 00:28:27.612 [2024-12-11 10:06:37.069587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.612 [2024-12-11 10:06:37.069620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.612 qpair failed and we were unable to recover it. 00:28:27.612 [2024-12-11 10:06:37.069747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.612 [2024-12-11 10:06:37.069780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.612 qpair failed and we were unable to recover it. 00:28:27.612 [2024-12-11 10:06:37.069987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.612 [2024-12-11 10:06:37.070020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.612 qpair failed and we were unable to recover it. 00:28:27.612 [2024-12-11 10:06:37.070259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.612 [2024-12-11 10:06:37.070292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.612 qpair failed and we were unable to recover it. 00:28:27.612 [2024-12-11 10:06:37.070416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.612 [2024-12-11 10:06:37.070450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.612 qpair failed and we were unable to recover it. 00:28:27.612 [2024-12-11 10:06:37.070564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.612 [2024-12-11 10:06:37.070597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.612 qpair failed and we were unable to recover it. 00:28:27.612 [2024-12-11 10:06:37.070720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.612 [2024-12-11 10:06:37.070754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.612 qpair failed and we were unable to recover it. 00:28:27.612 [2024-12-11 10:06:37.072727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.612 [2024-12-11 10:06:37.072785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.612 qpair failed and we were unable to recover it. 00:28:27.612 [2024-12-11 10:06:37.073019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.612 [2024-12-11 10:06:37.073055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.612 qpair failed and we were unable to recover it. 
00:28:27.612 [2024-12-11 10:06:37.073324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.612 [2024-12-11 10:06:37.073359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.612 qpair failed and we were unable to recover it. 00:28:27.612 [2024-12-11 10:06:37.073532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.612 [2024-12-11 10:06:37.073564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.612 qpair failed and we were unable to recover it. 00:28:27.612 [2024-12-11 10:06:37.073756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.612 [2024-12-11 10:06:37.073788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.612 qpair failed and we were unable to recover it. 00:28:27.612 [2024-12-11 10:06:37.073919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.612 [2024-12-11 10:06:37.073951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.612 qpair failed and we were unable to recover it. 00:28:27.612 [2024-12-11 10:06:37.074092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.612 [2024-12-11 10:06:37.074132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.612 qpair failed and we were unable to recover it. 00:28:27.612 [2024-12-11 10:06:37.074320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.612 [2024-12-11 10:06:37.074353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.612 qpair failed and we were unable to recover it. 00:28:27.612 [2024-12-11 10:06:37.074469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.612 [2024-12-11 10:06:37.074501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.612 qpair failed and we were unable to recover it. 00:28:27.612 [2024-12-11 10:06:37.074683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.612 [2024-12-11 10:06:37.074715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.612 qpair failed and we were unable to recover it. 00:28:27.612 [2024-12-11 10:06:37.074981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.612 [2024-12-11 10:06:37.075016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.612 qpair failed and we were unable to recover it. 00:28:27.612 [2024-12-11 10:06:37.075190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.613 [2024-12-11 10:06:37.075234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.613 qpair failed and we were unable to recover it. 
00:28:27.613 [2024-12-11 10:06:37.075423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.613 [2024-12-11 10:06:37.075452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.613 qpair failed and we were unable to recover it. 00:28:27.613 [2024-12-11 10:06:37.075570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.613 [2024-12-11 10:06:37.075599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.613 qpair failed and we were unable to recover it. 00:28:27.613 [2024-12-11 10:06:37.075777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.613 [2024-12-11 10:06:37.075810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.613 qpair failed and we were unable to recover it. 00:28:27.613 [2024-12-11 10:06:37.075982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.613 [2024-12-11 10:06:37.076015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.613 qpair failed and we were unable to recover it. 00:28:27.613 [2024-12-11 10:06:37.076204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.613 [2024-12-11 10:06:37.076245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.613 qpair failed and we were unable to recover it. 00:28:27.613 [2024-12-11 10:06:37.076440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.613 [2024-12-11 10:06:37.076470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.613 qpair failed and we were unable to recover it. 00:28:27.613 [2024-12-11 10:06:37.076590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.613 [2024-12-11 10:06:37.076619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.613 qpair failed and we were unable to recover it. 00:28:27.613 [2024-12-11 10:06:37.076804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.613 [2024-12-11 10:06:37.076834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.613 qpair failed and we were unable to recover it. 00:28:27.613 [2024-12-11 10:06:37.077030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.613 [2024-12-11 10:06:37.077060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.613 qpair failed and we were unable to recover it. 00:28:27.613 [2024-12-11 10:06:37.077172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.613 [2024-12-11 10:06:37.077202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.613 qpair failed and we were unable to recover it. 
00:28:27.613 [2024-12-11 10:06:37.077447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.613 [2024-12-11 10:06:37.077479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.613 qpair failed and we were unable to recover it. 00:28:27.613 [2024-12-11 10:06:37.077585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.613 [2024-12-11 10:06:37.077618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.613 qpair failed and we were unable to recover it. 00:28:27.613 [2024-12-11 10:06:37.077736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.613 [2024-12-11 10:06:37.077769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.613 qpair failed and we were unable to recover it. 00:28:27.613 [2024-12-11 10:06:37.077903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.613 [2024-12-11 10:06:37.077936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.613 qpair failed and we were unable to recover it. 00:28:27.613 [2024-12-11 10:06:37.078203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.613 [2024-12-11 10:06:37.078245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.613 qpair failed and we were unable to recover it. 00:28:27.613 [2024-12-11 10:06:37.078443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.613 [2024-12-11 10:06:37.078477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.613 qpair failed and we were unable to recover it. 00:28:27.613 [2024-12-11 10:06:37.078714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.613 [2024-12-11 10:06:37.078746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.613 qpair failed and we were unable to recover it. 00:28:27.613 [2024-12-11 10:06:37.078919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.613 [2024-12-11 10:06:37.078951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.613 qpair failed and we were unable to recover it. 00:28:27.613 [2024-12-11 10:06:37.079149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.613 [2024-12-11 10:06:37.079182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.613 qpair failed and we were unable to recover it. 00:28:27.613 [2024-12-11 10:06:37.079464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.613 [2024-12-11 10:06:37.079495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.613 qpair failed and we were unable to recover it. 
00:28:27.613 [2024-12-11 10:06:37.079686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.613 [2024-12-11 10:06:37.079716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.613 qpair failed and we were unable to recover it.
00:28:27.614 [the same connect() failed / sock connection error pair for tqpair=0x7fb62c000b90 repeated on every retry through 2024-12-11 10:06:37.088851, each ending with "qpair failed and we were unable to recover it."]
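errno = 111 is ECONNREFUSED on Linux: the host 10.0.0.2 answered, but nothing was accepting on port 4420 (the IANA-assigned NVMe/TCP port), so SPDK's posix sock layer surfaces the raw connect() failure and nvme_tcp abandons the qpair. A minimal standalone sketch of the same failure mode (illustrative only, not SPDK code; the address and port merely mirror the log):

/* reproduce "connect() failed, errno = 111" against a port with no listener */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP well-known port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* with a reachable host and no listener this prints errno = 111 */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}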
00:28:27.614 [2024-12-11 10:06:37.088972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.614 [2024-12-11 10:06:37.089005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:27.614 qpair failed and we were unable to recover it.
00:28:27.614 [2024-12-11 10:06:37.089242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.614 [2024-12-11 10:06:37.089317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.614 qpair failed and we were unable to recover it.
00:28:27.616 [the same pair for tqpair=0x7fb638000b90 repeated on every retry through 2024-12-11 10:06:37.108443, each ending with "qpair failed and we were unable to recover it."]
00:28:27.616 [2024-12-11 10:06:37.108695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.616 [2024-12-11 10:06:37.108766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.616 qpair failed and we were unable to recover it.
00:28:27.617 [the same pair for tqpair=0xfe4500 repeated on every retry through 2024-12-11 10:06:37.113734]
00:28:27.617 [2024-12-11 10:06:37.113889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.617 [2024-12-11 10:06:37.113958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:27.617 qpair failed and we were unable to recover it.
00:28:27.617 [2024-12-11 10:06:37.114177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.617 [2024-12-11 10:06:37.114227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.617 qpair failed and we were unable to recover it.
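The tqpair values printed by nvme_tcp_qpair_connect_sock are the addresses of the qpair objects being connected, so the shifts above (0x7fb62c000b90, then 0x7fb638000b90, briefly 0xfe4500 and 0x7fb630000b90) appear to mark freshly allocated qpairs as the test keeps re-attempting the controller connect; the destination 10.0.0.2:4420 and the errno = 111 result never change.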
00:28:27.619 [the same connect() failed / sock connection error pair for tqpair=0x7fb638000b90 repeated on every retry from 2024-12-11 10:06:37.114567 through 2024-12-11 10:06:37.126168, each ending with "qpair failed and we were unable to recover it."]
00:28:27.619 [2024-12-11 10:06:37.126293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.619 [2024-12-11 10:06:37.126328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.619 qpair failed and we were unable to recover it. 00:28:27.619 [2024-12-11 10:06:37.126445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.619 [2024-12-11 10:06:37.126478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.619 qpair failed and we were unable to recover it. 00:28:27.899 [2024-12-11 10:06:37.126672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.899 [2024-12-11 10:06:37.126705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.899 qpair failed and we were unable to recover it. 00:28:27.899 [2024-12-11 10:06:37.126883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.899 [2024-12-11 10:06:37.126916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.899 qpair failed and we were unable to recover it. 00:28:27.899 [2024-12-11 10:06:37.127033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.899 [2024-12-11 10:06:37.127067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.899 qpair failed and we were unable to recover it. 00:28:27.899 [2024-12-11 10:06:37.127253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.899 [2024-12-11 10:06:37.127289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.899 qpair failed and we were unable to recover it. 00:28:27.899 [2024-12-11 10:06:37.127394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.899 [2024-12-11 10:06:37.127428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.899 qpair failed and we were unable to recover it. 00:28:27.899 [2024-12-11 10:06:37.127609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.899 [2024-12-11 10:06:37.127642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.899 qpair failed and we were unable to recover it. 00:28:27.899 [2024-12-11 10:06:37.127880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.899 [2024-12-11 10:06:37.127914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.899 qpair failed and we were unable to recover it. 00:28:27.899 [2024-12-11 10:06:37.128036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.899 [2024-12-11 10:06:37.128071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.899 qpair failed and we were unable to recover it. 
00:28:27.899 [2024-12-11 10:06:37.128254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.899 [2024-12-11 10:06:37.128288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.899 qpair failed and we were unable to recover it. 00:28:27.899 [2024-12-11 10:06:37.128416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.899 [2024-12-11 10:06:37.128449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.899 qpair failed and we were unable to recover it. 00:28:27.899 [2024-12-11 10:06:37.128568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.899 [2024-12-11 10:06:37.128602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.899 qpair failed and we were unable to recover it. 00:28:27.899 [2024-12-11 10:06:37.128734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.899 [2024-12-11 10:06:37.128767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.899 qpair failed and we were unable to recover it. 00:28:27.899 [2024-12-11 10:06:37.128888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.899 [2024-12-11 10:06:37.128921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.899 qpair failed and we were unable to recover it. 00:28:27.899 [2024-12-11 10:06:37.129095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.899 [2024-12-11 10:06:37.129129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.899 qpair failed and we were unable to recover it. 00:28:27.900 [2024-12-11 10:06:37.129313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.900 [2024-12-11 10:06:37.129348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.900 qpair failed and we were unable to recover it. 00:28:27.900 [2024-12-11 10:06:37.129471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.900 [2024-12-11 10:06:37.129504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.900 qpair failed and we were unable to recover it. 00:28:27.900 [2024-12-11 10:06:37.129744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.900 [2024-12-11 10:06:37.129784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.900 qpair failed and we were unable to recover it. 00:28:27.900 [2024-12-11 10:06:37.129903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.900 [2024-12-11 10:06:37.129936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.900 qpair failed and we were unable to recover it. 
00:28:27.900 [2024-12-11 10:06:37.130114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.900 [2024-12-11 10:06:37.130148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.900 qpair failed and we were unable to recover it. 00:28:27.900 [2024-12-11 10:06:37.130284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.900 [2024-12-11 10:06:37.130319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.900 qpair failed and we were unable to recover it. 00:28:27.900 [2024-12-11 10:06:37.130501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.900 [2024-12-11 10:06:37.130534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.900 qpair failed and we were unable to recover it. 00:28:27.900 [2024-12-11 10:06:37.130653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.900 [2024-12-11 10:06:37.130686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.900 qpair failed and we were unable to recover it. 00:28:27.900 [2024-12-11 10:06:37.130804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.900 [2024-12-11 10:06:37.130838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.900 qpair failed and we were unable to recover it. 00:28:27.900 [2024-12-11 10:06:37.131013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.900 [2024-12-11 10:06:37.131046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.900 qpair failed and we were unable to recover it. 00:28:27.900 [2024-12-11 10:06:37.131238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.900 [2024-12-11 10:06:37.131274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.900 qpair failed and we were unable to recover it. 00:28:27.900 [2024-12-11 10:06:37.131398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.900 [2024-12-11 10:06:37.131432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.900 qpair failed and we were unable to recover it. 00:28:27.900 [2024-12-11 10:06:37.131622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.900 [2024-12-11 10:06:37.131656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.900 qpair failed and we were unable to recover it. 00:28:27.900 [2024-12-11 10:06:37.131829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.900 [2024-12-11 10:06:37.131863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.900 qpair failed and we were unable to recover it. 
00:28:27.900 [2024-12-11 10:06:37.131988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.900 [2024-12-11 10:06:37.132022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.900 qpair failed and we were unable to recover it. 00:28:27.900 [2024-12-11 10:06:37.132166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.900 [2024-12-11 10:06:37.132200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.900 qpair failed and we were unable to recover it. 00:28:27.900 [2024-12-11 10:06:37.132330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.900 [2024-12-11 10:06:37.132363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.900 qpair failed and we were unable to recover it. 00:28:27.900 [2024-12-11 10:06:37.132480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.900 [2024-12-11 10:06:37.132514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.900 qpair failed and we were unable to recover it. 00:28:27.900 [2024-12-11 10:06:37.132624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.900 [2024-12-11 10:06:37.132657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.900 qpair failed and we were unable to recover it. 00:28:27.900 [2024-12-11 10:06:37.132842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.900 [2024-12-11 10:06:37.132876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.900 qpair failed and we were unable to recover it. 00:28:27.900 [2024-12-11 10:06:37.133054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.900 [2024-12-11 10:06:37.133088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.900 qpair failed and we were unable to recover it. 00:28:27.900 [2024-12-11 10:06:37.133203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.900 [2024-12-11 10:06:37.133246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.900 qpair failed and we were unable to recover it. 00:28:27.900 [2024-12-11 10:06:37.133377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.900 [2024-12-11 10:06:37.133410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.900 qpair failed and we were unable to recover it. 00:28:27.900 [2024-12-11 10:06:37.133532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.900 [2024-12-11 10:06:37.133565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.900 qpair failed and we were unable to recover it. 
00:28:27.900 [2024-12-11 10:06:37.135351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.900 [2024-12-11 10:06:37.135410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.900 qpair failed and we were unable to recover it. 00:28:27.900 [2024-12-11 10:06:37.135551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.900 [2024-12-11 10:06:37.135585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.900 qpair failed and we were unable to recover it. 00:28:27.900 [2024-12-11 10:06:37.135702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.900 [2024-12-11 10:06:37.135733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.900 qpair failed and we were unable to recover it. 00:28:27.900 [2024-12-11 10:06:37.135935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.900 [2024-12-11 10:06:37.135980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.900 qpair failed and we were unable to recover it. 00:28:27.900 [2024-12-11 10:06:37.136166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.900 [2024-12-11 10:06:37.136197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.900 qpair failed and we were unable to recover it. 00:28:27.900 [2024-12-11 10:06:37.136478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.900 [2024-12-11 10:06:37.136536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.900 qpair failed and we were unable to recover it. 00:28:27.900 [2024-12-11 10:06:37.136660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.900 [2024-12-11 10:06:37.136694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.900 qpair failed and we were unable to recover it. 00:28:27.900 [2024-12-11 10:06:37.136882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.900 [2024-12-11 10:06:37.136915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.900 qpair failed and we were unable to recover it. 00:28:27.900 [2024-12-11 10:06:37.137054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.900 [2024-12-11 10:06:37.137087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.900 qpair failed and we were unable to recover it. 00:28:27.900 [2024-12-11 10:06:37.137285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.900 [2024-12-11 10:06:37.137322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.900 qpair failed and we were unable to recover it. 
00:28:27.900 [2024-12-11 10:06:37.137445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.900 [2024-12-11 10:06:37.137480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.900 qpair failed and we were unable to recover it. 00:28:27.900 [2024-12-11 10:06:37.137604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.900 [2024-12-11 10:06:37.137638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.900 qpair failed and we were unable to recover it. 00:28:27.900 [2024-12-11 10:06:37.137741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.900 [2024-12-11 10:06:37.137775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.900 qpair failed and we were unable to recover it. 00:28:27.900 [2024-12-11 10:06:37.137990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.900 [2024-12-11 10:06:37.138023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.900 qpair failed and we were unable to recover it. 00:28:27.900 [2024-12-11 10:06:37.138216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.900 [2024-12-11 10:06:37.138271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.900 qpair failed and we were unable to recover it. 00:28:27.901 [2024-12-11 10:06:37.138389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.901 [2024-12-11 10:06:37.138422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.901 qpair failed and we were unable to recover it. 00:28:27.901 [2024-12-11 10:06:37.138595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.901 [2024-12-11 10:06:37.138627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.901 qpair failed and we were unable to recover it. 00:28:27.901 [2024-12-11 10:06:37.138742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.901 [2024-12-11 10:06:37.138774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.901 qpair failed and we were unable to recover it. 00:28:27.901 [2024-12-11 10:06:37.138913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.901 [2024-12-11 10:06:37.138946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.901 qpair failed and we were unable to recover it. 00:28:27.901 [2024-12-11 10:06:37.139139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.901 [2024-12-11 10:06:37.139172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.901 qpair failed and we were unable to recover it. 
00:28:27.901 [2024-12-11 10:06:37.139283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.901 [2024-12-11 10:06:37.139318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.901 qpair failed and we were unable to recover it. 00:28:27.901 [2024-12-11 10:06:37.139535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.901 [2024-12-11 10:06:37.139569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.901 qpair failed and we were unable to recover it. 00:28:27.901 [2024-12-11 10:06:37.139687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.901 [2024-12-11 10:06:37.139719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.901 qpair failed and we were unable to recover it. 00:28:27.901 [2024-12-11 10:06:37.139958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.901 [2024-12-11 10:06:37.139991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.901 qpair failed and we were unable to recover it. 00:28:27.901 [2024-12-11 10:06:37.140169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.901 [2024-12-11 10:06:37.140202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.901 qpair failed and we were unable to recover it. 00:28:27.901 [2024-12-11 10:06:37.140359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.901 [2024-12-11 10:06:37.140392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.901 qpair failed and we were unable to recover it. 00:28:27.901 [2024-12-11 10:06:37.140515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.901 [2024-12-11 10:06:37.140549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.901 qpair failed and we were unable to recover it. 00:28:27.901 [2024-12-11 10:06:37.140754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.901 [2024-12-11 10:06:37.140787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.901 qpair failed and we were unable to recover it. 00:28:27.901 [2024-12-11 10:06:37.140897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.901 [2024-12-11 10:06:37.140930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.901 qpair failed and we were unable to recover it. 00:28:27.901 [2024-12-11 10:06:37.141125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.901 [2024-12-11 10:06:37.141158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.901 qpair failed and we were unable to recover it. 
00:28:27.901 [2024-12-11 10:06:37.141432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.901 [2024-12-11 10:06:37.141466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.901 qpair failed and we were unable to recover it. 00:28:27.901 [2024-12-11 10:06:37.141666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.901 [2024-12-11 10:06:37.141698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.901 qpair failed and we were unable to recover it. 00:28:27.901 [2024-12-11 10:06:37.141875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.901 [2024-12-11 10:06:37.141913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.901 qpair failed and we were unable to recover it. 00:28:27.901 [2024-12-11 10:06:37.142089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.901 [2024-12-11 10:06:37.142121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.901 qpair failed and we were unable to recover it. 00:28:27.901 [2024-12-11 10:06:37.142238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.901 [2024-12-11 10:06:37.142273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.901 qpair failed and we were unable to recover it. 00:28:27.901 [2024-12-11 10:06:37.142447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.901 [2024-12-11 10:06:37.142479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.901 qpair failed and we were unable to recover it. 00:28:27.901 [2024-12-11 10:06:37.142656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.901 [2024-12-11 10:06:37.142688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.901 qpair failed and we were unable to recover it. 00:28:27.901 [2024-12-11 10:06:37.142810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.901 [2024-12-11 10:06:37.142843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.901 qpair failed and we were unable to recover it. 00:28:27.901 [2024-12-11 10:06:37.142975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.901 [2024-12-11 10:06:37.143008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.901 qpair failed and we were unable to recover it. 00:28:27.901 [2024-12-11 10:06:37.143122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.901 [2024-12-11 10:06:37.143154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.901 qpair failed and we were unable to recover it. 
00:28:27.901 [2024-12-11 10:06:37.143330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.901 [2024-12-11 10:06:37.143364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.901 qpair failed and we were unable to recover it. 00:28:27.901 [2024-12-11 10:06:37.143549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.901 [2024-12-11 10:06:37.143581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.901 qpair failed and we were unable to recover it. 00:28:27.901 [2024-12-11 10:06:37.143760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.901 [2024-12-11 10:06:37.143793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.901 qpair failed and we were unable to recover it. 00:28:27.901 [2024-12-11 10:06:37.143909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.901 [2024-12-11 10:06:37.143941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.901 qpair failed and we were unable to recover it. 00:28:27.901 [2024-12-11 10:06:37.144060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.901 [2024-12-11 10:06:37.144092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.901 qpair failed and we were unable to recover it. 00:28:27.901 [2024-12-11 10:06:37.144489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.901 [2024-12-11 10:06:37.144530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.901 qpair failed and we were unable to recover it. 00:28:27.901 [2024-12-11 10:06:37.144734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.901 [2024-12-11 10:06:37.144767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.901 qpair failed and we were unable to recover it. 00:28:27.901 [2024-12-11 10:06:37.144960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.901 [2024-12-11 10:06:37.144993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.901 qpair failed and we were unable to recover it. 00:28:27.901 [2024-12-11 10:06:37.145104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.901 [2024-12-11 10:06:37.145136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.901 qpair failed and we were unable to recover it. 00:28:27.901 [2024-12-11 10:06:37.145257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.901 [2024-12-11 10:06:37.145292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.901 qpair failed and we were unable to recover it. 
00:28:27.901 [2024-12-11 10:06:37.145472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.901 [2024-12-11 10:06:37.145505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.901 qpair failed and we were unable to recover it. 00:28:27.901 [2024-12-11 10:06:37.145620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.901 [2024-12-11 10:06:37.145652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.901 qpair failed and we were unable to recover it. 00:28:27.901 [2024-12-11 10:06:37.145774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.901 [2024-12-11 10:06:37.145808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.901 qpair failed and we were unable to recover it. 00:28:27.901 [2024-12-11 10:06:37.146055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.901 [2024-12-11 10:06:37.146089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.901 qpair failed and we were unable to recover it. 00:28:27.901 [2024-12-11 10:06:37.146332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.901 [2024-12-11 10:06:37.146366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.902 qpair failed and we were unable to recover it. 00:28:27.902 [2024-12-11 10:06:37.146474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.902 [2024-12-11 10:06:37.146506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.902 qpair failed and we were unable to recover it. 00:28:27.902 [2024-12-11 10:06:37.146686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.902 [2024-12-11 10:06:37.146719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.902 qpair failed and we were unable to recover it. 00:28:27.902 [2024-12-11 10:06:37.146847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.902 [2024-12-11 10:06:37.146880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.902 qpair failed and we were unable to recover it. 00:28:27.902 [2024-12-11 10:06:37.147071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.902 [2024-12-11 10:06:37.147105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.902 qpair failed and we were unable to recover it. 00:28:27.902 [2024-12-11 10:06:37.147330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.902 [2024-12-11 10:06:37.147369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.902 qpair failed and we were unable to recover it. 
00:28:27.902 [2024-12-11 10:06:37.147548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.902 [2024-12-11 10:06:37.147580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.902 qpair failed and we were unable to recover it. 00:28:27.902 [2024-12-11 10:06:37.147694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.902 [2024-12-11 10:06:37.147726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.902 qpair failed and we were unable to recover it. 00:28:27.902 [2024-12-11 10:06:37.147844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.902 [2024-12-11 10:06:37.147878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.902 qpair failed and we were unable to recover it. 00:28:27.902 [2024-12-11 10:06:37.148062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.902 [2024-12-11 10:06:37.148095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.902 qpair failed and we were unable to recover it. 00:28:27.902 [2024-12-11 10:06:37.148279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.902 [2024-12-11 10:06:37.148313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.902 qpair failed and we were unable to recover it. 00:28:27.902 [2024-12-11 10:06:37.148496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.902 [2024-12-11 10:06:37.148528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.902 qpair failed and we were unable to recover it. 00:28:27.902 [2024-12-11 10:06:37.148632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.902 [2024-12-11 10:06:37.148664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.902 qpair failed and we were unable to recover it. 00:28:27.902 [2024-12-11 10:06:37.148779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.902 [2024-12-11 10:06:37.148812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.902 qpair failed and we were unable to recover it. 00:28:27.902 [2024-12-11 10:06:37.149060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.902 [2024-12-11 10:06:37.149093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.902 qpair failed and we were unable to recover it. 00:28:27.902 [2024-12-11 10:06:37.149286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.902 [2024-12-11 10:06:37.149320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.902 qpair failed and we were unable to recover it. 
00:28:27.902 [2024-12-11 10:06:37.149441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.902 [2024-12-11 10:06:37.149474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.902 qpair failed and we were unable to recover it. 00:28:27.902 [2024-12-11 10:06:37.149590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.902 [2024-12-11 10:06:37.149635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.902 qpair failed and we were unable to recover it. 00:28:27.902 [2024-12-11 10:06:37.149750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.902 [2024-12-11 10:06:37.149785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.902 qpair failed and we were unable to recover it. 00:28:27.902 [2024-12-11 10:06:37.149949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.902 [2024-12-11 10:06:37.150021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:27.902 qpair failed and we were unable to recover it. 00:28:27.902 [2024-12-11 10:06:37.150265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.902 [2024-12-11 10:06:37.150338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.902 qpair failed and we were unable to recover it. 00:28:27.902 [2024-12-11 10:06:37.150472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.902 [2024-12-11 10:06:37.150508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.902 qpair failed and we were unable to recover it. 00:28:27.902 [2024-12-11 10:06:37.150684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.902 [2024-12-11 10:06:37.150718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.902 qpair failed and we were unable to recover it. 00:28:27.902 [2024-12-11 10:06:37.150839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.902 [2024-12-11 10:06:37.150871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.902 qpair failed and we were unable to recover it. 00:28:27.902 [2024-12-11 10:06:37.151043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.902 [2024-12-11 10:06:37.151076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.902 qpair failed and we were unable to recover it. 00:28:27.902 [2024-12-11 10:06:37.151191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.902 [2024-12-11 10:06:37.151235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.902 qpair failed and we were unable to recover it. 
00:28:27.902 [2024-12-11 10:06:37.151364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.902 [2024-12-11 10:06:37.151397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.902 qpair failed and we were unable to recover it. 00:28:27.902 [2024-12-11 10:06:37.151574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.902 [2024-12-11 10:06:37.151608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.902 qpair failed and we were unable to recover it. 00:28:27.902 [2024-12-11 10:06:37.151801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.902 [2024-12-11 10:06:37.151835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.902 qpair failed and we were unable to recover it. 00:28:27.902 [2024-12-11 10:06:37.152079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.902 [2024-12-11 10:06:37.152113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.902 qpair failed and we were unable to recover it. 00:28:27.902 [2024-12-11 10:06:37.152242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.902 [2024-12-11 10:06:37.152278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.902 qpair failed and we were unable to recover it. 00:28:27.902 [2024-12-11 10:06:37.152466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.902 [2024-12-11 10:06:37.152500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.902 qpair failed and we were unable to recover it. 00:28:27.902 [2024-12-11 10:06:37.152694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.902 [2024-12-11 10:06:37.152737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.902 qpair failed and we were unable to recover it. 00:28:27.902 [2024-12-11 10:06:37.152848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.902 [2024-12-11 10:06:37.152881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.902 qpair failed and we were unable to recover it. 00:28:27.902 [2024-12-11 10:06:37.153126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.902 [2024-12-11 10:06:37.153159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.902 qpair failed and we were unable to recover it. 00:28:27.902 [2024-12-11 10:06:37.153290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.902 [2024-12-11 10:06:37.153323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.902 qpair failed and we were unable to recover it. 
00:28:27.902 [2024-12-11 10:06:37.153568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.902 [2024-12-11 10:06:37.153601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.902 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every reconnect attempt between 10:06:37.153 and 10:06:37.198 ...]
00:28:27.909 [2024-12-11 10:06:37.198600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.909 [2024-12-11 10:06:37.198634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.909 qpair failed and we were unable to recover it.
00:28:27.909 [2024-12-11 10:06:37.198766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.909 [2024-12-11 10:06:37.198799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.909 qpair failed and we were unable to recover it. 00:28:27.909 [2024-12-11 10:06:37.198911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.909 [2024-12-11 10:06:37.198943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.909 qpair failed and we were unable to recover it. 00:28:27.909 [2024-12-11 10:06:37.199057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.909 [2024-12-11 10:06:37.199089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.909 qpair failed and we were unable to recover it. 00:28:27.909 [2024-12-11 10:06:37.199195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.909 [2024-12-11 10:06:37.199237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.909 qpair failed and we were unable to recover it. 00:28:27.909 [2024-12-11 10:06:37.199364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.909 [2024-12-11 10:06:37.199399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.909 qpair failed and we were unable to recover it. 00:28:27.909 [2024-12-11 10:06:37.199505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.909 [2024-12-11 10:06:37.199538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.909 qpair failed and we were unable to recover it. 00:28:27.909 [2024-12-11 10:06:37.199739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.909 [2024-12-11 10:06:37.199773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.909 qpair failed and we were unable to recover it. 00:28:27.909 [2024-12-11 10:06:37.199949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.909 [2024-12-11 10:06:37.199981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.909 qpair failed and we were unable to recover it. 00:28:27.909 [2024-12-11 10:06:37.200169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.909 [2024-12-11 10:06:37.200201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.909 qpair failed and we were unable to recover it. 00:28:27.909 [2024-12-11 10:06:37.200386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.909 [2024-12-11 10:06:37.200418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.909 qpair failed and we were unable to recover it. 
00:28:27.909 [2024-12-11 10:06:37.200594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.909 [2024-12-11 10:06:37.200627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.909 qpair failed and we were unable to recover it. 00:28:27.909 [2024-12-11 10:06:37.200755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.909 [2024-12-11 10:06:37.200788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.909 qpair failed and we were unable to recover it. 00:28:27.909 [2024-12-11 10:06:37.200919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.909 [2024-12-11 10:06:37.200951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.909 qpair failed and we were unable to recover it. 00:28:27.909 [2024-12-11 10:06:37.201059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.909 [2024-12-11 10:06:37.201098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.909 qpair failed and we were unable to recover it. 00:28:27.909 [2024-12-11 10:06:37.202421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.909 [2024-12-11 10:06:37.202473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.909 qpair failed and we were unable to recover it. 00:28:27.909 [2024-12-11 10:06:37.202678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.909 [2024-12-11 10:06:37.202713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.909 qpair failed and we were unable to recover it. 00:28:27.909 [2024-12-11 10:06:37.202856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.909 [2024-12-11 10:06:37.202890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.909 qpair failed and we were unable to recover it. 00:28:27.909 [2024-12-11 10:06:37.203028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.909 [2024-12-11 10:06:37.203061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.909 qpair failed and we were unable to recover it. 00:28:27.909 [2024-12-11 10:06:37.203192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.909 [2024-12-11 10:06:37.203299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.909 qpair failed and we were unable to recover it. 00:28:27.909 [2024-12-11 10:06:37.203432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.909 [2024-12-11 10:06:37.203464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.909 qpair failed and we were unable to recover it. 
00:28:27.909 [2024-12-11 10:06:37.203701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.909 [2024-12-11 10:06:37.203734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.909 qpair failed and we were unable to recover it. 00:28:27.909 [2024-12-11 10:06:37.203930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.909 [2024-12-11 10:06:37.203963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.909 qpair failed and we were unable to recover it. 00:28:27.909 [2024-12-11 10:06:37.204150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.909 [2024-12-11 10:06:37.204184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.909 qpair failed and we were unable to recover it. 00:28:27.909 [2024-12-11 10:06:37.204308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.909 [2024-12-11 10:06:37.204341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.909 qpair failed and we were unable to recover it. 00:28:27.909 [2024-12-11 10:06:37.204551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.909 [2024-12-11 10:06:37.204582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.909 qpair failed and we were unable to recover it. 00:28:27.909 [2024-12-11 10:06:37.205870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.909 [2024-12-11 10:06:37.205920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.909 qpair failed and we were unable to recover it. 00:28:27.909 [2024-12-11 10:06:37.206136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.909 [2024-12-11 10:06:37.206169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.910 qpair failed and we were unable to recover it. 00:28:27.910 [2024-12-11 10:06:37.206460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.910 [2024-12-11 10:06:37.206495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.910 qpair failed and we were unable to recover it. 00:28:27.910 [2024-12-11 10:06:37.206610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.910 [2024-12-11 10:06:37.206642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.910 qpair failed and we were unable to recover it. 00:28:27.910 [2024-12-11 10:06:37.206850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.910 [2024-12-11 10:06:37.206878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.910 qpair failed and we were unable to recover it. 
00:28:27.910 [2024-12-11 10:06:37.207073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.910 [2024-12-11 10:06:37.207101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.910 qpair failed and we were unable to recover it. 00:28:27.910 [2024-12-11 10:06:37.207209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.910 [2024-12-11 10:06:37.207256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.910 qpair failed and we were unable to recover it. 00:28:27.910 [2024-12-11 10:06:37.207430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.910 [2024-12-11 10:06:37.207458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.910 qpair failed and we were unable to recover it. 00:28:27.910 [2024-12-11 10:06:37.207645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.910 [2024-12-11 10:06:37.207672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.910 qpair failed and we were unable to recover it. 00:28:27.910 [2024-12-11 10:06:37.207856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.910 [2024-12-11 10:06:37.207883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.910 qpair failed and we were unable to recover it. 00:28:27.910 [2024-12-11 10:06:37.208004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.910 [2024-12-11 10:06:37.208031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.910 qpair failed and we were unable to recover it. 00:28:27.910 [2024-12-11 10:06:37.208140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.910 [2024-12-11 10:06:37.208168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.910 qpair failed and we were unable to recover it. 00:28:27.910 [2024-12-11 10:06:37.208355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.910 [2024-12-11 10:06:37.208384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.910 qpair failed and we were unable to recover it. 00:28:27.910 [2024-12-11 10:06:37.208496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.910 [2024-12-11 10:06:37.208524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.910 qpair failed and we were unable to recover it. 00:28:27.910 [2024-12-11 10:06:37.209676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.910 [2024-12-11 10:06:37.209722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.910 qpair failed and we were unable to recover it. 
00:28:27.910 [2024-12-11 10:06:37.210021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.910 [2024-12-11 10:06:37.210050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.910 qpair failed and we were unable to recover it. 00:28:27.910 [2024-12-11 10:06:37.210170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.910 [2024-12-11 10:06:37.210197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.910 qpair failed and we were unable to recover it. 00:28:27.910 [2024-12-11 10:06:37.210396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.910 [2024-12-11 10:06:37.210424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.910 qpair failed and we were unable to recover it. 00:28:27.910 [2024-12-11 10:06:37.210609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.910 [2024-12-11 10:06:37.210636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.910 qpair failed and we were unable to recover it. 00:28:27.910 [2024-12-11 10:06:37.210743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.910 [2024-12-11 10:06:37.210768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.910 qpair failed and we were unable to recover it. 00:28:27.910 [2024-12-11 10:06:37.210976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.910 [2024-12-11 10:06:37.211009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.910 qpair failed and we were unable to recover it. 00:28:27.910 [2024-12-11 10:06:37.211189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.910 [2024-12-11 10:06:37.211233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.910 qpair failed and we were unable to recover it. 00:28:27.910 [2024-12-11 10:06:37.211420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.910 [2024-12-11 10:06:37.211454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.910 qpair failed and we were unable to recover it. 00:28:27.910 [2024-12-11 10:06:37.211576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.910 [2024-12-11 10:06:37.211604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.910 qpair failed and we were unable to recover it. 00:28:27.910 [2024-12-11 10:06:37.211707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.910 [2024-12-11 10:06:37.211735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.910 qpair failed and we were unable to recover it. 
00:28:27.910 [2024-12-11 10:06:37.211965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.910 [2024-12-11 10:06:37.211993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.910 qpair failed and we were unable to recover it. 00:28:27.910 [2024-12-11 10:06:37.212160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.910 [2024-12-11 10:06:37.212189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.910 qpair failed and we were unable to recover it. 00:28:27.910 [2024-12-11 10:06:37.212299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.910 [2024-12-11 10:06:37.212327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.910 qpair failed and we were unable to recover it. 00:28:27.910 [2024-12-11 10:06:37.212442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.910 [2024-12-11 10:06:37.212475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.910 qpair failed and we were unable to recover it. 00:28:27.910 [2024-12-11 10:06:37.212588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.910 [2024-12-11 10:06:37.212615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.910 qpair failed and we were unable to recover it. 00:28:27.910 [2024-12-11 10:06:37.212822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.910 [2024-12-11 10:06:37.212850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.910 qpair failed and we were unable to recover it. 00:28:27.910 [2024-12-11 10:06:37.213019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.910 [2024-12-11 10:06:37.213048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.910 qpair failed and we were unable to recover it. 00:28:27.910 [2024-12-11 10:06:37.213241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.910 [2024-12-11 10:06:37.213270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.910 qpair failed and we were unable to recover it. 00:28:27.910 [2024-12-11 10:06:37.213527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.910 [2024-12-11 10:06:37.213556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.910 qpair failed and we were unable to recover it. 00:28:27.910 [2024-12-11 10:06:37.213664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.911 [2024-12-11 10:06:37.213692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.911 qpair failed and we were unable to recover it. 
00:28:27.911 [2024-12-11 10:06:37.213806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.911 [2024-12-11 10:06:37.213834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.911 qpair failed and we were unable to recover it. 00:28:27.911 [2024-12-11 10:06:37.213948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.911 [2024-12-11 10:06:37.213973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.911 qpair failed and we were unable to recover it. 00:28:27.911 [2024-12-11 10:06:37.214076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.911 [2024-12-11 10:06:37.214101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.911 qpair failed and we were unable to recover it. 00:28:27.911 [2024-12-11 10:06:37.214198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.911 [2024-12-11 10:06:37.214231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.911 qpair failed and we were unable to recover it. 00:28:27.911 [2024-12-11 10:06:37.214463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.911 [2024-12-11 10:06:37.214490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.911 qpair failed and we were unable to recover it. 00:28:27.911 [2024-12-11 10:06:37.214720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.911 [2024-12-11 10:06:37.214746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.911 qpair failed and we were unable to recover it. 00:28:27.911 [2024-12-11 10:06:37.214861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.911 [2024-12-11 10:06:37.214886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.911 qpair failed and we were unable to recover it. 00:28:27.911 [2024-12-11 10:06:37.215056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.911 [2024-12-11 10:06:37.215084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.911 qpair failed and we were unable to recover it. 00:28:27.911 [2024-12-11 10:06:37.215190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.911 [2024-12-11 10:06:37.215272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.911 qpair failed and we were unable to recover it. 00:28:27.911 [2024-12-11 10:06:37.215455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.911 [2024-12-11 10:06:37.215483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.911 qpair failed and we were unable to recover it. 
00:28:27.911 [2024-12-11 10:06:37.215573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.911 [2024-12-11 10:06:37.215598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.911 qpair failed and we were unable to recover it. 00:28:27.911 [2024-12-11 10:06:37.215824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.911 [2024-12-11 10:06:37.215853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.911 qpair failed and we were unable to recover it. 00:28:27.911 [2024-12-11 10:06:37.216035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.911 [2024-12-11 10:06:37.216064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.911 qpair failed and we were unable to recover it. 00:28:27.911 [2024-12-11 10:06:37.216303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.911 [2024-12-11 10:06:37.216332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.911 qpair failed and we were unable to recover it. 00:28:27.911 [2024-12-11 10:06:37.216442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.911 [2024-12-11 10:06:37.216467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.911 qpair failed and we were unable to recover it. 00:28:27.911 [2024-12-11 10:06:37.216646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.911 [2024-12-11 10:06:37.216674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.911 qpair failed and we were unable to recover it. 00:28:27.911 [2024-12-11 10:06:37.216853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.911 [2024-12-11 10:06:37.216880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.911 qpair failed and we were unable to recover it. 00:28:27.911 [2024-12-11 10:06:37.216982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.911 [2024-12-11 10:06:37.217007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.911 qpair failed and we were unable to recover it. 00:28:27.911 [2024-12-11 10:06:37.217118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.911 [2024-12-11 10:06:37.217147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.911 qpair failed and we were unable to recover it. 00:28:27.911 [2024-12-11 10:06:37.217242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.911 [2024-12-11 10:06:37.217269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.911 qpair failed and we were unable to recover it. 
00:28:27.911 [2024-12-11 10:06:37.217381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.911 [2024-12-11 10:06:37.217406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.911 qpair failed and we were unable to recover it. 00:28:27.911 [2024-12-11 10:06:37.217572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.911 [2024-12-11 10:06:37.217600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.911 qpair failed and we were unable to recover it. 00:28:27.911 [2024-12-11 10:06:37.217723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.911 [2024-12-11 10:06:37.217749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.911 qpair failed and we were unable to recover it. 00:28:27.911 [2024-12-11 10:06:37.217865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.911 [2024-12-11 10:06:37.217892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.911 qpair failed and we were unable to recover it. 00:28:27.911 [2024-12-11 10:06:37.217998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.911 [2024-12-11 10:06:37.218023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.911 qpair failed and we were unable to recover it. 00:28:27.911 [2024-12-11 10:06:37.218133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.911 [2024-12-11 10:06:37.218161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.911 qpair failed and we were unable to recover it. 00:28:27.911 [2024-12-11 10:06:37.218385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.911 [2024-12-11 10:06:37.218415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.911 qpair failed and we were unable to recover it. 00:28:27.911 [2024-12-11 10:06:37.218513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.911 [2024-12-11 10:06:37.218538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.911 qpair failed and we were unable to recover it. 00:28:27.911 [2024-12-11 10:06:37.218643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.911 [2024-12-11 10:06:37.218669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.911 qpair failed and we were unable to recover it. 00:28:27.911 [2024-12-11 10:06:37.218839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.911 [2024-12-11 10:06:37.218867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.911 qpair failed and we were unable to recover it. 
00:28:27.911 [2024-12-11 10:06:37.219049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.911 [2024-12-11 10:06:37.219077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.911 qpair failed and we were unable to recover it. 00:28:27.911 [2024-12-11 10:06:37.219280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.911 [2024-12-11 10:06:37.219311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.911 qpair failed and we were unable to recover it. 00:28:27.911 [2024-12-11 10:06:37.219484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.911 [2024-12-11 10:06:37.219512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.911 qpair failed and we were unable to recover it. 00:28:27.911 [2024-12-11 10:06:37.219676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.911 [2024-12-11 10:06:37.219708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.911 qpair failed and we were unable to recover it. 00:28:27.911 [2024-12-11 10:06:37.219817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.911 [2024-12-11 10:06:37.219844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.911 qpair failed and we were unable to recover it. 00:28:27.912 [2024-12-11 10:06:37.220028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.912 [2024-12-11 10:06:37.220056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.912 qpair failed and we were unable to recover it. 00:28:27.912 [2024-12-11 10:06:37.220235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.912 [2024-12-11 10:06:37.220264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.912 qpair failed and we were unable to recover it. 00:28:27.912 [2024-12-11 10:06:37.220385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.912 [2024-12-11 10:06:37.220412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.912 qpair failed and we were unable to recover it. 00:28:27.912 [2024-12-11 10:06:37.220597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.912 [2024-12-11 10:06:37.220624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.912 qpair failed and we were unable to recover it. 00:28:27.912 [2024-12-11 10:06:37.220786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.912 [2024-12-11 10:06:37.220814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.912 qpair failed and we were unable to recover it. 
00:28:27.912 [2024-12-11 10:06:37.220920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.912 [2024-12-11 10:06:37.220948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.912 qpair failed and we were unable to recover it. 00:28:27.912 [2024-12-11 10:06:37.221113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.912 [2024-12-11 10:06:37.221140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.912 qpair failed and we were unable to recover it. 00:28:27.912 [2024-12-11 10:06:37.221258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.912 [2024-12-11 10:06:37.221286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.912 qpair failed and we were unable to recover it. 00:28:27.912 [2024-12-11 10:06:37.221449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.912 [2024-12-11 10:06:37.221476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.912 qpair failed and we were unable to recover it. 00:28:27.912 [2024-12-11 10:06:37.221647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.912 [2024-12-11 10:06:37.221674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.912 qpair failed and we were unable to recover it. 00:28:27.912 [2024-12-11 10:06:37.221797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.912 [2024-12-11 10:06:37.221824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.912 qpair failed and we were unable to recover it. 00:28:27.912 [2024-12-11 10:06:37.222038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.912 [2024-12-11 10:06:37.222066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.912 qpair failed and we were unable to recover it. 00:28:27.912 [2024-12-11 10:06:37.222181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.912 [2024-12-11 10:06:37.222208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.912 qpair failed and we were unable to recover it. 00:28:27.912 [2024-12-11 10:06:37.222393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.912 [2024-12-11 10:06:37.222420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.912 qpair failed and we were unable to recover it. 00:28:27.912 [2024-12-11 10:06:37.222520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.912 [2024-12-11 10:06:37.222547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.912 qpair failed and we were unable to recover it. 
00:28:27.912 [2024-12-11 10:06:37.222744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.912 [2024-12-11 10:06:37.222773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.912 qpair failed and we were unable to recover it. 00:28:27.912 [2024-12-11 10:06:37.223026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.912 [2024-12-11 10:06:37.223053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.912 qpair failed and we were unable to recover it. 00:28:27.912 [2024-12-11 10:06:37.223245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.912 [2024-12-11 10:06:37.223273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.912 qpair failed and we were unable to recover it. 00:28:27.912 [2024-12-11 10:06:37.223378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.912 [2024-12-11 10:06:37.223405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.912 qpair failed and we were unable to recover it. 00:28:27.912 [2024-12-11 10:06:37.223522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.912 [2024-12-11 10:06:37.223551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.912 qpair failed and we were unable to recover it. 00:28:27.912 [2024-12-11 10:06:37.223656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.912 [2024-12-11 10:06:37.223681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.912 qpair failed and we were unable to recover it. 00:28:27.912 [2024-12-11 10:06:37.223793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.912 [2024-12-11 10:06:37.223821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.912 qpair failed and we were unable to recover it. 00:28:27.912 [2024-12-11 10:06:37.223917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.912 [2024-12-11 10:06:37.223942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.912 qpair failed and we were unable to recover it. 00:28:27.912 [2024-12-11 10:06:37.224100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.912 [2024-12-11 10:06:37.224129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.912 qpair failed and we were unable to recover it. 00:28:27.912 [2024-12-11 10:06:37.224323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.912 [2024-12-11 10:06:37.224352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.912 qpair failed and we were unable to recover it. 
00:28:27.912 [2024-12-11 10:06:37.224485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.912 [2024-12-11 10:06:37.224512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.912 qpair failed and we were unable to recover it. 00:28:27.912 [2024-12-11 10:06:37.224638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.912 [2024-12-11 10:06:37.224665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.912 qpair failed and we were unable to recover it. 00:28:27.912 [2024-12-11 10:06:37.224846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.912 [2024-12-11 10:06:37.224874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.912 qpair failed and we were unable to recover it. 00:28:27.912 [2024-12-11 10:06:37.225049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.912 [2024-12-11 10:06:37.225076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.912 qpair failed and we were unable to recover it. 00:28:27.912 [2024-12-11 10:06:37.225187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.912 [2024-12-11 10:06:37.225212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.912 qpair failed and we were unable to recover it. 00:28:27.912 [2024-12-11 10:06:37.225321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.912 [2024-12-11 10:06:37.225348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.912 qpair failed and we were unable to recover it. 00:28:27.912 [2024-12-11 10:06:37.225510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.912 [2024-12-11 10:06:37.225537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.912 qpair failed and we were unable to recover it. 00:28:27.912 [2024-12-11 10:06:37.225719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.912 [2024-12-11 10:06:37.225747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.912 qpair failed and we were unable to recover it. 00:28:27.912 [2024-12-11 10:06:37.225975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.912 [2024-12-11 10:06:37.226005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.912 qpair failed and we were unable to recover it. 00:28:27.912 [2024-12-11 10:06:37.226113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.913 [2024-12-11 10:06:37.226138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.913 qpair failed and we were unable to recover it. 
00:28:27.913 [2024-12-11 10:06:37.226342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.913 [2024-12-11 10:06:37.226371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.913 qpair failed and we were unable to recover it.
00:28:27.916 [the same three-line error (connect() failed, errno = 111 / sock connection error / qpair failed and we were unable to recover it) repeats for 110 consecutive attempts on tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420, from 10:06:37.226342 through 10:06:37.249680]
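For reference: errno 111 on Linux is ECONNREFUSED, meaning the TCP connection to 10.0.0.2 port 4420 (4420 is the standard NVMe/TCP port) was actively refused, which typically indicates nothing was listening on that port at the time. Below is a minimal sketch of the failing condition using plain BSD sockets; it is an illustration only, not SPDK's posix_sock_create(), with the address and port copied from the log:

    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        /* Endpoint taken from the log above; nothing here is SPDK code. */
        struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
            /* With no listener bound to the port, errno is ECONNREFUSED
             * (111 on Linux), matching the "connect() failed, errno = 111"
             * entries in the log. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }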
00:28:27.916 [the identical connect() failed (errno = 111) / sock connection error / qpair failed triple then repeats for 40 consecutive attempts on a new qpair, tqpair=0x7fb630000b90, with addr=10.0.0.2, port=4420, from 10:06:37.250063 through 10:06:37.259500; none recover]
00:28:27.917 [the same triple repeats a further 60 times for tqpair=0xfe4500 with addr=10.0.0.2, port=4420, from 10:06:37.259943 through 10:06:37.273555; every attempt fails with errno = 111 and the qpair is not recovered]
00:28:27.919 [2024-12-11 10:06:37.273720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.919 [2024-12-11 10:06:37.273757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.919 qpair failed and we were unable to recover it. 00:28:27.919 [2024-12-11 10:06:37.273889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.919 [2024-12-11 10:06:37.273924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.919 qpair failed and we were unable to recover it. 00:28:27.919 [2024-12-11 10:06:37.274176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.919 [2024-12-11 10:06:37.274214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.919 qpair failed and we were unable to recover it. 00:28:27.919 [2024-12-11 10:06:37.274430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.919 [2024-12-11 10:06:37.274464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.919 qpair failed and we were unable to recover it. 00:28:27.919 [2024-12-11 10:06:37.274602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.919 [2024-12-11 10:06:37.274635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.919 qpair failed and we were unable to recover it. 00:28:27.919 [2024-12-11 10:06:37.274900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.919 [2024-12-11 10:06:37.274938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.919 qpair failed and we were unable to recover it. 00:28:27.919 [2024-12-11 10:06:37.275197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.919 [2024-12-11 10:06:37.275240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.919 qpair failed and we were unable to recover it. 00:28:27.919 [2024-12-11 10:06:37.275427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.919 [2024-12-11 10:06:37.275463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.919 qpair failed and we were unable to recover it. 00:28:27.919 [2024-12-11 10:06:37.275717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.919 [2024-12-11 10:06:37.275757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.919 qpair failed and we were unable to recover it. 00:28:27.919 [2024-12-11 10:06:37.276057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.919 [2024-12-11 10:06:37.276093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.919 qpair failed and we were unable to recover it. 
00:28:27.919 [2024-12-11 10:06:37.276277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.919 [2024-12-11 10:06:37.276311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.919 qpair failed and we were unable to recover it. 00:28:27.919 [2024-12-11 10:06:37.276450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.919 [2024-12-11 10:06:37.276483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.919 qpair failed and we were unable to recover it. 00:28:27.919 [2024-12-11 10:06:37.276734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.919 [2024-12-11 10:06:37.276771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.919 qpair failed and we were unable to recover it. 00:28:27.919 [2024-12-11 10:06:37.277056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.919 [2024-12-11 10:06:37.277090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.919 qpair failed and we were unable to recover it. 00:28:27.919 [2024-12-11 10:06:37.277358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.919 [2024-12-11 10:06:37.277395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.919 qpair failed and we were unable to recover it. 00:28:27.919 [2024-12-11 10:06:37.277609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.919 [2024-12-11 10:06:37.277645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.919 qpair failed and we were unable to recover it. 00:28:27.919 [2024-12-11 10:06:37.277895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.919 [2024-12-11 10:06:37.277937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.919 qpair failed and we were unable to recover it. 00:28:27.919 [2024-12-11 10:06:37.278068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.919 [2024-12-11 10:06:37.278101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.919 qpair failed and we were unable to recover it. 00:28:27.919 [2024-12-11 10:06:37.278325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.919 [2024-12-11 10:06:37.278361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.919 qpair failed and we were unable to recover it. 00:28:27.919 [2024-12-11 10:06:37.278550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.919 [2024-12-11 10:06:37.278586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.919 qpair failed and we were unable to recover it. 
00:28:27.919 [2024-12-11 10:06:37.278724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.919 [2024-12-11 10:06:37.278757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.919 qpair failed and we were unable to recover it. 00:28:27.919 [2024-12-11 10:06:37.279062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.919 [2024-12-11 10:06:37.279096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.919 qpair failed and we were unable to recover it. 00:28:27.919 [2024-12-11 10:06:37.279328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.919 [2024-12-11 10:06:37.279362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.919 qpair failed and we were unable to recover it. 00:28:27.919 [2024-12-11 10:06:37.279556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.919 [2024-12-11 10:06:37.279590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.919 qpair failed and we were unable to recover it. 00:28:27.919 [2024-12-11 10:06:37.279882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.919 [2024-12-11 10:06:37.279915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.919 qpair failed and we were unable to recover it. 00:28:27.919 [2024-12-11 10:06:37.280207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.919 [2024-12-11 10:06:37.280263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.919 qpair failed and we were unable to recover it. 00:28:27.919 [2024-12-11 10:06:37.280480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.919 [2024-12-11 10:06:37.280513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.919 qpair failed and we were unable to recover it. 00:28:27.919 [2024-12-11 10:06:37.280639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.920 [2024-12-11 10:06:37.280673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.920 qpair failed and we were unable to recover it. 00:28:27.920 [2024-12-11 10:06:37.280818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.920 [2024-12-11 10:06:37.280851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.920 qpair failed and we were unable to recover it. 00:28:27.920 [2024-12-11 10:06:37.281113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.920 [2024-12-11 10:06:37.281148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.920 qpair failed and we were unable to recover it. 
00:28:27.920 [2024-12-11 10:06:37.281313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.920 [2024-12-11 10:06:37.281349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.920 qpair failed and we were unable to recover it. 00:28:27.920 [2024-12-11 10:06:37.281483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.920 [2024-12-11 10:06:37.281517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.920 qpair failed and we were unable to recover it. 00:28:27.920 [2024-12-11 10:06:37.281695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.920 [2024-12-11 10:06:37.281728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.920 qpair failed and we were unable to recover it. 00:28:27.920 [2024-12-11 10:06:37.282022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.920 [2024-12-11 10:06:37.282055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.920 qpair failed and we were unable to recover it. 00:28:27.920 [2024-12-11 10:06:37.282295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.920 [2024-12-11 10:06:37.282329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.920 qpair failed and we were unable to recover it. 00:28:27.920 [2024-12-11 10:06:37.282438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.920 [2024-12-11 10:06:37.282477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.920 qpair failed and we were unable to recover it. 00:28:27.920 [2024-12-11 10:06:37.282620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.920 [2024-12-11 10:06:37.282653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.920 qpair failed and we were unable to recover it. 00:28:27.920 [2024-12-11 10:06:37.282830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.920 [2024-12-11 10:06:37.282864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.920 qpair failed and we were unable to recover it. 00:28:27.920 [2024-12-11 10:06:37.283126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.920 [2024-12-11 10:06:37.283159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.920 qpair failed and we were unable to recover it. 00:28:27.920 [2024-12-11 10:06:37.283312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.920 [2024-12-11 10:06:37.283346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.920 qpair failed and we were unable to recover it. 
00:28:27.920 [2024-12-11 10:06:37.283468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.920 [2024-12-11 10:06:37.283502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.920 qpair failed and we were unable to recover it. 00:28:27.920 [2024-12-11 10:06:37.283692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.920 [2024-12-11 10:06:37.283724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.920 qpair failed and we were unable to recover it. 00:28:27.920 [2024-12-11 10:06:37.284030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.920 [2024-12-11 10:06:37.284064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.920 qpair failed and we were unable to recover it. 00:28:27.920 [2024-12-11 10:06:37.284306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.920 [2024-12-11 10:06:37.284342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.920 qpair failed and we were unable to recover it. 00:28:27.920 [2024-12-11 10:06:37.284623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.920 [2024-12-11 10:06:37.284657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.920 qpair failed and we were unable to recover it. 00:28:27.920 [2024-12-11 10:06:37.284874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.920 [2024-12-11 10:06:37.284908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.920 qpair failed and we were unable to recover it. 00:28:27.920 [2024-12-11 10:06:37.285123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.920 [2024-12-11 10:06:37.285158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.920 qpair failed and we were unable to recover it. 00:28:27.920 [2024-12-11 10:06:37.285365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.920 [2024-12-11 10:06:37.285400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.920 qpair failed and we were unable to recover it. 00:28:27.920 [2024-12-11 10:06:37.285530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.920 [2024-12-11 10:06:37.285564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.920 qpair failed and we were unable to recover it. 00:28:27.920 [2024-12-11 10:06:37.285827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.920 [2024-12-11 10:06:37.285862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.920 qpair failed and we were unable to recover it. 
00:28:27.920 [2024-12-11 10:06:37.286118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.920 [2024-12-11 10:06:37.286153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.920 qpair failed and we were unable to recover it. 00:28:27.920 [2024-12-11 10:06:37.286450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.920 [2024-12-11 10:06:37.286486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.920 qpair failed and we were unable to recover it. 00:28:27.920 [2024-12-11 10:06:37.286615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.920 [2024-12-11 10:06:37.286649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.920 qpair failed and we were unable to recover it. 00:28:27.920 [2024-12-11 10:06:37.286854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.920 [2024-12-11 10:06:37.286888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.920 qpair failed and we were unable to recover it. 00:28:27.920 [2024-12-11 10:06:37.287154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.920 [2024-12-11 10:06:37.287188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.920 qpair failed and we were unable to recover it. 00:28:27.920 [2024-12-11 10:06:37.287479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.920 [2024-12-11 10:06:37.287513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.920 qpair failed and we were unable to recover it. 00:28:27.920 [2024-12-11 10:06:37.287710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.920 [2024-12-11 10:06:37.287745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.920 qpair failed and we were unable to recover it. 00:28:27.920 [2024-12-11 10:06:37.287963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.920 [2024-12-11 10:06:37.287997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.920 qpair failed and we were unable to recover it. 00:28:27.920 [2024-12-11 10:06:37.288140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.920 [2024-12-11 10:06:37.288173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.920 qpair failed and we were unable to recover it. 00:28:27.920 [2024-12-11 10:06:37.288394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.920 [2024-12-11 10:06:37.288428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.920 qpair failed and we were unable to recover it. 
00:28:27.920 [2024-12-11 10:06:37.288558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.920 [2024-12-11 10:06:37.288592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.920 qpair failed and we were unable to recover it. 00:28:27.920 [2024-12-11 10:06:37.288719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.920 [2024-12-11 10:06:37.288753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.920 qpair failed and we were unable to recover it. 00:28:27.920 [2024-12-11 10:06:37.289031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.921 [2024-12-11 10:06:37.289064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.921 qpair failed and we were unable to recover it. 00:28:27.921 [2024-12-11 10:06:37.289203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.921 [2024-12-11 10:06:37.289250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.921 qpair failed and we were unable to recover it. 00:28:27.921 [2024-12-11 10:06:37.289450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.921 [2024-12-11 10:06:37.289484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.921 qpair failed and we were unable to recover it. 00:28:27.921 [2024-12-11 10:06:37.289615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.921 [2024-12-11 10:06:37.289648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.921 qpair failed and we were unable to recover it. 00:28:27.921 [2024-12-11 10:06:37.289778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.921 [2024-12-11 10:06:37.289811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.921 qpair failed and we were unable to recover it. 00:28:27.921 [2024-12-11 10:06:37.289931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.921 [2024-12-11 10:06:37.289965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.921 qpair failed and we were unable to recover it. 00:28:27.921 [2024-12-11 10:06:37.290202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.921 [2024-12-11 10:06:37.290245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.921 qpair failed and we were unable to recover it. 00:28:27.921 [2024-12-11 10:06:37.290381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.921 [2024-12-11 10:06:37.290414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.921 qpair failed and we were unable to recover it. 
00:28:27.921 [2024-12-11 10:06:37.290546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.921 [2024-12-11 10:06:37.290579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.921 qpair failed and we were unable to recover it. 00:28:27.921 [2024-12-11 10:06:37.290763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.921 [2024-12-11 10:06:37.290796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.921 qpair failed and we were unable to recover it. 00:28:27.921 [2024-12-11 10:06:37.291040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.921 [2024-12-11 10:06:37.291073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.921 qpair failed and we were unable to recover it. 00:28:27.921 [2024-12-11 10:06:37.291256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.921 [2024-12-11 10:06:37.291291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.921 qpair failed and we were unable to recover it. 00:28:27.921 [2024-12-11 10:06:37.291487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.921 [2024-12-11 10:06:37.291521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.921 qpair failed and we were unable to recover it. 00:28:27.921 [2024-12-11 10:06:37.291770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.921 [2024-12-11 10:06:37.291804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.921 qpair failed and we were unable to recover it. 00:28:27.921 [2024-12-11 10:06:37.292095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.921 [2024-12-11 10:06:37.292134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.921 qpair failed and we were unable to recover it. 00:28:27.921 [2024-12-11 10:06:37.292440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.921 [2024-12-11 10:06:37.292474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.921 qpair failed and we were unable to recover it. 00:28:27.921 [2024-12-11 10:06:37.292717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.921 [2024-12-11 10:06:37.292751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.921 qpair failed and we were unable to recover it. 00:28:27.921 [2024-12-11 10:06:37.293072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.921 [2024-12-11 10:06:37.293105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.921 qpair failed and we were unable to recover it. 
00:28:27.921 [2024-12-11 10:06:37.293341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.921 [2024-12-11 10:06:37.293376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.921 qpair failed and we were unable to recover it. 00:28:27.921 [2024-12-11 10:06:37.293508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.921 [2024-12-11 10:06:37.293542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.921 qpair failed and we were unable to recover it. 00:28:27.921 [2024-12-11 10:06:37.293736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.921 [2024-12-11 10:06:37.293769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.921 qpair failed and we were unable to recover it. 00:28:27.921 [2024-12-11 10:06:37.294091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.921 [2024-12-11 10:06:37.294125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.921 qpair failed and we were unable to recover it. 00:28:27.921 [2024-12-11 10:06:37.294311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.921 [2024-12-11 10:06:37.294346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.921 qpair failed and we were unable to recover it. 00:28:27.921 [2024-12-11 10:06:37.294473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.921 [2024-12-11 10:06:37.294507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.921 qpair failed and we were unable to recover it. 00:28:27.921 [2024-12-11 10:06:37.294685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.921 [2024-12-11 10:06:37.294718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.921 qpair failed and we were unable to recover it. 00:28:27.921 [2024-12-11 10:06:37.295044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.921 [2024-12-11 10:06:37.295079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.921 qpair failed and we were unable to recover it. 00:28:27.921 [2024-12-11 10:06:37.295267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.921 [2024-12-11 10:06:37.295301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.921 qpair failed and we were unable to recover it. 00:28:27.921 [2024-12-11 10:06:37.295442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.921 [2024-12-11 10:06:37.295475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.921 qpair failed and we were unable to recover it. 
00:28:27.921 [2024-12-11 10:06:37.295629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.921 [2024-12-11 10:06:37.295665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.921 qpair failed and we were unable to recover it. 00:28:27.921 [2024-12-11 10:06:37.295932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.921 [2024-12-11 10:06:37.295966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.921 qpair failed and we were unable to recover it. 00:28:27.921 [2024-12-11 10:06:37.296190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.921 [2024-12-11 10:06:37.296240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.921 qpair failed and we were unable to recover it. 00:28:27.921 [2024-12-11 10:06:37.296436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.921 [2024-12-11 10:06:37.296472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.921 qpair failed and we were unable to recover it. 00:28:27.921 [2024-12-11 10:06:37.296749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.921 [2024-12-11 10:06:37.296782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.921 qpair failed and we were unable to recover it. 00:28:27.921 [2024-12-11 10:06:37.296956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.921 [2024-12-11 10:06:37.296990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.921 qpair failed and we were unable to recover it. 00:28:27.921 [2024-12-11 10:06:37.297263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.921 [2024-12-11 10:06:37.297298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.921 qpair failed and we were unable to recover it. 00:28:27.921 [2024-12-11 10:06:37.297561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.921 [2024-12-11 10:06:37.297594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.921 qpair failed and we were unable to recover it. 00:28:27.921 [2024-12-11 10:06:37.297824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.922 [2024-12-11 10:06:37.297858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.922 qpair failed and we were unable to recover it. 00:28:27.922 [2024-12-11 10:06:37.298047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.922 [2024-12-11 10:06:37.298081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.922 qpair failed and we were unable to recover it. 
00:28:27.922 [2024-12-11 10:06:37.298264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.922 [2024-12-11 10:06:37.298299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.922 qpair failed and we were unable to recover it. 00:28:27.922 [2024-12-11 10:06:37.298509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.922 [2024-12-11 10:06:37.298542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.922 qpair failed and we were unable to recover it. 00:28:27.922 [2024-12-11 10:06:37.298757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.922 [2024-12-11 10:06:37.298790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.922 qpair failed and we were unable to recover it. 00:28:27.922 [2024-12-11 10:06:37.298945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.922 [2024-12-11 10:06:37.298985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.922 qpair failed and we were unable to recover it. 00:28:27.922 [2024-12-11 10:06:37.299102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.922 [2024-12-11 10:06:37.299135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.922 qpair failed and we were unable to recover it. 00:28:27.922 [2024-12-11 10:06:37.299363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.922 [2024-12-11 10:06:37.299398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.922 qpair failed and we were unable to recover it. 00:28:27.922 [2024-12-11 10:06:37.299643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.922 [2024-12-11 10:06:37.299678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.922 qpair failed and we were unable to recover it. 00:28:27.922 [2024-12-11 10:06:37.299946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.922 [2024-12-11 10:06:37.299979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.922 qpair failed and we were unable to recover it. 00:28:27.922 [2024-12-11 10:06:37.300154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.922 [2024-12-11 10:06:37.300188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.922 qpair failed and we were unable to recover it. 00:28:27.922 [2024-12-11 10:06:37.300461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.922 [2024-12-11 10:06:37.300495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.922 qpair failed and we were unable to recover it. 
00:28:27.922 [2024-12-11 10:06:37.300639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.922 [2024-12-11 10:06:37.300673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.922 qpair failed and we were unable to recover it. 00:28:27.922 [2024-12-11 10:06:37.300823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.922 [2024-12-11 10:06:37.300856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.922 qpair failed and we were unable to recover it. 00:28:27.922 [2024-12-11 10:06:37.301072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.922 [2024-12-11 10:06:37.301106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.922 qpair failed and we were unable to recover it. 00:28:27.922 [2024-12-11 10:06:37.301281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.922 [2024-12-11 10:06:37.301315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.922 qpair failed and we were unable to recover it. 00:28:27.922 [2024-12-11 10:06:37.301510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.922 [2024-12-11 10:06:37.301545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.922 qpair failed and we were unable to recover it. 00:28:27.922 [2024-12-11 10:06:37.301692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.922 [2024-12-11 10:06:37.301725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.922 qpair failed and we were unable to recover it. 00:28:27.922 [2024-12-11 10:06:37.301865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.922 [2024-12-11 10:06:37.301898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.922 qpair failed and we were unable to recover it. 00:28:27.922 [2024-12-11 10:06:37.302143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.922 [2024-12-11 10:06:37.302176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.922 qpair failed and we were unable to recover it. 00:28:27.922 [2024-12-11 10:06:37.302382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.922 [2024-12-11 10:06:37.302416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.922 qpair failed and we were unable to recover it. 00:28:27.922 [2024-12-11 10:06:37.302661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.922 [2024-12-11 10:06:37.302695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.922 qpair failed and we were unable to recover it. 
00:28:27.922 [2024-12-11 10:06:37.302996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.922 [2024-12-11 10:06:37.303030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.922 qpair failed and we were unable to recover it. 00:28:27.922 [2024-12-11 10:06:37.303289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.922 [2024-12-11 10:06:37.303324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.922 qpair failed and we were unable to recover it. 00:28:27.922 [2024-12-11 10:06:37.303613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.922 [2024-12-11 10:06:37.303646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.922 qpair failed and we were unable to recover it. 00:28:27.922 [2024-12-11 10:06:37.303843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.922 [2024-12-11 10:06:37.303876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.922 qpair failed and we were unable to recover it. 00:28:27.922 [2024-12-11 10:06:37.304126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.922 [2024-12-11 10:06:37.304159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.922 qpair failed and we were unable to recover it. 00:28:27.922 [2024-12-11 10:06:37.304370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.922 [2024-12-11 10:06:37.304404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.922 qpair failed and we were unable to recover it. 00:28:27.922 [2024-12-11 10:06:37.304578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.922 [2024-12-11 10:06:37.304612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.922 qpair failed and we were unable to recover it. 00:28:27.922 [2024-12-11 10:06:37.304791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.922 [2024-12-11 10:06:37.304824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.922 qpair failed and we were unable to recover it. 00:28:27.922 [2024-12-11 10:06:37.305006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.922 [2024-12-11 10:06:37.305040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.922 qpair failed and we were unable to recover it. 00:28:27.922 [2024-12-11 10:06:37.305290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.922 [2024-12-11 10:06:37.305325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.922 qpair failed and we were unable to recover it. 
00:28:27.922 [2024-12-11 10:06:37.305545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.922 [2024-12-11 10:06:37.305579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.922 qpair failed and we were unable to recover it. 00:28:27.922 [2024-12-11 10:06:37.305847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.922 [2024-12-11 10:06:37.305880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.922 qpair failed and we were unable to recover it. 00:28:27.922 [2024-12-11 10:06:37.306078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.922 [2024-12-11 10:06:37.306112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.922 qpair failed and we were unable to recover it. 00:28:27.922 [2024-12-11 10:06:37.306312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.923 [2024-12-11 10:06:37.306346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.923 qpair failed and we were unable to recover it. 00:28:27.923 [2024-12-11 10:06:37.306613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.923 [2024-12-11 10:06:37.306646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.923 qpair failed and we were unable to recover it. 00:28:27.923 [2024-12-11 10:06:37.306785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.923 [2024-12-11 10:06:37.306820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.923 qpair failed and we were unable to recover it. 00:28:27.923 [2024-12-11 10:06:37.307011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.923 [2024-12-11 10:06:37.307044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.923 qpair failed and we were unable to recover it. 00:28:27.923 [2024-12-11 10:06:37.307237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.923 [2024-12-11 10:06:37.307271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.923 qpair failed and we were unable to recover it. 00:28:27.923 [2024-12-11 10:06:37.307464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.923 [2024-12-11 10:06:37.307497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.923 qpair failed and we were unable to recover it. 00:28:27.923 [2024-12-11 10:06:37.307682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.923 [2024-12-11 10:06:37.307715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.923 qpair failed and we were unable to recover it. 
00:28:27.923 [2024-12-11 10:06:37.307948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.923 [2024-12-11 10:06:37.307981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.923 qpair failed and we were unable to recover it. 00:28:27.923 [2024-12-11 10:06:37.308226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.923 [2024-12-11 10:06:37.308262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.923 qpair failed and we were unable to recover it. 00:28:27.923 [2024-12-11 10:06:37.308435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.923 [2024-12-11 10:06:37.308469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.923 qpair failed and we were unable to recover it. 00:28:27.923 [2024-12-11 10:06:37.308731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.923 [2024-12-11 10:06:37.308764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.923 qpair failed and we were unable to recover it. 00:28:27.923 [2024-12-11 10:06:37.309026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.923 [2024-12-11 10:06:37.309066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.923 qpair failed and we were unable to recover it. 00:28:27.923 [2024-12-11 10:06:37.309343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.923 [2024-12-11 10:06:37.309377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.923 qpair failed and we were unable to recover it. 00:28:27.923 [2024-12-11 10:06:37.309569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.923 [2024-12-11 10:06:37.309601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.923 qpair failed and we were unable to recover it. 00:28:27.923 [2024-12-11 10:06:37.309795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.923 [2024-12-11 10:06:37.309828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.923 qpair failed and we were unable to recover it. 00:28:27.923 [2024-12-11 10:06:37.310012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.923 [2024-12-11 10:06:37.310046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.923 qpair failed and we were unable to recover it. 00:28:27.923 [2024-12-11 10:06:37.310261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.923 [2024-12-11 10:06:37.310295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:27.923 qpair failed and we were unable to recover it. 
00:28:27.923 [2024-12-11 10:06:37.310547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.923 [2024-12-11 10:06:37.310581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.923 qpair failed and we were unable to recover it.
00:28:27.923 [2024-12-11 10:06:37.310768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.923 [2024-12-11 10:06:37.310801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.923 qpair failed and we were unable to recover it.
00:28:27.923 [2024-12-11 10:06:37.310975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.923 [2024-12-11 10:06:37.311009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.923 qpair failed and we were unable to recover it.
00:28:27.923 [2024-12-11 10:06:37.311215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.923 [2024-12-11 10:06:37.311256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.923 qpair failed and we were unable to recover it.
00:28:27.923 [2024-12-11 10:06:37.311377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.923 [2024-12-11 10:06:37.311410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.923 qpair failed and we were unable to recover it.
00:28:27.923 [2024-12-11 10:06:37.311673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.923 [2024-12-11 10:06:37.311706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.923 qpair failed and we were unable to recover it.
00:28:27.923 [2024-12-11 10:06:37.311910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.923 [2024-12-11 10:06:37.311943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.923 qpair failed and we were unable to recover it.
00:28:27.923 [2024-12-11 10:06:37.312145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.923 [2024-12-11 10:06:37.312177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:27.923 qpair failed and we were unable to recover it.
00:28:27.923 [2024-12-11 10:06:37.312436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.923 [2024-12-11 10:06:37.312507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.923 qpair failed and we were unable to recover it.
00:28:27.923 [2024-12-11 10:06:37.312757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.923 [2024-12-11 10:06:37.312793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.923 qpair failed and we were unable to recover it.
00:28:27.923 [2024-12-11 10:06:37.313083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.923 [2024-12-11 10:06:37.313117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.923 qpair failed and we were unable to recover it.
00:28:27.923 [2024-12-11 10:06:37.313320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.923 [2024-12-11 10:06:37.313355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.923 qpair failed and we were unable to recover it.
00:28:27.923 [2024-12-11 10:06:37.313597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.923 [2024-12-11 10:06:37.313630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.923 qpair failed and we were unable to recover it.
00:28:27.923 [2024-12-11 10:06:37.313818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.923 [2024-12-11 10:06:37.313852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.923 qpair failed and we were unable to recover it.
00:28:27.923 [2024-12-11 10:06:37.314035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.923 [2024-12-11 10:06:37.314069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.923 qpair failed and we were unable to recover it.
00:28:27.923 [2024-12-11 10:06:37.314338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.923 [2024-12-11 10:06:37.314373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.923 qpair failed and we were unable to recover it.
00:28:27.923 [2024-12-11 10:06:37.314565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.923 [2024-12-11 10:06:37.314600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.923 qpair failed and we were unable to recover it.
00:28:27.923 [2024-12-11 10:06:37.314841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.923 [2024-12-11 10:06:37.314875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.923 qpair failed and we were unable to recover it.
00:28:27.923 [2024-12-11 10:06:37.315055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.924 [2024-12-11 10:06:37.315089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.924 qpair failed and we were unable to recover it.
00:28:27.924 [2024-12-11 10:06:37.315280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.924 [2024-12-11 10:06:37.315314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.924 qpair failed and we were unable to recover it.
00:28:27.924 [2024-12-11 10:06:37.315531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.924 [2024-12-11 10:06:37.315564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.924 qpair failed and we were unable to recover it.
00:28:27.924 [2024-12-11 10:06:37.315694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.924 [2024-12-11 10:06:37.315736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.924 qpair failed and we were unable to recover it.
00:28:27.924 [2024-12-11 10:06:37.315922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.924 [2024-12-11 10:06:37.315956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.924 qpair failed and we were unable to recover it.
00:28:27.924 [2024-12-11 10:06:37.316202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.924 [2024-12-11 10:06:37.316245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.924 qpair failed and we were unable to recover it.
00:28:27.924 [2024-12-11 10:06:37.316432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.924 [2024-12-11 10:06:37.316466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.924 qpair failed and we were unable to recover it.
00:28:27.924 [2024-12-11 10:06:37.316655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.924 [2024-12-11 10:06:37.316687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.924 qpair failed and we were unable to recover it.
00:28:27.924 [2024-12-11 10:06:37.316831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.924 [2024-12-11 10:06:37.316864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.924 qpair failed and we were unable to recover it.
00:28:27.924 [2024-12-11 10:06:37.317106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.924 [2024-12-11 10:06:37.317138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.924 qpair failed and we were unable to recover it.
00:28:27.924 [2024-12-11 10:06:37.317378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.924 [2024-12-11 10:06:37.317412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.924 qpair failed and we were unable to recover it.
00:28:27.924 [2024-12-11 10:06:37.317617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.924 [2024-12-11 10:06:37.317649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.924 qpair failed and we were unable to recover it.
00:28:27.924 [2024-12-11 10:06:37.317786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.924 [2024-12-11 10:06:37.317819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.924 qpair failed and we were unable to recover it.
00:28:27.924 [2024-12-11 10:06:37.318021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.924 [2024-12-11 10:06:37.318055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.924 qpair failed and we were unable to recover it.
00:28:27.924 [2024-12-11 10:06:37.318166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.924 [2024-12-11 10:06:37.318199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.924 qpair failed and we were unable to recover it.
00:28:27.924 [2024-12-11 10:06:37.318318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.924 [2024-12-11 10:06:37.318350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.924 qpair failed and we were unable to recover it.
00:28:27.924 [2024-12-11 10:06:37.318546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.924 [2024-12-11 10:06:37.318579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.924 qpair failed and we were unable to recover it.
00:28:27.924 [2024-12-11 10:06:37.318767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.924 [2024-12-11 10:06:37.318800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.924 qpair failed and we were unable to recover it.
00:28:27.924 [2024-12-11 10:06:37.318984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.924 [2024-12-11 10:06:37.319017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.924 qpair failed and we were unable to recover it.
00:28:27.924 [2024-12-11 10:06:37.319256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.924 [2024-12-11 10:06:37.319291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.924 qpair failed and we were unable to recover it.
00:28:27.924 [2024-12-11 10:06:37.319474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.924 [2024-12-11 10:06:37.319508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.924 qpair failed and we were unable to recover it.
00:28:27.924 [2024-12-11 10:06:37.319703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.924 [2024-12-11 10:06:37.319736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.924 qpair failed and we were unable to recover it.
00:28:27.924 [2024-12-11 10:06:37.319924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.924 [2024-12-11 10:06:37.319957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.924 qpair failed and we were unable to recover it.
00:28:27.924 [2024-12-11 10:06:37.320085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.924 [2024-12-11 10:06:37.320118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.924 qpair failed and we were unable to recover it.
00:28:27.924 [2024-12-11 10:06:37.320324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.924 [2024-12-11 10:06:37.320359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.924 qpair failed and we were unable to recover it.
00:28:27.924 [2024-12-11 10:06:37.320498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.924 [2024-12-11 10:06:37.320532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.924 qpair failed and we were unable to recover it.
00:28:27.924 [2024-12-11 10:06:37.320710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.924 [2024-12-11 10:06:37.320742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.924 qpair failed and we were unable to recover it.
00:28:27.924 [2024-12-11 10:06:37.321024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.924 [2024-12-11 10:06:37.321058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.924 qpair failed and we were unable to recover it.
00:28:27.924 [2024-12-11 10:06:37.321239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.924 [2024-12-11 10:06:37.321275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.924 qpair failed and we were unable to recover it.
00:28:27.924 [2024-12-11 10:06:37.321470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.925 [2024-12-11 10:06:37.321503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.925 qpair failed and we were unable to recover it.
00:28:27.925 [2024-12-11 10:06:37.321712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.925 [2024-12-11 10:06:37.321745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.925 qpair failed and we were unable to recover it.
00:28:27.925 [2024-12-11 10:06:37.321951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.925 [2024-12-11 10:06:37.321984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.925 qpair failed and we were unable to recover it.
00:28:27.925 [2024-12-11 10:06:37.322246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.925 [2024-12-11 10:06:37.322280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.925 qpair failed and we were unable to recover it.
00:28:27.925 [2024-12-11 10:06:37.322569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.925 [2024-12-11 10:06:37.322604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.925 qpair failed and we were unable to recover it.
00:28:27.925 [2024-12-11 10:06:37.322744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.925 [2024-12-11 10:06:37.322777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.925 qpair failed and we were unable to recover it.
00:28:27.925 [2024-12-11 10:06:37.323066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.925 [2024-12-11 10:06:37.323101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.925 qpair failed and we were unable to recover it.
00:28:27.925 [2024-12-11 10:06:37.323367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.925 [2024-12-11 10:06:37.323402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.925 qpair failed and we were unable to recover it.
00:28:27.925 [2024-12-11 10:06:37.323580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.925 [2024-12-11 10:06:37.323613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.925 qpair failed and we were unable to recover it.
00:28:27.925 [2024-12-11 10:06:37.323858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.925 [2024-12-11 10:06:37.323892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.925 qpair failed and we were unable to recover it.
00:28:27.925 [2024-12-11 10:06:37.324183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.925 [2024-12-11 10:06:37.324225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.925 qpair failed and we were unable to recover it.
00:28:27.925 [2024-12-11 10:06:37.324436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.925 [2024-12-11 10:06:37.324468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.925 qpair failed and we were unable to recover it.
00:28:27.925 [2024-12-11 10:06:37.324604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.925 [2024-12-11 10:06:37.324637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.925 qpair failed and we were unable to recover it.
00:28:27.925 [2024-12-11 10:06:37.324777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.925 [2024-12-11 10:06:37.324811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.925 qpair failed and we were unable to recover it.
00:28:27.925 [2024-12-11 10:06:37.324981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.925 [2024-12-11 10:06:37.325021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.925 qpair failed and we were unable to recover it.
00:28:27.925 [2024-12-11 10:06:37.325169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.925 [2024-12-11 10:06:37.325203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.925 qpair failed and we were unable to recover it.
00:28:27.925 [2024-12-11 10:06:37.325428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.925 [2024-12-11 10:06:37.325463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.925 qpair failed and we were unable to recover it.
00:28:27.925 [2024-12-11 10:06:37.325590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.925 [2024-12-11 10:06:37.325624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.925 qpair failed and we were unable to recover it.
00:28:27.925 [2024-12-11 10:06:37.325844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.925 [2024-12-11 10:06:37.325877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.925 qpair failed and we were unable to recover it.
00:28:27.925 [2024-12-11 10:06:37.326132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.925 [2024-12-11 10:06:37.326165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.925 qpair failed and we were unable to recover it.
00:28:27.925 [2024-12-11 10:06:37.326451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.925 [2024-12-11 10:06:37.326486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.925 qpair failed and we were unable to recover it.
00:28:27.925 [2024-12-11 10:06:37.326673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.925 [2024-12-11 10:06:37.326706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.925 qpair failed and we were unable to recover it.
00:28:27.925 [2024-12-11 10:06:37.326814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.925 [2024-12-11 10:06:37.326847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.925 qpair failed and we were unable to recover it.
00:28:27.925 [2024-12-11 10:06:37.326974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.925 [2024-12-11 10:06:37.327008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.925 qpair failed and we were unable to recover it.
00:28:27.925 [2024-12-11 10:06:37.327235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.925 [2024-12-11 10:06:37.327269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.925 qpair failed and we were unable to recover it.
00:28:27.925 [2024-12-11 10:06:37.327443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.925 [2024-12-11 10:06:37.327477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.925 qpair failed and we were unable to recover it.
00:28:27.925 [2024-12-11 10:06:37.327721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.925 [2024-12-11 10:06:37.327754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.925 qpair failed and we were unable to recover it.
00:28:27.925 [2024-12-11 10:06:37.327884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.925 [2024-12-11 10:06:37.327917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.925 qpair failed and we were unable to recover it.
00:28:27.925 [2024-12-11 10:06:37.328108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.925 [2024-12-11 10:06:37.328142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.925 qpair failed and we were unable to recover it.
00:28:27.925 [2024-12-11 10:06:37.328410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.925 [2024-12-11 10:06:37.328445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.925 qpair failed and we were unable to recover it.
00:28:27.925 [2024-12-11 10:06:37.328633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.925 [2024-12-11 10:06:37.328666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.925 qpair failed and we were unable to recover it.
00:28:27.925 [2024-12-11 10:06:37.328861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.925 [2024-12-11 10:06:37.328894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.925 qpair failed and we were unable to recover it.
00:28:27.925 [2024-12-11 10:06:37.329079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.925 [2024-12-11 10:06:37.329112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.925 qpair failed and we were unable to recover it.
00:28:27.925 [2024-12-11 10:06:37.329388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.925 [2024-12-11 10:06:37.329423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.925 qpair failed and we were unable to recover it.
00:28:27.925 [2024-12-11 10:06:37.329564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.925 [2024-12-11 10:06:37.329597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.925 qpair failed and we were unable to recover it.
00:28:27.925 [2024-12-11 10:06:37.329850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.925 [2024-12-11 10:06:37.329883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.926 qpair failed and we were unable to recover it.
00:28:27.926 [2024-12-11 10:06:37.330170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.926 [2024-12-11 10:06:37.330204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.926 qpair failed and we were unable to recover it.
00:28:27.926 [2024-12-11 10:06:37.330476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.926 [2024-12-11 10:06:37.330509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.926 qpair failed and we were unable to recover it.
00:28:27.926 [2024-12-11 10:06:37.330753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.926 [2024-12-11 10:06:37.330785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.926 qpair failed and we were unable to recover it.
00:28:27.926 [2024-12-11 10:06:37.331001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.926 [2024-12-11 10:06:37.331034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.926 qpair failed and we were unable to recover it.
00:28:27.926 [2024-12-11 10:06:37.331235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.926 [2024-12-11 10:06:37.331269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.926 qpair failed and we were unable to recover it.
00:28:27.926 [2024-12-11 10:06:37.331452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.926 [2024-12-11 10:06:37.331486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.926 qpair failed and we were unable to recover it.
00:28:27.926 [2024-12-11 10:06:37.331673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.926 [2024-12-11 10:06:37.331706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.926 qpair failed and we were unable to recover it.
00:28:27.926 [2024-12-11 10:06:37.331899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.926 [2024-12-11 10:06:37.331931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.926 qpair failed and we were unable to recover it.
00:28:27.926 [2024-12-11 10:06:37.332128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.926 [2024-12-11 10:06:37.332160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.926 qpair failed and we were unable to recover it.
00:28:27.926 [2024-12-11 10:06:37.332412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.926 [2024-12-11 10:06:37.332447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.926 qpair failed and we were unable to recover it.
00:28:27.926 [2024-12-11 10:06:37.332715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.926 [2024-12-11 10:06:37.332748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.926 qpair failed and we were unable to recover it.
00:28:27.926 [2024-12-11 10:06:37.332938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.926 [2024-12-11 10:06:37.332971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.926 qpair failed and we were unable to recover it.
00:28:27.926 [2024-12-11 10:06:37.333256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.926 [2024-12-11 10:06:37.333294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.926 qpair failed and we were unable to recover it.
00:28:27.926 [2024-12-11 10:06:37.333412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.926 [2024-12-11 10:06:37.333505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.926 qpair failed and we were unable to recover it.
00:28:27.926 [2024-12-11 10:06:37.333774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.926 [2024-12-11 10:06:37.333808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.926 qpair failed and we were unable to recover it.
00:28:27.926 [2024-12-11 10:06:37.334048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.926 [2024-12-11 10:06:37.334082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.926 qpair failed and we were unable to recover it.
00:28:27.926 [2024-12-11 10:06:37.334362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.926 [2024-12-11 10:06:37.334395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.926 qpair failed and we were unable to recover it.
00:28:27.926 [2024-12-11 10:06:37.334664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.926 [2024-12-11 10:06:37.334698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.926 qpair failed and we were unable to recover it.
00:28:27.926 [2024-12-11 10:06:37.334896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.926 [2024-12-11 10:06:37.334935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.926 qpair failed and we were unable to recover it.
00:28:27.926 [2024-12-11 10:06:37.335114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.926 [2024-12-11 10:06:37.335147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.926 qpair failed and we were unable to recover it.
00:28:27.926 [2024-12-11 10:06:37.335413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.926 [2024-12-11 10:06:37.335447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.926 qpair failed and we were unable to recover it.
00:28:27.926 [2024-12-11 10:06:37.335689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.926 [2024-12-11 10:06:37.335721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.926 qpair failed and we were unable to recover it.
00:28:27.926 [2024-12-11 10:06:37.335989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.926 [2024-12-11 10:06:37.336023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.926 qpair failed and we were unable to recover it.
00:28:27.926 [2024-12-11 10:06:37.336140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.926 [2024-12-11 10:06:37.336173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.926 qpair failed and we were unable to recover it.
00:28:27.926 [2024-12-11 10:06:37.336393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.926 [2024-12-11 10:06:37.336428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.926 qpair failed and we were unable to recover it.
00:28:27.926 [2024-12-11 10:06:37.336611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.926 [2024-12-11 10:06:37.336644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.926 qpair failed and we were unable to recover it.
00:28:27.926 [2024-12-11 10:06:37.336889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.926 [2024-12-11 10:06:37.336922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.926 qpair failed and we were unable to recover it.
00:28:27.926 [2024-12-11 10:06:37.337165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.926 [2024-12-11 10:06:37.337198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.926 qpair failed and we were unable to recover it.
00:28:27.926 [2024-12-11 10:06:37.337402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.926 [2024-12-11 10:06:37.337437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.926 qpair failed and we were unable to recover it.
00:28:27.926 [2024-12-11 10:06:37.337578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.926 [2024-12-11 10:06:37.337611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.926 qpair failed and we were unable to recover it.
00:28:27.926 [2024-12-11 10:06:37.337787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.926 [2024-12-11 10:06:37.337820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.926 qpair failed and we were unable to recover it.
00:28:27.926 [2024-12-11 10:06:37.338005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.926 [2024-12-11 10:06:37.338039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.926 qpair failed and we were unable to recover it.
00:28:27.926 [2024-12-11 10:06:37.338168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.926 [2024-12-11 10:06:37.338202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.926 qpair failed and we were unable to recover it.
00:28:27.926 [2024-12-11 10:06:37.338406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.926 [2024-12-11 10:06:37.338440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.926 qpair failed and we were unable to recover it.
00:28:27.926 [2024-12-11 10:06:37.338624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.926 [2024-12-11 10:06:37.338658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.927 qpair failed and we were unable to recover it.
00:28:27.927 [2024-12-11 10:06:37.338838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.927 [2024-12-11 10:06:37.338870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.927 qpair failed and we were unable to recover it.
00:28:27.927 [2024-12-11 10:06:37.339064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.927 [2024-12-11 10:06:37.339098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.927 qpair failed and we were unable to recover it.
00:28:27.927 [2024-12-11 10:06:37.339345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.927 [2024-12-11 10:06:37.339379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.927 qpair failed and we were unable to recover it.
00:28:27.927 [2024-12-11 10:06:37.339517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.927 [2024-12-11 10:06:37.339550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.927 qpair failed and we were unable to recover it.
00:28:27.927 [2024-12-11 10:06:37.339707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.927 [2024-12-11 10:06:37.339740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.927 qpair failed and we were unable to recover it.
00:28:27.927 [2024-12-11 10:06:37.339973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.927 [2024-12-11 10:06:37.340006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.927 qpair failed and we were unable to recover it.
00:28:27.927 [2024-12-11 10:06:37.340299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.927 [2024-12-11 10:06:37.340333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.927 qpair failed and we were unable to recover it.
00:28:27.927 [2024-12-11 10:06:37.340461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.927 [2024-12-11 10:06:37.340494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.927 qpair failed and we were unable to recover it.
00:28:27.927 [2024-12-11 10:06:37.340710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.927 [2024-12-11 10:06:37.340743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.927 qpair failed and we were unable to recover it.
00:28:27.927 [2024-12-11 10:06:37.340863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.927 [2024-12-11 10:06:37.340895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.927 qpair failed and we were unable to recover it.
00:28:27.927 [2024-12-11 10:06:37.341164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.927 [2024-12-11 10:06:37.341197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.927 qpair failed and we were unable to recover it.
00:28:27.927 [2024-12-11 10:06:37.341463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.927 [2024-12-11 10:06:37.341496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.927 qpair failed and we were unable to recover it.
00:28:27.927 [2024-12-11 10:06:37.341771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.927 [2024-12-11 10:06:37.341804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.927 qpair failed and we were unable to recover it.
00:28:27.927 [2024-12-11 10:06:37.342044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.927 [2024-12-11 10:06:37.342077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.927 qpair failed and we were unable to recover it.
00:28:27.927 [2024-12-11 10:06:37.342270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.927 [2024-12-11 10:06:37.342304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.927 qpair failed and we were unable to recover it.
00:28:27.927 [2024-12-11 10:06:37.342516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.927 [2024-12-11 10:06:37.342549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.927 qpair failed and we were unable to recover it.
00:28:27.927 [2024-12-11 10:06:37.342791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.927 [2024-12-11 10:06:37.342824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.927 qpair failed and we were unable to recover it.
00:28:27.927 [2024-12-11 10:06:37.342960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.927 [2024-12-11 10:06:37.342992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.927 qpair failed and we were unable to recover it.
00:28:27.927 [2024-12-11 10:06:37.343254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.927 [2024-12-11 10:06:37.343289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.927 qpair failed and we were unable to recover it.
00:28:27.927 [2024-12-11 10:06:37.343436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.927 [2024-12-11 10:06:37.343469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.927 qpair failed and we were unable to recover it.
00:28:27.927 [2024-12-11 10:06:37.343700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.927 [2024-12-11 10:06:37.343734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.927 qpair failed and we were unable to recover it.
00:28:27.927 [2024-12-11 10:06:37.343913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.927 [2024-12-11 10:06:37.343946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.927 qpair failed and we were unable to recover it.
00:28:27.927 [2024-12-11 10:06:37.344132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.927 [2024-12-11 10:06:37.344165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.927 qpair failed and we were unable to recover it.
00:28:27.927 [2024-12-11 10:06:37.344380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.927 [2024-12-11 10:06:37.344420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.927 qpair failed and we were unable to recover it.
00:28:27.927 [2024-12-11 10:06:37.344597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.927 [2024-12-11 10:06:37.344632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.927 qpair failed and we were unable to recover it.
00:28:27.927 [2024-12-11 10:06:37.344837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.927 [2024-12-11 10:06:37.344870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.927 qpair failed and we were unable to recover it.
00:28:27.927 [2024-12-11 10:06:37.345131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.927 [2024-12-11 10:06:37.345165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.927 qpair failed and we were unable to recover it.
00:28:27.927 [2024-12-11 10:06:37.345378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.927 [2024-12-11 10:06:37.345412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.927 qpair failed and we were unable to recover it.
00:28:27.927 [2024-12-11 10:06:37.345553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.927 [2024-12-11 10:06:37.345586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.927 qpair failed and we were unable to recover it.
00:28:27.927 [2024-12-11 10:06:37.345784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.927 [2024-12-11 10:06:37.345817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.927 qpair failed and we were unable to recover it.
00:28:27.927 [2024-12-11 10:06:37.345952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.927 [2024-12-11 10:06:37.345984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.927 qpair failed and we were unable to recover it.
00:28:27.927 [2024-12-11 10:06:37.346163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.927 [2024-12-11 10:06:37.346196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.927 qpair failed and we were unable to recover it.
00:28:27.927 [2024-12-11 10:06:37.346486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.927 [2024-12-11 10:06:37.346520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.927 qpair failed and we were unable to recover it.
00:28:27.927 [2024-12-11 10:06:37.346730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.927 [2024-12-11 10:06:37.346764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.927 qpair failed and we were unable to recover it.
00:28:27.927 [2024-12-11 10:06:37.347025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.927 [2024-12-11 10:06:37.347058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.928 qpair failed and we were unable to recover it.
00:28:27.928 [2024-12-11 10:06:37.347330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.928 [2024-12-11 10:06:37.347364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.928 qpair failed and we were unable to recover it.
00:28:27.928 [2024-12-11 10:06:37.347581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.928 [2024-12-11 10:06:37.347613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.928 qpair failed and we were unable to recover it.
00:28:27.928 [2024-12-11 10:06:37.347880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.928 [2024-12-11 10:06:37.347913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.928 qpair failed and we were unable to recover it.
00:28:27.928 [2024-12-11 10:06:37.348089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.928 [2024-12-11 10:06:37.348122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.928 qpair failed and we were unable to recover it.
00:28:27.928 [2024-12-11 10:06:37.348301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.928 [2024-12-11 10:06:37.348335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.928 qpair failed and we were unable to recover it.
00:28:27.928 [2024-12-11 10:06:37.348532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.928 [2024-12-11 10:06:37.348565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.928 qpair failed and we were unable to recover it.
00:28:27.928 [2024-12-11 10:06:37.348764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.928 [2024-12-11 10:06:37.348798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.928 qpair failed and we were unable to recover it.
00:28:27.928 [2024-12-11 10:06:37.348982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.928 [2024-12-11 10:06:37.349015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.928 qpair failed and we were unable to recover it.
00:28:27.928 [2024-12-11 10:06:37.349212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.928 [2024-12-11 10:06:37.349254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.928 qpair failed and we were unable to recover it.
00:28:27.928 [2024-12-11 10:06:37.349383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.928 [2024-12-11 10:06:37.349416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.928 qpair failed and we were unable to recover it.
00:28:27.928 [2024-12-11 10:06:37.349536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.928 [2024-12-11 10:06:37.349568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.928 qpair failed and we were unable to recover it.
00:28:27.928 [2024-12-11 10:06:37.349702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.928 [2024-12-11 10:06:37.349736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.928 qpair failed and we were unable to recover it.
00:28:27.928 [2024-12-11 10:06:37.349972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.928 [2024-12-11 10:06:37.350005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.928 qpair failed and we were unable to recover it.
00:28:27.928 [2024-12-11 10:06:37.350131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.928 [2024-12-11 10:06:37.350164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.928 qpair failed and we were unable to recover it.
00:28:27.928 [2024-12-11 10:06:37.350364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.928 [2024-12-11 10:06:37.350398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.928 qpair failed and we were unable to recover it.
00:28:27.928 [2024-12-11 10:06:37.350649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.928 [2024-12-11 10:06:37.350682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.928 qpair failed and we were unable to recover it.
00:28:27.928 [2024-12-11 10:06:37.350943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.928 [2024-12-11 10:06:37.350976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:27.928 qpair failed and we were unable to recover it.
00:28:27.928 [2024-12-11 10:06:37.351149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.928 [2024-12-11 10:06:37.351182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.928 qpair failed and we were unable to recover it. 00:28:27.928 [2024-12-11 10:06:37.351349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.928 [2024-12-11 10:06:37.351383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.928 qpair failed and we were unable to recover it. 00:28:27.928 [2024-12-11 10:06:37.351588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.928 [2024-12-11 10:06:37.351621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.928 qpair failed and we were unable to recover it. 00:28:27.928 [2024-12-11 10:06:37.351834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.928 [2024-12-11 10:06:37.351866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.928 qpair failed and we were unable to recover it. 00:28:27.928 [2024-12-11 10:06:37.351996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.928 [2024-12-11 10:06:37.352029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.928 qpair failed and we were unable to recover it. 00:28:27.928 [2024-12-11 10:06:37.352321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.928 [2024-12-11 10:06:37.352354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.928 qpair failed and we were unable to recover it. 00:28:27.928 [2024-12-11 10:06:37.352619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.928 [2024-12-11 10:06:37.352652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.928 qpair failed and we were unable to recover it. 00:28:27.928 [2024-12-11 10:06:37.352803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.928 [2024-12-11 10:06:37.352837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.928 qpair failed and we were unable to recover it. 00:28:27.928 [2024-12-11 10:06:37.352964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.928 [2024-12-11 10:06:37.352997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.928 qpair failed and we were unable to recover it. 00:28:27.928 [2024-12-11 10:06:37.353225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.928 [2024-12-11 10:06:37.353259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.928 qpair failed and we were unable to recover it. 
00:28:27.928 [2024-12-11 10:06:37.353410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.928 [2024-12-11 10:06:37.353443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.928 qpair failed and we were unable to recover it. 00:28:27.928 [2024-12-11 10:06:37.353632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.928 [2024-12-11 10:06:37.353671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.928 qpair failed and we were unable to recover it. 00:28:27.928 [2024-12-11 10:06:37.353899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.928 [2024-12-11 10:06:37.353932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.928 qpair failed and we were unable to recover it. 00:28:27.928 [2024-12-11 10:06:37.354197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.928 [2024-12-11 10:06:37.354237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.928 qpair failed and we were unable to recover it. 00:28:27.928 [2024-12-11 10:06:37.354414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.928 [2024-12-11 10:06:37.354447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.928 qpair failed and we were unable to recover it. 00:28:27.928 [2024-12-11 10:06:37.354630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.928 [2024-12-11 10:06:37.354664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.928 qpair failed and we were unable to recover it. 00:28:27.928 [2024-12-11 10:06:37.354789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.928 [2024-12-11 10:06:37.354822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.928 qpair failed and we were unable to recover it. 00:28:27.928 [2024-12-11 10:06:37.354999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.929 [2024-12-11 10:06:37.355032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.929 qpair failed and we were unable to recover it. 00:28:27.929 [2024-12-11 10:06:37.355273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.929 [2024-12-11 10:06:37.355308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.929 qpair failed and we were unable to recover it. 00:28:27.929 [2024-12-11 10:06:37.355504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.929 [2024-12-11 10:06:37.355536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.929 qpair failed and we were unable to recover it. 
00:28:27.929 [2024-12-11 10:06:37.355721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.929 [2024-12-11 10:06:37.355754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.929 qpair failed and we were unable to recover it. 00:28:27.929 [2024-12-11 10:06:37.355945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.929 [2024-12-11 10:06:37.355978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.929 qpair failed and we were unable to recover it. 00:28:27.929 [2024-12-11 10:06:37.356229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.929 [2024-12-11 10:06:37.356262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.929 qpair failed and we were unable to recover it. 00:28:27.929 [2024-12-11 10:06:37.356456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.929 [2024-12-11 10:06:37.356489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.929 qpair failed and we were unable to recover it. 00:28:27.929 [2024-12-11 10:06:37.356733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.929 [2024-12-11 10:06:37.356766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.929 qpair failed and we were unable to recover it. 00:28:27.929 [2024-12-11 10:06:37.356964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.929 [2024-12-11 10:06:37.356997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.929 qpair failed and we were unable to recover it. 00:28:27.929 [2024-12-11 10:06:37.357183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.929 [2024-12-11 10:06:37.357216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.929 qpair failed and we were unable to recover it. 00:28:27.929 [2024-12-11 10:06:37.357373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.929 [2024-12-11 10:06:37.357406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.929 qpair failed and we were unable to recover it. 00:28:27.929 [2024-12-11 10:06:37.357680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.929 [2024-12-11 10:06:37.357713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.929 qpair failed and we were unable to recover it. 00:28:27.929 [2024-12-11 10:06:37.357983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.929 [2024-12-11 10:06:37.358015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.929 qpair failed and we were unable to recover it. 
00:28:27.929 [2024-12-11 10:06:37.358209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.929 [2024-12-11 10:06:37.358252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.929 qpair failed and we were unable to recover it. 00:28:27.929 [2024-12-11 10:06:37.358386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.929 [2024-12-11 10:06:37.358419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.929 qpair failed and we were unable to recover it. 00:28:27.929 [2024-12-11 10:06:37.358595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.929 [2024-12-11 10:06:37.358627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.929 qpair failed and we were unable to recover it. 00:28:27.929 [2024-12-11 10:06:37.358884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.929 [2024-12-11 10:06:37.358917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.929 qpair failed and we were unable to recover it. 00:28:27.929 [2024-12-11 10:06:37.359108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.929 [2024-12-11 10:06:37.359142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.929 qpair failed and we were unable to recover it. 00:28:27.929 [2024-12-11 10:06:37.359359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.929 [2024-12-11 10:06:37.359394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.929 qpair failed and we were unable to recover it. 00:28:27.929 [2024-12-11 10:06:37.359660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.929 [2024-12-11 10:06:37.359693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.929 qpair failed and we were unable to recover it. 00:28:27.929 [2024-12-11 10:06:37.359977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.929 [2024-12-11 10:06:37.360010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.929 qpair failed and we were unable to recover it. 00:28:27.929 [2024-12-11 10:06:37.360194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.929 [2024-12-11 10:06:37.360238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.929 qpair failed and we were unable to recover it. 00:28:27.929 [2024-12-11 10:06:37.360516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.929 [2024-12-11 10:06:37.360549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.929 qpair failed and we were unable to recover it. 
00:28:27.929 [2024-12-11 10:06:37.360818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.929 [2024-12-11 10:06:37.360851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.929 qpair failed and we were unable to recover it. 00:28:27.929 [2024-12-11 10:06:37.361046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.929 [2024-12-11 10:06:37.361078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.929 qpair failed and we were unable to recover it. 00:28:27.929 [2024-12-11 10:06:37.361265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.929 [2024-12-11 10:06:37.361298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.929 qpair failed and we were unable to recover it. 00:28:27.929 [2024-12-11 10:06:37.361564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.929 [2024-12-11 10:06:37.361597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.929 qpair failed and we were unable to recover it. 00:28:27.929 [2024-12-11 10:06:37.361863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.929 [2024-12-11 10:06:37.361897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.929 qpair failed and we were unable to recover it. 00:28:27.929 [2024-12-11 10:06:37.362093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.929 [2024-12-11 10:06:37.362127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.929 qpair failed and we were unable to recover it. 00:28:27.929 [2024-12-11 10:06:37.362387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.929 [2024-12-11 10:06:37.362421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.929 qpair failed and we were unable to recover it. 00:28:27.929 [2024-12-11 10:06:37.362551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.929 [2024-12-11 10:06:37.362585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.929 qpair failed and we were unable to recover it. 00:28:27.930 [2024-12-11 10:06:37.362779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.930 [2024-12-11 10:06:37.362812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.930 qpair failed and we were unable to recover it. 00:28:27.930 [2024-12-11 10:06:37.363077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.930 [2024-12-11 10:06:37.363110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.930 qpair failed and we were unable to recover it. 
00:28:27.930 [2024-12-11 10:06:37.363253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.930 [2024-12-11 10:06:37.363288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.930 qpair failed and we were unable to recover it. 00:28:27.930 [2024-12-11 10:06:37.363517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.930 [2024-12-11 10:06:37.363556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.930 qpair failed and we were unable to recover it. 00:28:27.930 [2024-12-11 10:06:37.363751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.930 [2024-12-11 10:06:37.363785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.930 qpair failed and we were unable to recover it. 00:28:27.930 [2024-12-11 10:06:37.364047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.930 [2024-12-11 10:06:37.364081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.930 qpair failed and we were unable to recover it. 00:28:27.930 [2024-12-11 10:06:37.364207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.930 [2024-12-11 10:06:37.364258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.930 qpair failed and we were unable to recover it. 00:28:27.930 [2024-12-11 10:06:37.364453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.930 [2024-12-11 10:06:37.364486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.930 qpair failed and we were unable to recover it. 00:28:27.930 [2024-12-11 10:06:37.364709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.930 [2024-12-11 10:06:37.364742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.930 qpair failed and we were unable to recover it. 00:28:27.930 [2024-12-11 10:06:37.364933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.930 [2024-12-11 10:06:37.364966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.930 qpair failed and we were unable to recover it. 00:28:27.930 [2024-12-11 10:06:37.365203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.930 [2024-12-11 10:06:37.365244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.930 qpair failed and we were unable to recover it. 00:28:27.930 [2024-12-11 10:06:37.365428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.930 [2024-12-11 10:06:37.365461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.930 qpair failed and we were unable to recover it. 
00:28:27.930 [2024-12-11 10:06:37.365652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.930 [2024-12-11 10:06:37.365686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.930 qpair failed and we were unable to recover it. 00:28:27.930 [2024-12-11 10:06:37.365882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.930 [2024-12-11 10:06:37.365916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.930 qpair failed and we were unable to recover it. 00:28:27.930 [2024-12-11 10:06:37.366093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.930 [2024-12-11 10:06:37.366126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.930 qpair failed and we were unable to recover it. 00:28:27.930 [2024-12-11 10:06:37.366314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.930 [2024-12-11 10:06:37.366349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.930 qpair failed and we were unable to recover it. 00:28:27.930 [2024-12-11 10:06:37.366592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.930 [2024-12-11 10:06:37.366625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.930 qpair failed and we were unable to recover it. 00:28:27.930 [2024-12-11 10:06:37.366738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.930 [2024-12-11 10:06:37.366772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.930 qpair failed and we were unable to recover it. 00:28:27.930 [2024-12-11 10:06:37.366909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.930 [2024-12-11 10:06:37.366942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.930 qpair failed and we were unable to recover it. 00:28:27.930 [2024-12-11 10:06:37.367207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.930 [2024-12-11 10:06:37.367249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.930 qpair failed and we were unable to recover it. 00:28:27.930 [2024-12-11 10:06:37.367494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.930 [2024-12-11 10:06:37.367528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.930 qpair failed and we were unable to recover it. 00:28:27.930 [2024-12-11 10:06:37.367729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.930 [2024-12-11 10:06:37.367762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.930 qpair failed and we were unable to recover it. 
00:28:27.930 [2024-12-11 10:06:37.368032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.930 [2024-12-11 10:06:37.368066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.930 qpair failed and we were unable to recover it. 00:28:27.930 [2024-12-11 10:06:37.368210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.930 [2024-12-11 10:06:37.368253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.930 qpair failed and we were unable to recover it. 00:28:27.930 [2024-12-11 10:06:37.368447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.930 [2024-12-11 10:06:37.368480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.930 qpair failed and we were unable to recover it. 00:28:27.930 [2024-12-11 10:06:37.368627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.930 [2024-12-11 10:06:37.368661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.930 qpair failed and we were unable to recover it. 00:28:27.930 [2024-12-11 10:06:37.368935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.930 [2024-12-11 10:06:37.368968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.930 qpair failed and we were unable to recover it. 00:28:27.930 [2024-12-11 10:06:37.369083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.930 [2024-12-11 10:06:37.369117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.930 qpair failed and we were unable to recover it. 00:28:27.930 [2024-12-11 10:06:37.369262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.930 [2024-12-11 10:06:37.369296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.930 qpair failed and we were unable to recover it. 00:28:27.930 [2024-12-11 10:06:37.369542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.930 [2024-12-11 10:06:37.369575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.930 qpair failed and we were unable to recover it. 00:28:27.930 [2024-12-11 10:06:37.369855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.930 [2024-12-11 10:06:37.369889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.930 qpair failed and we were unable to recover it. 00:28:27.930 [2024-12-11 10:06:37.370084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.930 [2024-12-11 10:06:37.370117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.930 qpair failed and we were unable to recover it. 
00:28:27.930 [2024-12-11 10:06:37.370379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.930 [2024-12-11 10:06:37.370413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.930 qpair failed and we were unable to recover it. 00:28:27.930 [2024-12-11 10:06:37.370556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.930 [2024-12-11 10:06:37.370589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.930 qpair failed and we were unable to recover it. 00:28:27.930 [2024-12-11 10:06:37.370704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.930 [2024-12-11 10:06:37.370738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.930 qpair failed and we were unable to recover it. 00:28:27.930 [2024-12-11 10:06:37.370934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.931 [2024-12-11 10:06:37.370967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.931 qpair failed and we were unable to recover it. 00:28:27.931 [2024-12-11 10:06:37.371167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.931 [2024-12-11 10:06:37.371201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.931 qpair failed and we were unable to recover it. 00:28:27.931 [2024-12-11 10:06:37.371392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.931 [2024-12-11 10:06:37.371426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.931 qpair failed and we were unable to recover it. 00:28:27.931 [2024-12-11 10:06:37.371613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.931 [2024-12-11 10:06:37.371646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.931 qpair failed and we were unable to recover it. 00:28:27.931 [2024-12-11 10:06:37.371828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.931 [2024-12-11 10:06:37.371861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.931 qpair failed and we were unable to recover it. 00:28:27.931 [2024-12-11 10:06:37.372050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.931 [2024-12-11 10:06:37.372083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.931 qpair failed and we were unable to recover it. 00:28:27.931 [2024-12-11 10:06:37.372274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.931 [2024-12-11 10:06:37.372307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.931 qpair failed and we were unable to recover it. 
00:28:27.931 [2024-12-11 10:06:37.372500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.931 [2024-12-11 10:06:37.372533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.931 qpair failed and we were unable to recover it. 00:28:27.931 [2024-12-11 10:06:37.372719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.931 [2024-12-11 10:06:37.372758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.931 qpair failed and we were unable to recover it. 00:28:27.931 [2024-12-11 10:06:37.372962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.931 [2024-12-11 10:06:37.372995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.931 qpair failed and we were unable to recover it. 00:28:27.931 [2024-12-11 10:06:37.373242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.931 [2024-12-11 10:06:37.373277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.931 qpair failed and we were unable to recover it. 00:28:27.931 [2024-12-11 10:06:37.373472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.931 [2024-12-11 10:06:37.373506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.931 qpair failed and we were unable to recover it. 00:28:27.931 [2024-12-11 10:06:37.373686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.931 [2024-12-11 10:06:37.373719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.931 qpair failed and we were unable to recover it. 00:28:27.931 [2024-12-11 10:06:37.373918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.931 [2024-12-11 10:06:37.373952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.931 qpair failed and we were unable to recover it. 00:28:27.931 [2024-12-11 10:06:37.374148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.931 [2024-12-11 10:06:37.374182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.931 qpair failed and we were unable to recover it. 00:28:27.931 [2024-12-11 10:06:37.374446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.931 [2024-12-11 10:06:37.374481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.931 qpair failed and we were unable to recover it. 00:28:27.931 [2024-12-11 10:06:37.374672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.931 [2024-12-11 10:06:37.374705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.931 qpair failed and we were unable to recover it. 
00:28:27.931 [2024-12-11 10:06:37.374939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.931 [2024-12-11 10:06:37.374972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.931 qpair failed and we were unable to recover it. 00:28:27.931 [2024-12-11 10:06:37.375207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.931 [2024-12-11 10:06:37.375249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.931 qpair failed and we were unable to recover it. 00:28:27.931 [2024-12-11 10:06:37.375369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.931 [2024-12-11 10:06:37.375402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.931 qpair failed and we were unable to recover it. 00:28:27.931 [2024-12-11 10:06:37.375649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.931 [2024-12-11 10:06:37.375682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.931 qpair failed and we were unable to recover it. 00:28:27.931 [2024-12-11 10:06:37.375968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.931 [2024-12-11 10:06:37.376002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.931 qpair failed and we were unable to recover it. 00:28:27.931 [2024-12-11 10:06:37.376309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.931 [2024-12-11 10:06:37.376344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.931 qpair failed and we were unable to recover it. 00:28:27.931 [2024-12-11 10:06:37.376545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.931 [2024-12-11 10:06:37.376578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.931 qpair failed and we were unable to recover it. 00:28:27.931 [2024-12-11 10:06:37.376706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.931 [2024-12-11 10:06:37.376739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.931 qpair failed and we were unable to recover it. 00:28:27.931 [2024-12-11 10:06:37.376867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.931 [2024-12-11 10:06:37.376900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.931 qpair failed and we were unable to recover it. 00:28:27.931 [2024-12-11 10:06:37.377042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.931 [2024-12-11 10:06:37.377074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.931 qpair failed and we were unable to recover it. 
00:28:27.931 [2024-12-11 10:06:37.377228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.931 [2024-12-11 10:06:37.377262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.931 qpair failed and we were unable to recover it. 00:28:27.931 [2024-12-11 10:06:37.377526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.931 [2024-12-11 10:06:37.377560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.931 qpair failed and we were unable to recover it. 00:28:27.931 [2024-12-11 10:06:37.377705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.931 [2024-12-11 10:06:37.377738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.931 qpair failed and we were unable to recover it. 00:28:27.931 [2024-12-11 10:06:37.377869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.931 [2024-12-11 10:06:37.377902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.931 qpair failed and we were unable to recover it. 00:28:27.931 [2024-12-11 10:06:37.378079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.931 [2024-12-11 10:06:37.378112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.931 qpair failed and we were unable to recover it. 00:28:27.931 [2024-12-11 10:06:37.378368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.931 [2024-12-11 10:06:37.378402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.931 qpair failed and we were unable to recover it. 00:28:27.931 [2024-12-11 10:06:37.378551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.931 [2024-12-11 10:06:37.378584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.931 qpair failed and we were unable to recover it. 00:28:27.931 [2024-12-11 10:06:37.378702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.931 [2024-12-11 10:06:37.378735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.931 qpair failed and we were unable to recover it. 00:28:27.931 [2024-12-11 10:06:37.378987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.932 [2024-12-11 10:06:37.379021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.932 qpair failed and we were unable to recover it. 00:28:27.932 [2024-12-11 10:06:37.379144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.932 [2024-12-11 10:06:37.379178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.932 qpair failed and we were unable to recover it. 
00:28:27.932 [2024-12-11 10:06:37.379458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.932 [2024-12-11 10:06:37.379492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.932 qpair failed and we were unable to recover it. 00:28:27.932 [2024-12-11 10:06:37.379761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.932 [2024-12-11 10:06:37.379794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.932 qpair failed and we were unable to recover it. 00:28:27.932 [2024-12-11 10:06:37.379994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.932 [2024-12-11 10:06:37.380026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.932 qpair failed and we were unable to recover it. 00:28:27.932 [2024-12-11 10:06:37.380207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.932 [2024-12-11 10:06:37.380248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.932 qpair failed and we were unable to recover it. 00:28:27.932 [2024-12-11 10:06:37.380492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.932 [2024-12-11 10:06:37.380525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.932 qpair failed and we were unable to recover it. 00:28:27.932 [2024-12-11 10:06:37.380708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.932 [2024-12-11 10:06:37.380741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.932 qpair failed and we were unable to recover it. 00:28:27.932 [2024-12-11 10:06:37.380869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.932 [2024-12-11 10:06:37.380903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.932 qpair failed and we were unable to recover it. 00:28:27.932 [2024-12-11 10:06:37.381098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.932 [2024-12-11 10:06:37.381131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.932 qpair failed and we were unable to recover it. 00:28:27.932 [2024-12-11 10:06:37.381364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.932 [2024-12-11 10:06:37.381397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.932 qpair failed and we were unable to recover it. 00:28:27.932 [2024-12-11 10:06:37.381632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.932 [2024-12-11 10:06:37.381665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.932 qpair failed and we were unable to recover it. 
00:28:27.932 [2024-12-11 10:06:37.381912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.932 [2024-12-11 10:06:37.381945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.932 qpair failed and we were unable to recover it. 00:28:27.932 [2024-12-11 10:06:37.382139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.932 [2024-12-11 10:06:37.382177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.932 qpair failed and we were unable to recover it. 00:28:27.932 [2024-12-11 10:06:37.382430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.932 [2024-12-11 10:06:37.382465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.932 qpair failed and we were unable to recover it. 00:28:27.932 [2024-12-11 10:06:37.382656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.932 [2024-12-11 10:06:37.382690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.932 qpair failed and we were unable to recover it. 00:28:27.932 [2024-12-11 10:06:37.382907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.932 [2024-12-11 10:06:37.382940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.932 qpair failed and we were unable to recover it. 00:28:27.932 [2024-12-11 10:06:37.383148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.932 [2024-12-11 10:06:37.383181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.932 qpair failed and we were unable to recover it. 00:28:27.932 [2024-12-11 10:06:37.383385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.932 [2024-12-11 10:06:37.383420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.932 qpair failed and we were unable to recover it. 00:28:27.932 [2024-12-11 10:06:37.383609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.932 [2024-12-11 10:06:37.383642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.932 qpair failed and we were unable to recover it. 00:28:27.932 [2024-12-11 10:06:37.383930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.932 [2024-12-11 10:06:37.383962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.932 qpair failed and we were unable to recover it. 00:28:27.932 [2024-12-11 10:06:37.384210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.932 [2024-12-11 10:06:37.384254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.932 qpair failed and we were unable to recover it. 
00:28:27.932 [2024-12-11 10:06:37.384571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.932 [2024-12-11 10:06:37.384607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.932 qpair failed and we were unable to recover it. 00:28:27.932 [2024-12-11 10:06:37.384869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.932 [2024-12-11 10:06:37.384903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.932 qpair failed and we were unable to recover it. 00:28:27.932 [2024-12-11 10:06:37.385146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.932 [2024-12-11 10:06:37.385180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.932 qpair failed and we were unable to recover it. 00:28:27.932 [2024-12-11 10:06:37.385348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.932 [2024-12-11 10:06:37.385383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.932 qpair failed and we were unable to recover it. 00:28:27.932 [2024-12-11 10:06:37.385566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.932 [2024-12-11 10:06:37.385598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.932 qpair failed and we were unable to recover it. 00:28:27.932 [2024-12-11 10:06:37.385733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.932 [2024-12-11 10:06:37.385766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.932 qpair failed and we were unable to recover it. 00:28:27.932 [2024-12-11 10:06:37.385963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.932 [2024-12-11 10:06:37.385997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.932 qpair failed and we were unable to recover it. 00:28:27.932 [2024-12-11 10:06:37.386143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.932 [2024-12-11 10:06:37.386177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.932 qpair failed and we were unable to recover it. 00:28:27.932 [2024-12-11 10:06:37.386385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.932 [2024-12-11 10:06:37.386421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.932 qpair failed and we were unable to recover it. 00:28:27.932 [2024-12-11 10:06:37.386606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.932 [2024-12-11 10:06:37.386639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.932 qpair failed and we were unable to recover it. 
00:28:27.932 [2024-12-11 10:06:37.386915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.932 [2024-12-11 10:06:37.386947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:27.932 qpair failed and we were unable to recover it.
00:28:27.932 [... the same posix.c:1054 / nvme_tcp.c:2288 error pair repeats for every reconnect attempt on tqpair=0x7fb638000b90 from 10:06:37.387 through 10:06:37.416, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:28:27.936 [2024-12-11 10:06:37.417050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.936 [2024-12-11 10:06:37.417127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.936 qpair failed and we were unable to recover it.
00:28:27.939 [... the identical failure sequence then repeats for tqpair=0x7fb630000b90 through 10:06:37.437, again with no successful recovery ...]
00:28:27.939 [2024-12-11 10:06:37.438082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.939 [2024-12-11 10:06:37.438117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.939 qpair failed and we were unable to recover it. 00:28:27.939 [2024-12-11 10:06:37.438325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.939 [2024-12-11 10:06:37.438359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.939 qpair failed and we were unable to recover it. 00:28:27.939 [2024-12-11 10:06:37.438508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.939 [2024-12-11 10:06:37.438542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.939 qpair failed and we were unable to recover it. 00:28:27.939 [2024-12-11 10:06:37.438747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.939 [2024-12-11 10:06:37.438781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.939 qpair failed and we were unable to recover it. 00:28:27.939 [2024-12-11 10:06:37.439053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.939 [2024-12-11 10:06:37.439087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.939 qpair failed and we were unable to recover it. 00:28:27.939 [2024-12-11 10:06:37.439280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.939 [2024-12-11 10:06:37.439314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.939 qpair failed and we were unable to recover it. 00:28:27.939 [2024-12-11 10:06:37.439447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.939 [2024-12-11 10:06:37.439481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.939 qpair failed and we were unable to recover it. 00:28:27.939 [2024-12-11 10:06:37.439626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.939 [2024-12-11 10:06:37.439660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.939 qpair failed and we were unable to recover it. 00:28:27.939 [2024-12-11 10:06:37.439792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.939 [2024-12-11 10:06:37.439826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.939 qpair failed and we were unable to recover it. 00:28:27.939 [2024-12-11 10:06:37.440084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.939 [2024-12-11 10:06:37.440118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.939 qpair failed and we were unable to recover it. 
00:28:27.939 [2024-12-11 10:06:37.440339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.939 [2024-12-11 10:06:37.440374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.939 qpair failed and we were unable to recover it. 00:28:27.939 [2024-12-11 10:06:37.440530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.939 [2024-12-11 10:06:37.440564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.939 qpair failed and we were unable to recover it. 00:28:27.939 [2024-12-11 10:06:37.440766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.939 [2024-12-11 10:06:37.440800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.939 qpair failed and we were unable to recover it. 00:28:27.939 [2024-12-11 10:06:37.441095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.939 [2024-12-11 10:06:37.441129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.939 qpair failed and we were unable to recover it. 00:28:27.939 [2024-12-11 10:06:37.441317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.939 [2024-12-11 10:06:37.441351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.939 qpair failed and we were unable to recover it. 00:28:27.939 [2024-12-11 10:06:37.441637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.939 [2024-12-11 10:06:37.441671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.939 qpair failed and we were unable to recover it. 00:28:27.939 [2024-12-11 10:06:37.441810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.939 [2024-12-11 10:06:37.441845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.939 qpair failed and we were unable to recover it. 00:28:27.939 [2024-12-11 10:06:37.442195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.940 [2024-12-11 10:06:37.442260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.940 qpair failed and we were unable to recover it. 00:28:27.940 [2024-12-11 10:06:37.442520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.940 [2024-12-11 10:06:37.442554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.940 qpair failed and we were unable to recover it. 00:28:27.940 [2024-12-11 10:06:37.442754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.940 [2024-12-11 10:06:37.442789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.940 qpair failed and we were unable to recover it. 
00:28:27.940 [2024-12-11 10:06:37.443046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.940 [2024-12-11 10:06:37.443081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.940 qpair failed and we were unable to recover it. 00:28:27.940 [2024-12-11 10:06:37.443279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.940 [2024-12-11 10:06:37.443316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.940 qpair failed and we were unable to recover it. 00:28:27.940 [2024-12-11 10:06:37.443528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.940 [2024-12-11 10:06:37.443562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.940 qpair failed and we were unable to recover it. 00:28:27.940 [2024-12-11 10:06:37.443724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.940 [2024-12-11 10:06:37.443758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.940 qpair failed and we were unable to recover it. 00:28:27.940 [2024-12-11 10:06:37.443968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.940 [2024-12-11 10:06:37.444004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.940 qpair failed and we were unable to recover it. 00:28:27.940 [2024-12-11 10:06:37.444151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.940 [2024-12-11 10:06:37.444185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.940 qpair failed and we were unable to recover it. 00:28:27.940 [2024-12-11 10:06:37.444424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.940 [2024-12-11 10:06:37.444459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.940 qpair failed and we were unable to recover it. 00:28:27.940 [2024-12-11 10:06:37.444737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.940 [2024-12-11 10:06:37.444771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.940 qpair failed and we were unable to recover it. 00:28:27.940 [2024-12-11 10:06:37.445049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.940 [2024-12-11 10:06:37.445082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.940 qpair failed and we were unable to recover it. 00:28:27.940 [2024-12-11 10:06:37.445316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.940 [2024-12-11 10:06:37.445352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.940 qpair failed and we were unable to recover it. 
00:28:27.940 [2024-12-11 10:06:37.445627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.940 [2024-12-11 10:06:37.445661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.940 qpair failed and we were unable to recover it. 00:28:27.940 [2024-12-11 10:06:37.445812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.940 [2024-12-11 10:06:37.445847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.940 qpair failed and we were unable to recover it. 00:28:27.940 [2024-12-11 10:06:37.446043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.940 [2024-12-11 10:06:37.446079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.940 qpair failed and we were unable to recover it. 00:28:27.940 [2024-12-11 10:06:37.446306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.940 [2024-12-11 10:06:37.446340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.940 qpair failed and we were unable to recover it. 00:28:27.940 [2024-12-11 10:06:37.446541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.940 [2024-12-11 10:06:37.446575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.940 qpair failed and we were unable to recover it. 00:28:27.940 [2024-12-11 10:06:37.446787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.940 [2024-12-11 10:06:37.446821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.940 qpair failed and we were unable to recover it. 00:28:27.940 [2024-12-11 10:06:37.447108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.940 [2024-12-11 10:06:37.447143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.940 qpair failed and we were unable to recover it. 00:28:27.940 [2024-12-11 10:06:37.447405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.940 [2024-12-11 10:06:37.447440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.940 qpair failed and we were unable to recover it. 00:28:27.940 [2024-12-11 10:06:37.447643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.940 [2024-12-11 10:06:37.447683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.940 qpair failed and we were unable to recover it. 00:28:27.940 [2024-12-11 10:06:37.447892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.940 [2024-12-11 10:06:37.447926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.940 qpair failed and we were unable to recover it. 
00:28:27.940 [2024-12-11 10:06:37.448134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.940 [2024-12-11 10:06:37.448169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.940 qpair failed and we were unable to recover it. 00:28:27.940 [2024-12-11 10:06:37.448371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.940 [2024-12-11 10:06:37.448406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.940 qpair failed and we were unable to recover it. 00:28:27.940 [2024-12-11 10:06:37.448678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.940 [2024-12-11 10:06:37.448711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.940 qpair failed and we were unable to recover it. 00:28:27.940 [2024-12-11 10:06:37.448933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.940 [2024-12-11 10:06:37.448966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.940 qpair failed and we were unable to recover it. 00:28:27.940 [2024-12-11 10:06:37.449165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.940 [2024-12-11 10:06:37.449199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.940 qpair failed and we were unable to recover it. 00:28:27.940 [2024-12-11 10:06:37.449434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.940 [2024-12-11 10:06:37.449469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.940 qpair failed and we were unable to recover it. 00:28:27.940 [2024-12-11 10:06:37.449621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.940 [2024-12-11 10:06:37.449657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.940 qpair failed and we were unable to recover it. 00:28:27.940 [2024-12-11 10:06:37.449899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.940 [2024-12-11 10:06:37.449933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.940 qpair failed and we were unable to recover it. 00:28:27.940 [2024-12-11 10:06:37.450242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.940 [2024-12-11 10:06:37.450278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.940 qpair failed and we were unable to recover it. 00:28:27.940 [2024-12-11 10:06:37.450431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.940 [2024-12-11 10:06:37.450464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.940 qpair failed and we were unable to recover it. 
00:28:27.940 [2024-12-11 10:06:37.450754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.940 [2024-12-11 10:06:37.450789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.940 qpair failed and we were unable to recover it. 00:28:27.940 [2024-12-11 10:06:37.450980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.940 [2024-12-11 10:06:37.451015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.940 qpair failed and we were unable to recover it. 00:28:27.940 [2024-12-11 10:06:37.451310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.940 [2024-12-11 10:06:37.451345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.940 qpair failed and we were unable to recover it. 00:28:27.940 [2024-12-11 10:06:37.451537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.940 [2024-12-11 10:06:37.451572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.941 qpair failed and we were unable to recover it. 00:28:27.941 [2024-12-11 10:06:37.451762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.941 [2024-12-11 10:06:37.451796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.941 qpair failed and we were unable to recover it. 00:28:27.941 [2024-12-11 10:06:37.452054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.941 [2024-12-11 10:06:37.452089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.941 qpair failed and we were unable to recover it. 00:28:27.941 [2024-12-11 10:06:37.452274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.941 [2024-12-11 10:06:37.452309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:27.941 qpair failed and we were unable to recover it. 00:28:28.218 [2024-12-11 10:06:37.452509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.218 [2024-12-11 10:06:37.452543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.218 qpair failed and we were unable to recover it. 00:28:28.218 [2024-12-11 10:06:37.452694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.218 [2024-12-11 10:06:37.452730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.218 qpair failed and we were unable to recover it. 00:28:28.218 [2024-12-11 10:06:37.452944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.218 [2024-12-11 10:06:37.452978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.218 qpair failed and we were unable to recover it. 
00:28:28.218 [2024-12-11 10:06:37.453181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.218 [2024-12-11 10:06:37.453241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.218 qpair failed and we were unable to recover it. 00:28:28.218 [2024-12-11 10:06:37.453544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.218 [2024-12-11 10:06:37.453580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.218 qpair failed and we were unable to recover it. 00:28:28.219 [2024-12-11 10:06:37.453778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.219 [2024-12-11 10:06:37.453812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.219 qpair failed and we were unable to recover it. 00:28:28.219 [2024-12-11 10:06:37.454016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.219 [2024-12-11 10:06:37.454051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.219 qpair failed and we were unable to recover it. 00:28:28.219 [2024-12-11 10:06:37.454248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.219 [2024-12-11 10:06:37.454283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.219 qpair failed and we were unable to recover it. 00:28:28.219 [2024-12-11 10:06:37.454574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.219 [2024-12-11 10:06:37.454607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.219 qpair failed and we were unable to recover it. 00:28:28.219 [2024-12-11 10:06:37.454735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.219 [2024-12-11 10:06:37.454769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.219 qpair failed and we were unable to recover it. 00:28:28.219 [2024-12-11 10:06:37.455082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.219 [2024-12-11 10:06:37.455117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.219 qpair failed and we were unable to recover it. 00:28:28.219 [2024-12-11 10:06:37.455279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.219 [2024-12-11 10:06:37.455312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.219 qpair failed and we were unable to recover it. 00:28:28.219 [2024-12-11 10:06:37.455497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.219 [2024-12-11 10:06:37.455532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.219 qpair failed and we were unable to recover it. 
00:28:28.219 [2024-12-11 10:06:37.455684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.219 [2024-12-11 10:06:37.455719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.219 qpair failed and we were unable to recover it. 00:28:28.219 [2024-12-11 10:06:37.456002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.219 [2024-12-11 10:06:37.456036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.219 qpair failed and we were unable to recover it. 00:28:28.219 [2024-12-11 10:06:37.456268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.219 [2024-12-11 10:06:37.456303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.219 qpair failed and we were unable to recover it. 00:28:28.219 [2024-12-11 10:06:37.456532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.219 [2024-12-11 10:06:37.456567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.219 qpair failed and we were unable to recover it. 00:28:28.219 [2024-12-11 10:06:37.456767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.219 [2024-12-11 10:06:37.456800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.219 qpair failed and we were unable to recover it. 00:28:28.219 [2024-12-11 10:06:37.456993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.219 [2024-12-11 10:06:37.457028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.219 qpair failed and we were unable to recover it. 00:28:28.219 [2024-12-11 10:06:37.457261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.219 [2024-12-11 10:06:37.457296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.219 qpair failed and we were unable to recover it. 00:28:28.219 [2024-12-11 10:06:37.457439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.219 [2024-12-11 10:06:37.457473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.219 qpair failed and we were unable to recover it. 00:28:28.219 [2024-12-11 10:06:37.457685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.219 [2024-12-11 10:06:37.457725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.219 qpair failed and we were unable to recover it. 00:28:28.219 [2024-12-11 10:06:37.457878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.219 [2024-12-11 10:06:37.457913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.219 qpair failed and we were unable to recover it. 
00:28:28.219 [2024-12-11 10:06:37.458205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.219 [2024-12-11 10:06:37.458254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.219 qpair failed and we were unable to recover it. 00:28:28.219 [2024-12-11 10:06:37.458473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.219 [2024-12-11 10:06:37.458507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.219 qpair failed and we were unable to recover it. 00:28:28.219 [2024-12-11 10:06:37.458787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.219 [2024-12-11 10:06:37.458821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.219 qpair failed and we were unable to recover it. 00:28:28.219 [2024-12-11 10:06:37.459148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.219 [2024-12-11 10:06:37.459184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.219 qpair failed and we were unable to recover it. 00:28:28.219 [2024-12-11 10:06:37.459419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.219 [2024-12-11 10:06:37.459454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.219 qpair failed and we were unable to recover it. 00:28:28.219 [2024-12-11 10:06:37.459731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.219 [2024-12-11 10:06:37.459765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.219 qpair failed and we were unable to recover it. 00:28:28.219 [2024-12-11 10:06:37.459986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.219 [2024-12-11 10:06:37.460021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.219 qpair failed and we were unable to recover it. 00:28:28.219 [2024-12-11 10:06:37.460215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.219 [2024-12-11 10:06:37.460259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.219 qpair failed and we were unable to recover it. 00:28:28.219 [2024-12-11 10:06:37.460378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.219 [2024-12-11 10:06:37.460411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.219 qpair failed and we were unable to recover it. 00:28:28.219 [2024-12-11 10:06:37.460618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.219 [2024-12-11 10:06:37.460652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.219 qpair failed and we were unable to recover it. 
00:28:28.219 [2024-12-11 10:06:37.460806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.219 [2024-12-11 10:06:37.460851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.219 qpair failed and we were unable to recover it. 00:28:28.219 [2024-12-11 10:06:37.461149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.220 [2024-12-11 10:06:37.461183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.220 qpair failed and we were unable to recover it. 00:28:28.220 [2024-12-11 10:06:37.461362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.220 [2024-12-11 10:06:37.461397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.220 qpair failed and we were unable to recover it. 00:28:28.220 [2024-12-11 10:06:37.461547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.220 [2024-12-11 10:06:37.461581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.220 qpair failed and we were unable to recover it. 00:28:28.220 [2024-12-11 10:06:37.461788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.220 [2024-12-11 10:06:37.461822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.220 qpair failed and we were unable to recover it. 00:28:28.220 [2024-12-11 10:06:37.461958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.220 [2024-12-11 10:06:37.461993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.220 qpair failed and we were unable to recover it. 00:28:28.220 [2024-12-11 10:06:37.462274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.220 [2024-12-11 10:06:37.462310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.220 qpair failed and we were unable to recover it. 00:28:28.220 [2024-12-11 10:06:37.462500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.220 [2024-12-11 10:06:37.462534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.220 qpair failed and we were unable to recover it. 00:28:28.220 [2024-12-11 10:06:37.462731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.220 [2024-12-11 10:06:37.462764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.220 qpair failed and we were unable to recover it. 00:28:28.220 [2024-12-11 10:06:37.462972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.220 [2024-12-11 10:06:37.463005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.220 qpair failed and we were unable to recover it. 
00:28:28.220 [2024-12-11 10:06:37.463197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.220 [2024-12-11 10:06:37.463238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.220 qpair failed and we were unable to recover it. 00:28:28.220 [2024-12-11 10:06:37.463519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.220 [2024-12-11 10:06:37.463553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.220 qpair failed and we were unable to recover it. 00:28:28.220 [2024-12-11 10:06:37.463758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.220 [2024-12-11 10:06:37.463791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.220 qpair failed and we were unable to recover it. 00:28:28.220 [2024-12-11 10:06:37.463983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.220 [2024-12-11 10:06:37.464017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.220 qpair failed and we were unable to recover it. 00:28:28.220 [2024-12-11 10:06:37.464300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.220 [2024-12-11 10:06:37.464335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.220 qpair failed and we were unable to recover it. 00:28:28.220 [2024-12-11 10:06:37.464543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.220 [2024-12-11 10:06:37.464578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.220 qpair failed and we were unable to recover it. 00:28:28.220 [2024-12-11 10:06:37.464708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.220 [2024-12-11 10:06:37.464741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.220 qpair failed and we were unable to recover it. 00:28:28.220 [2024-12-11 10:06:37.464990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.220 [2024-12-11 10:06:37.465024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.220 qpair failed and we were unable to recover it. 00:28:28.220 [2024-12-11 10:06:37.465208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.220 [2024-12-11 10:06:37.465250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.220 qpair failed and we were unable to recover it. 00:28:28.220 [2024-12-11 10:06:37.465382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.220 [2024-12-11 10:06:37.465416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.220 qpair failed and we were unable to recover it. 
00:28:28.220 [2024-12-11 10:06:37.465628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.220 [2024-12-11 10:06:37.465662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.220 qpair failed and we were unable to recover it. 00:28:28.220 [2024-12-11 10:06:37.465798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.220 [2024-12-11 10:06:37.465832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.220 qpair failed and we were unable to recover it. 00:28:28.220 [2024-12-11 10:06:37.466107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.220 [2024-12-11 10:06:37.466141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.220 qpair failed and we were unable to recover it. 00:28:28.220 [2024-12-11 10:06:37.466319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.220 [2024-12-11 10:06:37.466355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.220 qpair failed and we were unable to recover it. 00:28:28.220 [2024-12-11 10:06:37.466558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.220 [2024-12-11 10:06:37.466592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.220 qpair failed and we were unable to recover it. 00:28:28.220 [2024-12-11 10:06:37.466792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.220 [2024-12-11 10:06:37.466827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.220 qpair failed and we were unable to recover it. 00:28:28.220 [2024-12-11 10:06:37.466944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.220 [2024-12-11 10:06:37.466979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.220 qpair failed and we were unable to recover it. 00:28:28.220 [2024-12-11 10:06:37.467164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.220 [2024-12-11 10:06:37.467197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.220 qpair failed and we were unable to recover it. 00:28:28.220 [2024-12-11 10:06:37.467343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.220 [2024-12-11 10:06:37.467384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.220 qpair failed and we were unable to recover it. 00:28:28.220 [2024-12-11 10:06:37.467605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.220 [2024-12-11 10:06:37.467638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.220 qpair failed and we were unable to recover it. 
00:28:28.220 [2024-12-11 10:06:37.467925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.220 [2024-12-11 10:06:37.467959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.220 qpair failed and we were unable to recover it. 00:28:28.220 [2024-12-11 10:06:37.468240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.220 [2024-12-11 10:06:37.468275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.220 qpair failed and we were unable to recover it. 00:28:28.220 [2024-12-11 10:06:37.468549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.220 [2024-12-11 10:06:37.468585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.220 qpair failed and we were unable to recover it. 00:28:28.220 [2024-12-11 10:06:37.468941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.220 [2024-12-11 10:06:37.468975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.220 qpair failed and we were unable to recover it. 00:28:28.220 [2024-12-11 10:06:37.469237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.220 [2024-12-11 10:06:37.469272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.220 qpair failed and we were unable to recover it. 00:28:28.220 [2024-12-11 10:06:37.469458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.220 [2024-12-11 10:06:37.469492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.220 qpair failed and we were unable to recover it. 00:28:28.221 [2024-12-11 10:06:37.469652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.221 [2024-12-11 10:06:37.469687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.221 qpair failed and we were unable to recover it. 00:28:28.221 [2024-12-11 10:06:37.469825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.221 [2024-12-11 10:06:37.469860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.221 qpair failed and we were unable to recover it. 00:28:28.221 [2024-12-11 10:06:37.470072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.221 [2024-12-11 10:06:37.470106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.221 qpair failed and we were unable to recover it. 00:28:28.221 [2024-12-11 10:06:37.470438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.221 [2024-12-11 10:06:37.470473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.221 qpair failed and we were unable to recover it. 
00:28:28.221 [2024-12-11 10:06:37.470659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.221 [2024-12-11 10:06:37.470693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.221 qpair failed and we were unable to recover it. 00:28:28.221 [2024-12-11 10:06:37.470840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.221 [2024-12-11 10:06:37.470874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.221 qpair failed and we were unable to recover it. 00:28:28.221 [2024-12-11 10:06:37.471156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.221 [2024-12-11 10:06:37.471191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.221 qpair failed and we were unable to recover it. 00:28:28.221 [2024-12-11 10:06:37.471342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.221 [2024-12-11 10:06:37.471377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.221 qpair failed and we were unable to recover it. 00:28:28.221 [2024-12-11 10:06:37.471575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.221 [2024-12-11 10:06:37.471608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.221 qpair failed and we were unable to recover it. 00:28:28.221 [2024-12-11 10:06:37.471744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.221 [2024-12-11 10:06:37.471779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.221 qpair failed and we were unable to recover it. 00:28:28.221 [2024-12-11 10:06:37.472093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.221 [2024-12-11 10:06:37.472131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.221 qpair failed and we were unable to recover it. 00:28:28.221 [2024-12-11 10:06:37.472382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.221 [2024-12-11 10:06:37.472418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.221 qpair failed and we were unable to recover it. 00:28:28.221 [2024-12-11 10:06:37.472708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.221 [2024-12-11 10:06:37.472742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.221 qpair failed and we were unable to recover it. 00:28:28.221 [2024-12-11 10:06:37.472974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.221 [2024-12-11 10:06:37.473007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.221 qpair failed and we were unable to recover it. 
00:28:28.227 [2024-12-11 10:06:37.520666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.227 [2024-12-11 10:06:37.520699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.227 qpair failed and we were unable to recover it. 00:28:28.227 [2024-12-11 10:06:37.520882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.227 [2024-12-11 10:06:37.520915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.227 qpair failed and we were unable to recover it. 00:28:28.227 [2024-12-11 10:06:37.521136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.227 [2024-12-11 10:06:37.521169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.227 qpair failed and we were unable to recover it. 00:28:28.227 [2024-12-11 10:06:37.521460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.227 [2024-12-11 10:06:37.521495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.227 qpair failed and we were unable to recover it. 00:28:28.227 [2024-12-11 10:06:37.521769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.227 [2024-12-11 10:06:37.521803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.227 qpair failed and we were unable to recover it. 00:28:28.227 [2024-12-11 10:06:37.522085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.227 [2024-12-11 10:06:37.522118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.227 qpair failed and we were unable to recover it. 00:28:28.227 [2024-12-11 10:06:37.522239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.227 [2024-12-11 10:06:37.522274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.227 qpair failed and we were unable to recover it. 00:28:28.227 [2024-12-11 10:06:37.522463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.227 [2024-12-11 10:06:37.522496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.227 qpair failed and we were unable to recover it. 00:28:28.227 [2024-12-11 10:06:37.522702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.227 [2024-12-11 10:06:37.522736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.227 qpair failed and we were unable to recover it. 00:28:28.227 [2024-12-11 10:06:37.522948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.227 [2024-12-11 10:06:37.522981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.227 qpair failed and we were unable to recover it. 
00:28:28.227 [2024-12-11 10:06:37.523183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.227 [2024-12-11 10:06:37.523224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.227 qpair failed and we were unable to recover it. 00:28:28.227 [2024-12-11 10:06:37.523487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.227 [2024-12-11 10:06:37.523521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.227 qpair failed and we were unable to recover it. 00:28:28.227 [2024-12-11 10:06:37.523776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.227 [2024-12-11 10:06:37.523810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.227 qpair failed and we were unable to recover it. 00:28:28.227 [2024-12-11 10:06:37.524108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.227 [2024-12-11 10:06:37.524142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.227 qpair failed and we were unable to recover it. 00:28:28.227 [2024-12-11 10:06:37.524344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.227 [2024-12-11 10:06:37.524379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.227 qpair failed and we were unable to recover it. 00:28:28.227 [2024-12-11 10:06:37.524659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.227 [2024-12-11 10:06:37.524693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.227 qpair failed and we were unable to recover it. 00:28:28.227 [2024-12-11 10:06:37.524842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.227 [2024-12-11 10:06:37.524876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.227 qpair failed and we were unable to recover it. 00:28:28.227 [2024-12-11 10:06:37.525133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.227 [2024-12-11 10:06:37.525167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.227 qpair failed and we were unable to recover it. 00:28:28.227 [2024-12-11 10:06:37.525345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.227 [2024-12-11 10:06:37.525379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.227 qpair failed and we were unable to recover it. 00:28:28.227 [2024-12-11 10:06:37.525595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.227 [2024-12-11 10:06:37.525629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.227 qpair failed and we were unable to recover it. 
00:28:28.227 [2024-12-11 10:06:37.525835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.227 [2024-12-11 10:06:37.525869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.227 qpair failed and we were unable to recover it. 00:28:28.227 [2024-12-11 10:06:37.526006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.227 [2024-12-11 10:06:37.526040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.227 qpair failed and we were unable to recover it. 00:28:28.227 [2024-12-11 10:06:37.526263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.227 [2024-12-11 10:06:37.526299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.227 qpair failed and we were unable to recover it. 00:28:28.227 [2024-12-11 10:06:37.526437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.227 [2024-12-11 10:06:37.526471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.227 qpair failed and we were unable to recover it. 00:28:28.227 [2024-12-11 10:06:37.526749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.227 [2024-12-11 10:06:37.526782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.227 qpair failed and we were unable to recover it. 00:28:28.227 [2024-12-11 10:06:37.527065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.227 [2024-12-11 10:06:37.527098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.227 qpair failed and we were unable to recover it. 00:28:28.228 [2024-12-11 10:06:37.527369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.228 [2024-12-11 10:06:37.527404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.228 qpair failed and we were unable to recover it. 00:28:28.228 [2024-12-11 10:06:37.527545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.228 [2024-12-11 10:06:37.527580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.228 qpair failed and we were unable to recover it. 00:28:28.228 [2024-12-11 10:06:37.527836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.228 [2024-12-11 10:06:37.527875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.228 qpair failed and we were unable to recover it. 00:28:28.228 [2024-12-11 10:06:37.528164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.228 [2024-12-11 10:06:37.528198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.228 qpair failed and we were unable to recover it. 
00:28:28.228 [2024-12-11 10:06:37.528473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.228 [2024-12-11 10:06:37.528508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.228 qpair failed and we were unable to recover it. 00:28:28.228 [2024-12-11 10:06:37.528766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.228 [2024-12-11 10:06:37.528800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.228 qpair failed and we were unable to recover it. 00:28:28.228 [2024-12-11 10:06:37.529075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.228 [2024-12-11 10:06:37.529109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.228 qpair failed and we were unable to recover it. 00:28:28.228 [2024-12-11 10:06:37.529317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.228 [2024-12-11 10:06:37.529351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.228 qpair failed and we were unable to recover it. 00:28:28.228 [2024-12-11 10:06:37.529564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.228 [2024-12-11 10:06:37.529598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.228 qpair failed and we were unable to recover it. 00:28:28.228 [2024-12-11 10:06:37.529753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.228 [2024-12-11 10:06:37.529786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.228 qpair failed and we were unable to recover it. 00:28:28.228 [2024-12-11 10:06:37.530063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.228 [2024-12-11 10:06:37.530097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.228 qpair failed and we were unable to recover it. 00:28:28.228 [2024-12-11 10:06:37.530297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.228 [2024-12-11 10:06:37.530334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.228 qpair failed and we were unable to recover it. 00:28:28.228 [2024-12-11 10:06:37.530519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.228 [2024-12-11 10:06:37.530552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.228 qpair failed and we were unable to recover it. 00:28:28.228 [2024-12-11 10:06:37.530835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.228 [2024-12-11 10:06:37.530869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.228 qpair failed and we were unable to recover it. 
00:28:28.228 [2024-12-11 10:06:37.531135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.228 [2024-12-11 10:06:37.531169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.228 qpair failed and we were unable to recover it. 00:28:28.228 [2024-12-11 10:06:37.531487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.228 [2024-12-11 10:06:37.531523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.228 qpair failed and we were unable to recover it. 00:28:28.228 [2024-12-11 10:06:37.531659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.228 [2024-12-11 10:06:37.531693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.228 qpair failed and we were unable to recover it. 00:28:28.228 [2024-12-11 10:06:37.531967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.228 [2024-12-11 10:06:37.532000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.228 qpair failed and we were unable to recover it. 00:28:28.228 [2024-12-11 10:06:37.532204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.228 [2024-12-11 10:06:37.532248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.228 qpair failed and we were unable to recover it. 00:28:28.228 [2024-12-11 10:06:37.532534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.228 [2024-12-11 10:06:37.532568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.228 qpair failed and we were unable to recover it. 00:28:28.228 [2024-12-11 10:06:37.532831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.228 [2024-12-11 10:06:37.532865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.228 qpair failed and we were unable to recover it. 00:28:28.228 [2024-12-11 10:06:37.533079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.228 [2024-12-11 10:06:37.533113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.228 qpair failed and we were unable to recover it. 00:28:28.228 [2024-12-11 10:06:37.533312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.228 [2024-12-11 10:06:37.533347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.228 qpair failed and we were unable to recover it. 00:28:28.228 [2024-12-11 10:06:37.533629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.228 [2024-12-11 10:06:37.533662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.228 qpair failed and we were unable to recover it. 
00:28:28.228 [2024-12-11 10:06:37.533962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.228 [2024-12-11 10:06:37.533997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.228 qpair failed and we were unable to recover it. 00:28:28.228 [2024-12-11 10:06:37.534268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.228 [2024-12-11 10:06:37.534304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.228 qpair failed and we were unable to recover it. 00:28:28.228 [2024-12-11 10:06:37.534580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.228 [2024-12-11 10:06:37.534615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.228 qpair failed and we were unable to recover it. 00:28:28.228 [2024-12-11 10:06:37.534899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.228 [2024-12-11 10:06:37.534933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.228 qpair failed and we were unable to recover it. 00:28:28.228 [2024-12-11 10:06:37.535189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.228 [2024-12-11 10:06:37.535232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.228 qpair failed and we were unable to recover it. 00:28:28.228 [2024-12-11 10:06:37.535462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.228 [2024-12-11 10:06:37.535497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.228 qpair failed and we were unable to recover it. 00:28:28.228 [2024-12-11 10:06:37.535703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.228 [2024-12-11 10:06:37.535736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.228 qpair failed and we were unable to recover it. 00:28:28.228 [2024-12-11 10:06:37.536014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.229 [2024-12-11 10:06:37.536047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.229 qpair failed and we were unable to recover it. 00:28:28.229 [2024-12-11 10:06:37.536358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.229 [2024-12-11 10:06:37.536392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.229 qpair failed and we were unable to recover it. 00:28:28.229 [2024-12-11 10:06:37.536675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.229 [2024-12-11 10:06:37.536709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.229 qpair failed and we were unable to recover it. 
00:28:28.229 [2024-12-11 10:06:37.536906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.229 [2024-12-11 10:06:37.536940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.229 qpair failed and we were unable to recover it. 00:28:28.229 [2024-12-11 10:06:37.537174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.229 [2024-12-11 10:06:37.537208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.229 qpair failed and we were unable to recover it. 00:28:28.229 [2024-12-11 10:06:37.537478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.229 [2024-12-11 10:06:37.537512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.229 qpair failed and we were unable to recover it. 00:28:28.229 [2024-12-11 10:06:37.537696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.229 [2024-12-11 10:06:37.537730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.229 qpair failed and we were unable to recover it. 00:28:28.229 [2024-12-11 10:06:37.538009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.229 [2024-12-11 10:06:37.538043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.229 qpair failed and we were unable to recover it. 00:28:28.229 [2024-12-11 10:06:37.538325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.229 [2024-12-11 10:06:37.538361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.229 qpair failed and we were unable to recover it. 00:28:28.229 [2024-12-11 10:06:37.538641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.229 [2024-12-11 10:06:37.538676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.229 qpair failed and we were unable to recover it. 00:28:28.229 [2024-12-11 10:06:37.538932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.229 [2024-12-11 10:06:37.538966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.229 qpair failed and we were unable to recover it. 00:28:28.229 [2024-12-11 10:06:37.539248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.229 [2024-12-11 10:06:37.539289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.229 qpair failed and we were unable to recover it. 00:28:28.229 [2024-12-11 10:06:37.539511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.229 [2024-12-11 10:06:37.539546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.229 qpair failed and we were unable to recover it. 
00:28:28.229 [2024-12-11 10:06:37.539767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.229 [2024-12-11 10:06:37.539801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.229 qpair failed and we were unable to recover it. 00:28:28.229 [2024-12-11 10:06:37.539985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.229 [2024-12-11 10:06:37.540019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.229 qpair failed and we were unable to recover it. 00:28:28.229 [2024-12-11 10:06:37.540285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.229 [2024-12-11 10:06:37.540320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.229 qpair failed and we were unable to recover it. 00:28:28.229 [2024-12-11 10:06:37.540550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.229 [2024-12-11 10:06:37.540585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.229 qpair failed and we were unable to recover it. 00:28:28.229 [2024-12-11 10:06:37.540781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.229 [2024-12-11 10:06:37.540814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.229 qpair failed and we were unable to recover it. 00:28:28.229 [2024-12-11 10:06:37.541090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.229 [2024-12-11 10:06:37.541124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.229 qpair failed and we were unable to recover it. 00:28:28.229 [2024-12-11 10:06:37.541310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.229 [2024-12-11 10:06:37.541346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.229 qpair failed and we were unable to recover it. 00:28:28.229 [2024-12-11 10:06:37.541549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.229 [2024-12-11 10:06:37.541583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.229 qpair failed and we were unable to recover it. 00:28:28.229 [2024-12-11 10:06:37.541837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.229 [2024-12-11 10:06:37.541871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.229 qpair failed and we were unable to recover it. 00:28:28.229 [2024-12-11 10:06:37.542073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.229 [2024-12-11 10:06:37.542108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.229 qpair failed and we were unable to recover it. 
00:28:28.229 [2024-12-11 10:06:37.542388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.229 [2024-12-11 10:06:37.542424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.229 qpair failed and we were unable to recover it. 00:28:28.229 [2024-12-11 10:06:37.542629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.229 [2024-12-11 10:06:37.542663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.229 qpair failed and we were unable to recover it. 00:28:28.229 [2024-12-11 10:06:37.542851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.229 [2024-12-11 10:06:37.542886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.229 qpair failed and we were unable to recover it. 00:28:28.229 [2024-12-11 10:06:37.543142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.229 [2024-12-11 10:06:37.543176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.229 qpair failed and we were unable to recover it. 00:28:28.229 [2024-12-11 10:06:37.543422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.229 [2024-12-11 10:06:37.543457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.229 qpair failed and we were unable to recover it. 00:28:28.230 [2024-12-11 10:06:37.543756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.230 [2024-12-11 10:06:37.543791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.230 qpair failed and we were unable to recover it. 00:28:28.230 [2024-12-11 10:06:37.544058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.230 [2024-12-11 10:06:37.544093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.230 qpair failed and we were unable to recover it. 00:28:28.230 [2024-12-11 10:06:37.544348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.230 [2024-12-11 10:06:37.544384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.230 qpair failed and we were unable to recover it. 00:28:28.230 [2024-12-11 10:06:37.544674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.230 [2024-12-11 10:06:37.544708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.230 qpair failed and we were unable to recover it. 00:28:28.230 [2024-12-11 10:06:37.545000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.230 [2024-12-11 10:06:37.545034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.230 qpair failed and we were unable to recover it. 
00:28:28.230 [2024-12-11 10:06:37.545307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.230 [2024-12-11 10:06:37.545343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.230 qpair failed and we were unable to recover it. 00:28:28.230 [2024-12-11 10:06:37.545610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.230 [2024-12-11 10:06:37.545645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.230 qpair failed and we were unable to recover it. 00:28:28.230 [2024-12-11 10:06:37.545877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.230 [2024-12-11 10:06:37.545912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.230 qpair failed and we were unable to recover it. 00:28:28.230 [2024-12-11 10:06:37.546191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.230 [2024-12-11 10:06:37.546243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.230 qpair failed and we were unable to recover it. 00:28:28.230 [2024-12-11 10:06:37.546380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.230 [2024-12-11 10:06:37.546414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.230 qpair failed and we were unable to recover it. 00:28:28.230 [2024-12-11 10:06:37.546604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.230 [2024-12-11 10:06:37.546638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.230 qpair failed and we were unable to recover it. 00:28:28.230 [2024-12-11 10:06:37.546913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.230 [2024-12-11 10:06:37.546947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.230 qpair failed and we were unable to recover it. 00:28:28.230 [2024-12-11 10:06:37.547159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.230 [2024-12-11 10:06:37.547192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.230 qpair failed and we were unable to recover it. 00:28:28.230 [2024-12-11 10:06:37.547394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.230 [2024-12-11 10:06:37.547428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.230 qpair failed and we were unable to recover it. 00:28:28.230 [2024-12-11 10:06:37.547700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.230 [2024-12-11 10:06:37.547734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.230 qpair failed and we were unable to recover it. 
00:28:28.230 [2024-12-11 10:06:37.548011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.230 [2024-12-11 10:06:37.548045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.230 qpair failed and we were unable to recover it. 00:28:28.230 [2024-12-11 10:06:37.548332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.230 [2024-12-11 10:06:37.548367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.230 qpair failed and we were unable to recover it. 00:28:28.230 [2024-12-11 10:06:37.548641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.230 [2024-12-11 10:06:37.548674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.230 qpair failed and we were unable to recover it. 00:28:28.230 [2024-12-11 10:06:37.548950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.230 [2024-12-11 10:06:37.548983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.230 qpair failed and we were unable to recover it. 00:28:28.230 [2024-12-11 10:06:37.549302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.230 [2024-12-11 10:06:37.549336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.230 qpair failed and we were unable to recover it. 00:28:28.230 [2024-12-11 10:06:37.549621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.230 [2024-12-11 10:06:37.549655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.230 qpair failed and we were unable to recover it. 00:28:28.230 [2024-12-11 10:06:37.549786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.230 [2024-12-11 10:06:37.549820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.230 qpair failed and we were unable to recover it. 00:28:28.230 [2024-12-11 10:06:37.550025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.230 [2024-12-11 10:06:37.550057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.230 qpair failed and we were unable to recover it. 00:28:28.230 [2024-12-11 10:06:37.550251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.230 [2024-12-11 10:06:37.550293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.230 qpair failed and we were unable to recover it. 00:28:28.230 [2024-12-11 10:06:37.550495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.230 [2024-12-11 10:06:37.550529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.230 qpair failed and we were unable to recover it. 
00:28:28.230 [2024-12-11 10:06:37.550724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.230 [2024-12-11 10:06:37.550757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.230 qpair failed and we were unable to recover it. 00:28:28.230 [2024-12-11 10:06:37.551033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.230 [2024-12-11 10:06:37.551067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.230 qpair failed and we were unable to recover it. 00:28:28.230 [2024-12-11 10:06:37.551258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.230 [2024-12-11 10:06:37.551293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.230 qpair failed and we were unable to recover it. 00:28:28.230 [2024-12-11 10:06:37.551554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.230 [2024-12-11 10:06:37.551588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.230 qpair failed and we were unable to recover it. 00:28:28.230 [2024-12-11 10:06:37.551771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.230 [2024-12-11 10:06:37.551804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.230 qpair failed and we were unable to recover it. 00:28:28.230 [2024-12-11 10:06:37.552084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.230 [2024-12-11 10:06:37.552118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.230 qpair failed and we were unable to recover it. 00:28:28.230 [2024-12-11 10:06:37.552321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.230 [2024-12-11 10:06:37.552356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.230 qpair failed and we were unable to recover it. 00:28:28.230 [2024-12-11 10:06:37.552659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.230 [2024-12-11 10:06:37.552693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.230 qpair failed and we were unable to recover it. 00:28:28.230 [2024-12-11 10:06:37.552949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.230 [2024-12-11 10:06:37.552984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.230 qpair failed and we were unable to recover it. 00:28:28.230 [2024-12-11 10:06:37.553337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.230 [2024-12-11 10:06:37.553372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.230 qpair failed and we were unable to recover it. 
00:28:28.231 [2024-12-11 10:06:37.553611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.231 [2024-12-11 10:06:37.553644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.231 qpair failed and we were unable to recover it. 00:28:28.231 [2024-12-11 10:06:37.553923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.231 [2024-12-11 10:06:37.553956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.231 qpair failed and we were unable to recover it. 00:28:28.231 [2024-12-11 10:06:37.554252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.231 [2024-12-11 10:06:37.554288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.231 qpair failed and we were unable to recover it. 00:28:28.231 [2024-12-11 10:06:37.554585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.231 [2024-12-11 10:06:37.554619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.231 qpair failed and we were unable to recover it. 00:28:28.231 [2024-12-11 10:06:37.554748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.231 [2024-12-11 10:06:37.554782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.231 qpair failed and we were unable to recover it. 00:28:28.231 [2024-12-11 10:06:37.554986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.231 [2024-12-11 10:06:37.555021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.231 qpair failed and we were unable to recover it. 00:28:28.231 [2024-12-11 10:06:37.555211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.231 [2024-12-11 10:06:37.555257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.231 qpair failed and we were unable to recover it. 00:28:28.231 [2024-12-11 10:06:37.555398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.231 [2024-12-11 10:06:37.555431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.231 qpair failed and we were unable to recover it. 00:28:28.231 [2024-12-11 10:06:37.555576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.231 [2024-12-11 10:06:37.555609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.231 qpair failed and we were unable to recover it. 00:28:28.231 [2024-12-11 10:06:37.555829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.231 [2024-12-11 10:06:37.555863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.231 qpair failed and we were unable to recover it. 
00:28:28.231 [2024-12-11 10:06:37.555982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.231 [2024-12-11 10:06:37.556016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.231 qpair failed and we were unable to recover it. 00:28:28.231 [2024-12-11 10:06:37.556298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.231 [2024-12-11 10:06:37.556333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.231 qpair failed and we were unable to recover it. 00:28:28.231 [2024-12-11 10:06:37.556537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.231 [2024-12-11 10:06:37.556572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.231 qpair failed and we were unable to recover it. 00:28:28.231 [2024-12-11 10:06:37.556858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.231 [2024-12-11 10:06:37.556892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.231 qpair failed and we were unable to recover it. 00:28:28.231 [2024-12-11 10:06:37.557019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.231 [2024-12-11 10:06:37.557053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.231 qpair failed and we were unable to recover it. 00:28:28.231 [2024-12-11 10:06:37.557251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.231 [2024-12-11 10:06:37.557287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.231 qpair failed and we were unable to recover it. 00:28:28.231 [2024-12-11 10:06:37.557492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.231 [2024-12-11 10:06:37.557526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.231 qpair failed and we were unable to recover it. 00:28:28.231 [2024-12-11 10:06:37.557780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.231 [2024-12-11 10:06:37.557815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.231 qpair failed and we were unable to recover it. 00:28:28.231 [2024-12-11 10:06:37.557941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.231 [2024-12-11 10:06:37.557975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.231 qpair failed and we were unable to recover it. 00:28:28.231 [2024-12-11 10:06:37.558258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.231 [2024-12-11 10:06:37.558292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.231 qpair failed and we were unable to recover it. 
00:28:28.231 [2024-12-11 10:06:37.558406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.231 [2024-12-11 10:06:37.558440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.231 qpair failed and we were unable to recover it.
[... the same three-line group repeats for every reconnect attempt between 2024-12-11 10:06:37.558 and 10:06:37.613: posix_sock_create fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7fb630000b90 (addr=10.0.0.2, port=4420), and the qpair cannot be recovered ...]
00:28:28.237 [2024-12-11 10:06:37.613791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.237 [2024-12-11 10:06:37.613825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.237 qpair failed and we were unable to recover it.
00:28:28.237 [2024-12-11 10:06:37.614080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.237 [2024-12-11 10:06:37.614113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.237 qpair failed and we were unable to recover it. 00:28:28.237 [2024-12-11 10:06:37.614327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.237 [2024-12-11 10:06:37.614361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.237 qpair failed and we were unable to recover it. 00:28:28.237 [2024-12-11 10:06:37.614548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.237 [2024-12-11 10:06:37.614582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.237 qpair failed and we were unable to recover it. 00:28:28.237 [2024-12-11 10:06:37.614838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.237 [2024-12-11 10:06:37.614872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.237 qpair failed and we were unable to recover it. 00:28:28.237 [2024-12-11 10:06:37.615056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.237 [2024-12-11 10:06:37.615090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.237 qpair failed and we were unable to recover it. 00:28:28.237 [2024-12-11 10:06:37.615372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.237 [2024-12-11 10:06:37.615407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.237 qpair failed and we were unable to recover it. 00:28:28.237 [2024-12-11 10:06:37.615617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.237 [2024-12-11 10:06:37.615651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.237 qpair failed and we were unable to recover it. 00:28:28.237 [2024-12-11 10:06:37.615932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.237 [2024-12-11 10:06:37.615966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.237 qpair failed and we were unable to recover it. 00:28:28.237 [2024-12-11 10:06:37.616170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.237 [2024-12-11 10:06:37.616204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.237 qpair failed and we were unable to recover it. 00:28:28.237 [2024-12-11 10:06:37.616347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.238 [2024-12-11 10:06:37.616381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.238 qpair failed and we were unable to recover it. 
00:28:28.238 [2024-12-11 10:06:37.616603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.238 [2024-12-11 10:06:37.616637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.238 qpair failed and we were unable to recover it. 00:28:28.238 [2024-12-11 10:06:37.616910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.238 [2024-12-11 10:06:37.616945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.238 qpair failed and we were unable to recover it. 00:28:28.238 [2024-12-11 10:06:37.617157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.238 [2024-12-11 10:06:37.617193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.238 qpair failed and we were unable to recover it. 00:28:28.238 [2024-12-11 10:06:37.617475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.238 [2024-12-11 10:06:37.617510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.238 qpair failed and we were unable to recover it. 00:28:28.238 [2024-12-11 10:06:37.617727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.238 [2024-12-11 10:06:37.617762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.238 qpair failed and we were unable to recover it. 00:28:28.238 [2024-12-11 10:06:37.617897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.238 [2024-12-11 10:06:37.617931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.238 qpair failed and we were unable to recover it. 00:28:28.238 [2024-12-11 10:06:37.618126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.238 [2024-12-11 10:06:37.618161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.238 qpair failed and we were unable to recover it. 00:28:28.238 [2024-12-11 10:06:37.618365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.238 [2024-12-11 10:06:37.618401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.238 qpair failed and we were unable to recover it. 00:28:28.238 [2024-12-11 10:06:37.618589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.238 [2024-12-11 10:06:37.618624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.238 qpair failed and we were unable to recover it. 00:28:28.238 [2024-12-11 10:06:37.618887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.238 [2024-12-11 10:06:37.618920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.238 qpair failed and we were unable to recover it. 
00:28:28.238 [2024-12-11 10:06:37.619107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.238 [2024-12-11 10:06:37.619141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.238 qpair failed and we were unable to recover it. 00:28:28.238 [2024-12-11 10:06:37.619447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.238 [2024-12-11 10:06:37.619483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.238 qpair failed and we were unable to recover it. 00:28:28.238 [2024-12-11 10:06:37.619738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.238 [2024-12-11 10:06:37.619772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.238 qpair failed and we were unable to recover it. 00:28:28.238 [2024-12-11 10:06:37.620071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.238 [2024-12-11 10:06:37.620105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.238 qpair failed and we were unable to recover it. 00:28:28.238 [2024-12-11 10:06:37.620372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.238 [2024-12-11 10:06:37.620407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.238 qpair failed and we were unable to recover it. 00:28:28.238 [2024-12-11 10:06:37.620669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.238 [2024-12-11 10:06:37.620708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.238 qpair failed and we were unable to recover it. 00:28:28.238 [2024-12-11 10:06:37.620991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.238 [2024-12-11 10:06:37.621025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.238 qpair failed and we were unable to recover it. 00:28:28.238 [2024-12-11 10:06:37.621302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.238 [2024-12-11 10:06:37.621336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.238 qpair failed and we were unable to recover it. 00:28:28.238 [2024-12-11 10:06:37.621622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.238 [2024-12-11 10:06:37.621656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.238 qpair failed and we were unable to recover it. 00:28:28.238 [2024-12-11 10:06:37.621951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.238 [2024-12-11 10:06:37.621985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.238 qpair failed and we were unable to recover it. 
00:28:28.238 [2024-12-11 10:06:37.622256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.238 [2024-12-11 10:06:37.622290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.238 qpair failed and we were unable to recover it. 00:28:28.238 [2024-12-11 10:06:37.622578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.238 [2024-12-11 10:06:37.622612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.238 qpair failed and we were unable to recover it. 00:28:28.238 [2024-12-11 10:06:37.622913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.238 [2024-12-11 10:06:37.622947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.238 qpair failed and we were unable to recover it. 00:28:28.238 [2024-12-11 10:06:37.623178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.238 [2024-12-11 10:06:37.623212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.238 qpair failed and we were unable to recover it. 00:28:28.238 [2024-12-11 10:06:37.623378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.238 [2024-12-11 10:06:37.623413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.238 qpair failed and we were unable to recover it. 00:28:28.238 [2024-12-11 10:06:37.623688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.238 [2024-12-11 10:06:37.623721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.238 qpair failed and we were unable to recover it. 00:28:28.238 [2024-12-11 10:06:37.623983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.238 [2024-12-11 10:06:37.624017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.238 qpair failed and we were unable to recover it. 00:28:28.238 [2024-12-11 10:06:37.624213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.238 [2024-12-11 10:06:37.624259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.238 qpair failed and we were unable to recover it. 00:28:28.238 [2024-12-11 10:06:37.624520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.238 [2024-12-11 10:06:37.624553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.238 qpair failed and we were unable to recover it. 00:28:28.238 [2024-12-11 10:06:37.624747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.238 [2024-12-11 10:06:37.624781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.238 qpair failed and we were unable to recover it. 
00:28:28.238 [2024-12-11 10:06:37.625061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.238 [2024-12-11 10:06:37.625095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.238 qpair failed and we were unable to recover it. 00:28:28.238 [2024-12-11 10:06:37.625359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.238 [2024-12-11 10:06:37.625395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.238 qpair failed and we were unable to recover it. 00:28:28.238 [2024-12-11 10:06:37.625684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.238 [2024-12-11 10:06:37.625718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.238 qpair failed and we were unable to recover it. 00:28:28.238 [2024-12-11 10:06:37.625989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.238 [2024-12-11 10:06:37.626024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.238 qpair failed and we were unable to recover it. 00:28:28.239 [2024-12-11 10:06:37.626315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.239 [2024-12-11 10:06:37.626351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.239 qpair failed and we were unable to recover it. 00:28:28.239 [2024-12-11 10:06:37.626555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.239 [2024-12-11 10:06:37.626589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.239 qpair failed and we were unable to recover it. 00:28:28.239 [2024-12-11 10:06:37.626771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.239 [2024-12-11 10:06:37.626805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.239 qpair failed and we were unable to recover it. 00:28:28.239 [2024-12-11 10:06:37.626990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.239 [2024-12-11 10:06:37.627024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.239 qpair failed and we were unable to recover it. 00:28:28.239 [2024-12-11 10:06:37.627255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.239 [2024-12-11 10:06:37.627291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.239 qpair failed and we were unable to recover it. 00:28:28.239 [2024-12-11 10:06:37.627597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.239 [2024-12-11 10:06:37.627631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.239 qpair failed and we were unable to recover it. 
00:28:28.239 [2024-12-11 10:06:37.627819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.239 [2024-12-11 10:06:37.627853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.239 qpair failed and we were unable to recover it. 00:28:28.239 [2024-12-11 10:06:37.628132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.239 [2024-12-11 10:06:37.628166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.239 qpair failed and we were unable to recover it. 00:28:28.239 [2024-12-11 10:06:37.628398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.239 [2024-12-11 10:06:37.628434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.239 qpair failed and we were unable to recover it. 00:28:28.239 [2024-12-11 10:06:37.628707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.239 [2024-12-11 10:06:37.628741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.239 qpair failed and we were unable to recover it. 00:28:28.239 [2024-12-11 10:06:37.629025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.239 [2024-12-11 10:06:37.629058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.239 qpair failed and we were unable to recover it. 00:28:28.239 [2024-12-11 10:06:37.629300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.239 [2024-12-11 10:06:37.629335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.239 qpair failed and we were unable to recover it. 00:28:28.239 [2024-12-11 10:06:37.629494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.239 [2024-12-11 10:06:37.629528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.239 qpair failed and we were unable to recover it. 00:28:28.239 [2024-12-11 10:06:37.629713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.239 [2024-12-11 10:06:37.629746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.239 qpair failed and we were unable to recover it. 00:28:28.239 [2024-12-11 10:06:37.629972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.239 [2024-12-11 10:06:37.630006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.239 qpair failed and we were unable to recover it. 00:28:28.239 [2024-12-11 10:06:37.630231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.239 [2024-12-11 10:06:37.630265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.239 qpair failed and we were unable to recover it. 
00:28:28.239 [2024-12-11 10:06:37.630478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.239 [2024-12-11 10:06:37.630512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.239 qpair failed and we were unable to recover it. 00:28:28.239 [2024-12-11 10:06:37.630698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.239 [2024-12-11 10:06:37.630732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.239 qpair failed and we were unable to recover it. 00:28:28.239 [2024-12-11 10:06:37.631009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.239 [2024-12-11 10:06:37.631042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.239 qpair failed and we were unable to recover it. 00:28:28.239 [2024-12-11 10:06:37.631313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.239 [2024-12-11 10:06:37.631349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.239 qpair failed and we were unable to recover it. 00:28:28.239 [2024-12-11 10:06:37.631483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.239 [2024-12-11 10:06:37.631518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.239 qpair failed and we were unable to recover it. 00:28:28.239 [2024-12-11 10:06:37.631795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.239 [2024-12-11 10:06:37.631834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.239 qpair failed and we were unable to recover it. 00:28:28.239 [2024-12-11 10:06:37.632096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.239 [2024-12-11 10:06:37.632130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.239 qpair failed and we were unable to recover it. 00:28:28.239 [2024-12-11 10:06:37.632416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.239 [2024-12-11 10:06:37.632451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.239 qpair failed and we were unable to recover it. 00:28:28.239 [2024-12-11 10:06:37.632771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.239 [2024-12-11 10:06:37.632806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.239 qpair failed and we were unable to recover it. 00:28:28.239 [2024-12-11 10:06:37.633079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.239 [2024-12-11 10:06:37.633112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.239 qpair failed and we were unable to recover it. 
00:28:28.239 [2024-12-11 10:06:37.633404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.239 [2024-12-11 10:06:37.633440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.239 qpair failed and we were unable to recover it. 00:28:28.239 [2024-12-11 10:06:37.633715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.239 [2024-12-11 10:06:37.633750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.239 qpair failed and we were unable to recover it. 00:28:28.239 [2024-12-11 10:06:37.634035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.239 [2024-12-11 10:06:37.634068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.239 qpair failed and we were unable to recover it. 00:28:28.239 [2024-12-11 10:06:37.634347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.239 [2024-12-11 10:06:37.634383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.239 qpair failed and we were unable to recover it. 00:28:28.239 [2024-12-11 10:06:37.634614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.239 [2024-12-11 10:06:37.634648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.239 qpair failed and we were unable to recover it. 00:28:28.240 [2024-12-11 10:06:37.634869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.240 [2024-12-11 10:06:37.634903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.240 qpair failed and we were unable to recover it. 00:28:28.240 [2024-12-11 10:06:37.635105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.240 [2024-12-11 10:06:37.635140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.240 qpair failed and we were unable to recover it. 00:28:28.240 [2024-12-11 10:06:37.635419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.240 [2024-12-11 10:06:37.635455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.240 qpair failed and we were unable to recover it. 00:28:28.240 [2024-12-11 10:06:37.635572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.240 [2024-12-11 10:06:37.635606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.240 qpair failed and we were unable to recover it. 00:28:28.240 [2024-12-11 10:06:37.635765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.240 [2024-12-11 10:06:37.635799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.240 qpair failed and we were unable to recover it. 
00:28:28.240 [2024-12-11 10:06:37.635981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.240 [2024-12-11 10:06:37.636015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.240 qpair failed and we were unable to recover it. 00:28:28.240 [2024-12-11 10:06:37.636204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.240 [2024-12-11 10:06:37.636247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.240 qpair failed and we were unable to recover it. 00:28:28.240 [2024-12-11 10:06:37.636448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.240 [2024-12-11 10:06:37.636482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.240 qpair failed and we were unable to recover it. 00:28:28.240 [2024-12-11 10:06:37.636776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.240 [2024-12-11 10:06:37.636809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.240 qpair failed and we were unable to recover it. 00:28:28.240 [2024-12-11 10:06:37.636943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.240 [2024-12-11 10:06:37.636976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.240 qpair failed and we were unable to recover it. 00:28:28.240 [2024-12-11 10:06:37.637277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.240 [2024-12-11 10:06:37.637313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.240 qpair failed and we were unable to recover it. 00:28:28.240 [2024-12-11 10:06:37.637577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.240 [2024-12-11 10:06:37.637611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.240 qpair failed and we were unable to recover it. 00:28:28.240 [2024-12-11 10:06:37.637842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.240 [2024-12-11 10:06:37.637876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.240 qpair failed and we were unable to recover it. 00:28:28.240 [2024-12-11 10:06:37.638065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.240 [2024-12-11 10:06:37.638100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.240 qpair failed and we were unable to recover it. 00:28:28.240 [2024-12-11 10:06:37.638370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.240 [2024-12-11 10:06:37.638404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.240 qpair failed and we were unable to recover it. 
00:28:28.240 [2024-12-11 10:06:37.638609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.240 [2024-12-11 10:06:37.638643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.240 qpair failed and we were unable to recover it. 00:28:28.240 [2024-12-11 10:06:37.638780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.240 [2024-12-11 10:06:37.638813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.240 qpair failed and we were unable to recover it. 00:28:28.240 [2024-12-11 10:06:37.639115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.240 [2024-12-11 10:06:37.639149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.240 qpair failed and we were unable to recover it. 00:28:28.240 [2024-12-11 10:06:37.639416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.240 [2024-12-11 10:06:37.639451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.240 qpair failed and we were unable to recover it. 00:28:28.240 [2024-12-11 10:06:37.639586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.240 [2024-12-11 10:06:37.639618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.240 qpair failed and we were unable to recover it. 00:28:28.240 [2024-12-11 10:06:37.639817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.240 [2024-12-11 10:06:37.639851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.240 qpair failed and we were unable to recover it. 00:28:28.240 [2024-12-11 10:06:37.640131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.240 [2024-12-11 10:06:37.640166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.240 qpair failed and we were unable to recover it. 00:28:28.240 [2024-12-11 10:06:37.640365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.240 [2024-12-11 10:06:37.640400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.240 qpair failed and we were unable to recover it. 00:28:28.240 [2024-12-11 10:06:37.640612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.240 [2024-12-11 10:06:37.640646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.240 qpair failed and we were unable to recover it. 00:28:28.240 [2024-12-11 10:06:37.640901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.240 [2024-12-11 10:06:37.640935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.240 qpair failed and we were unable to recover it. 
00:28:28.240 [2024-12-11 10:06:37.641189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.240 [2024-12-11 10:06:37.641232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.240 qpair failed and we were unable to recover it. 00:28:28.240 [2024-12-11 10:06:37.641509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.240 [2024-12-11 10:06:37.641543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.240 qpair failed and we were unable to recover it. 00:28:28.240 [2024-12-11 10:06:37.641777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.240 [2024-12-11 10:06:37.641812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.240 qpair failed and we were unable to recover it. 00:28:28.240 [2024-12-11 10:06:37.642007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.240 [2024-12-11 10:06:37.642041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.240 qpair failed and we were unable to recover it. 00:28:28.240 [2024-12-11 10:06:37.642297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.240 [2024-12-11 10:06:37.642332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.240 qpair failed and we were unable to recover it. 00:28:28.240 [2024-12-11 10:06:37.642637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.240 [2024-12-11 10:06:37.642678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.240 qpair failed and we were unable to recover it. 00:28:28.240 [2024-12-11 10:06:37.642956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.240 [2024-12-11 10:06:37.642989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.240 qpair failed and we were unable to recover it. 00:28:28.240 [2024-12-11 10:06:37.643248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.240 [2024-12-11 10:06:37.643283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.240 qpair failed and we were unable to recover it. 00:28:28.240 [2024-12-11 10:06:37.643493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.240 [2024-12-11 10:06:37.643526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.240 qpair failed and we were unable to recover it. 00:28:28.240 [2024-12-11 10:06:37.643711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.240 [2024-12-11 10:06:37.643744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.240 qpair failed and we were unable to recover it. 
00:28:28.241 [2024-12-11 10:06:37.644020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.241 [2024-12-11 10:06:37.644055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.241 qpair failed and we were unable to recover it. 00:28:28.241 [2024-12-11 10:06:37.644269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.241 [2024-12-11 10:06:37.644304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.241 qpair failed and we were unable to recover it. 00:28:28.241 [2024-12-11 10:06:37.644560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.241 [2024-12-11 10:06:37.644594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.241 qpair failed and we were unable to recover it. 00:28:28.241 [2024-12-11 10:06:37.644781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.241 [2024-12-11 10:06:37.644815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.241 qpair failed and we were unable to recover it. 00:28:28.241 [2024-12-11 10:06:37.645115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.241 [2024-12-11 10:06:37.645149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.241 qpair failed and we were unable to recover it. 00:28:28.241 [2024-12-11 10:06:37.645443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.241 [2024-12-11 10:06:37.645478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.241 qpair failed and we were unable to recover it. 00:28:28.241 [2024-12-11 10:06:37.645697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.241 [2024-12-11 10:06:37.645731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.241 qpair failed and we were unable to recover it. 00:28:28.241 [2024-12-11 10:06:37.645986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.241 [2024-12-11 10:06:37.646019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.241 qpair failed and we were unable to recover it. 00:28:28.241 [2024-12-11 10:06:37.646241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.241 [2024-12-11 10:06:37.646276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.241 qpair failed and we were unable to recover it. 00:28:28.241 [2024-12-11 10:06:37.646470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.241 [2024-12-11 10:06:37.646504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.241 qpair failed and we were unable to recover it. 
00:28:28.241 [2024-12-11 10:06:37.646780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.241 [2024-12-11 10:06:37.646814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.241 qpair failed and we were unable to recover it. 00:28:28.241 [2024-12-11 10:06:37.647094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.241 [2024-12-11 10:06:37.647128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.241 qpair failed and we were unable to recover it. 00:28:28.241 [2024-12-11 10:06:37.647414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.241 [2024-12-11 10:06:37.647450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.241 qpair failed and we were unable to recover it. 00:28:28.241 [2024-12-11 10:06:37.647727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.241 [2024-12-11 10:06:37.647761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.241 qpair failed and we were unable to recover it. 00:28:28.241 [2024-12-11 10:06:37.648041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.241 [2024-12-11 10:06:37.648075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.241 qpair failed and we were unable to recover it. 00:28:28.241 [2024-12-11 10:06:37.648361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.241 [2024-12-11 10:06:37.648396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.241 qpair failed and we were unable to recover it. 00:28:28.241 [2024-12-11 10:06:37.648672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.241 [2024-12-11 10:06:37.648707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.241 qpair failed and we were unable to recover it. 00:28:28.241 [2024-12-11 10:06:37.648935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.241 [2024-12-11 10:06:37.648969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.241 qpair failed and we were unable to recover it. 00:28:28.241 [2024-12-11 10:06:37.649197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.241 [2024-12-11 10:06:37.649239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.241 qpair failed and we were unable to recover it. 00:28:28.241 [2024-12-11 10:06:37.649513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.241 [2024-12-11 10:06:37.649547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.241 qpair failed and we were unable to recover it. 
00:28:28.241 [2024-12-11 10:06:37.649739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.241 [2024-12-11 10:06:37.649772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.241 qpair failed and we were unable to recover it. 00:28:28.241 [2024-12-11 10:06:37.649976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.241 [2024-12-11 10:06:37.650010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.241 qpair failed and we were unable to recover it. 00:28:28.241 [2024-12-11 10:06:37.650228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.241 [2024-12-11 10:06:37.650264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.241 qpair failed and we were unable to recover it. 00:28:28.241 [2024-12-11 10:06:37.650537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.241 [2024-12-11 10:06:37.650571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.241 qpair failed and we were unable to recover it. 00:28:28.241 [2024-12-11 10:06:37.650803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.241 [2024-12-11 10:06:37.650838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.241 qpair failed and we were unable to recover it. 00:28:28.241 [2024-12-11 10:06:37.651100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.241 [2024-12-11 10:06:37.651134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.241 qpair failed and we were unable to recover it. 00:28:28.241 [2024-12-11 10:06:37.651420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.241 [2024-12-11 10:06:37.651455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.241 qpair failed and we were unable to recover it. 00:28:28.241 [2024-12-11 10:06:37.651590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.241 [2024-12-11 10:06:37.651624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.241 qpair failed and we were unable to recover it. 00:28:28.241 [2024-12-11 10:06:37.651880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.241 [2024-12-11 10:06:37.651914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.241 qpair failed and we were unable to recover it. 00:28:28.241 [2024-12-11 10:06:37.652191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.241 [2024-12-11 10:06:37.652233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.241 qpair failed and we were unable to recover it. 
00:28:28.247 [2024-12-11 10:06:37.704072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.247 [2024-12-11 10:06:37.704105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.247 qpair failed and we were unable to recover it. 00:28:28.247 [2024-12-11 10:06:37.704376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.247 [2024-12-11 10:06:37.704411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.247 qpair failed and we were unable to recover it. 00:28:28.247 [2024-12-11 10:06:37.704730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.247 [2024-12-11 10:06:37.704765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.247 qpair failed and we were unable to recover it. 00:28:28.247 [2024-12-11 10:06:37.704906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.247 [2024-12-11 10:06:37.704939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.247 qpair failed and we were unable to recover it. 00:28:28.247 [2024-12-11 10:06:37.705215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.247 [2024-12-11 10:06:37.705259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.247 qpair failed and we were unable to recover it. 00:28:28.247 [2024-12-11 10:06:37.705555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.247 [2024-12-11 10:06:37.705590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.247 qpair failed and we were unable to recover it. 00:28:28.247 [2024-12-11 10:06:37.705774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.247 [2024-12-11 10:06:37.705808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.247 qpair failed and we were unable to recover it. 00:28:28.247 [2024-12-11 10:06:37.706040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.247 [2024-12-11 10:06:37.706073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.247 qpair failed and we were unable to recover it. 00:28:28.247 [2024-12-11 10:06:37.706382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.247 [2024-12-11 10:06:37.706417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.247 qpair failed and we were unable to recover it. 00:28:28.247 [2024-12-11 10:06:37.706623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.247 [2024-12-11 10:06:37.706657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.247 qpair failed and we were unable to recover it. 
00:28:28.247 [2024-12-11 10:06:37.706855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.247 [2024-12-11 10:06:37.706888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.247 qpair failed and we were unable to recover it. 00:28:28.247 [2024-12-11 10:06:37.707162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.247 [2024-12-11 10:06:37.707197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.247 qpair failed and we were unable to recover it. 00:28:28.247 [2024-12-11 10:06:37.707419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.247 [2024-12-11 10:06:37.707453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.247 qpair failed and we were unable to recover it. 00:28:28.247 [2024-12-11 10:06:37.707733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.247 [2024-12-11 10:06:37.707766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.247 qpair failed and we were unable to recover it. 00:28:28.247 [2024-12-11 10:06:37.708037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.247 [2024-12-11 10:06:37.708071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.247 qpair failed and we were unable to recover it. 00:28:28.247 [2024-12-11 10:06:37.708335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.247 [2024-12-11 10:06:37.708369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.247 qpair failed and we were unable to recover it. 00:28:28.247 [2024-12-11 10:06:37.708581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.247 [2024-12-11 10:06:37.708615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.247 qpair failed and we were unable to recover it. 00:28:28.247 [2024-12-11 10:06:37.708878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.247 [2024-12-11 10:06:37.708911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.247 qpair failed and we were unable to recover it. 00:28:28.247 [2024-12-11 10:06:37.709190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.247 [2024-12-11 10:06:37.709233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.247 qpair failed and we were unable to recover it. 00:28:28.247 [2024-12-11 10:06:37.709507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.247 [2024-12-11 10:06:37.709540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.247 qpair failed and we were unable to recover it. 
00:28:28.247 [2024-12-11 10:06:37.709842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.248 [2024-12-11 10:06:37.709877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.248 qpair failed and we were unable to recover it. 00:28:28.248 [2024-12-11 10:06:37.710136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.248 [2024-12-11 10:06:37.710170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.248 qpair failed and we were unable to recover it. 00:28:28.248 [2024-12-11 10:06:37.710493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.248 [2024-12-11 10:06:37.710529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.248 qpair failed and we were unable to recover it. 00:28:28.248 [2024-12-11 10:06:37.710645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.248 [2024-12-11 10:06:37.710679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.248 qpair failed and we were unable to recover it. 00:28:28.248 [2024-12-11 10:06:37.710983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.248 [2024-12-11 10:06:37.711016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.248 qpair failed and we were unable to recover it. 00:28:28.248 [2024-12-11 10:06:37.711201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.248 [2024-12-11 10:06:37.711260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.248 qpair failed and we were unable to recover it. 00:28:28.248 [2024-12-11 10:06:37.711419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.248 [2024-12-11 10:06:37.711454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.248 qpair failed and we were unable to recover it. 00:28:28.248 [2024-12-11 10:06:37.711636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.248 [2024-12-11 10:06:37.711669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.248 qpair failed and we were unable to recover it. 00:28:28.248 [2024-12-11 10:06:37.711869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.248 [2024-12-11 10:06:37.711902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.248 qpair failed and we were unable to recover it. 00:28:28.248 [2024-12-11 10:06:37.712167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.248 [2024-12-11 10:06:37.712200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.248 qpair failed and we were unable to recover it. 
00:28:28.248 [2024-12-11 10:06:37.712495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.248 [2024-12-11 10:06:37.712530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.248 qpair failed and we were unable to recover it. 00:28:28.248 [2024-12-11 10:06:37.712725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.248 [2024-12-11 10:06:37.712760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.248 qpair failed and we were unable to recover it. 00:28:28.248 [2024-12-11 10:06:37.713016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.248 [2024-12-11 10:06:37.713050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.248 qpair failed and we were unable to recover it. 00:28:28.248 [2024-12-11 10:06:37.713242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.248 [2024-12-11 10:06:37.713278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.248 qpair failed and we were unable to recover it. 00:28:28.248 [2024-12-11 10:06:37.713561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.248 [2024-12-11 10:06:37.713594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.248 qpair failed and we were unable to recover it. 00:28:28.248 [2024-12-11 10:06:37.713836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.248 [2024-12-11 10:06:37.713869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.248 qpair failed and we were unable to recover it. 00:28:28.248 [2024-12-11 10:06:37.714127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.248 [2024-12-11 10:06:37.714160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.248 qpair failed and we were unable to recover it. 00:28:28.248 [2024-12-11 10:06:37.714294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.248 [2024-12-11 10:06:37.714330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.248 qpair failed and we were unable to recover it. 00:28:28.248 [2024-12-11 10:06:37.714586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.248 [2024-12-11 10:06:37.714619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.248 qpair failed and we were unable to recover it. 00:28:28.248 [2024-12-11 10:06:37.714873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.248 [2024-12-11 10:06:37.714907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.248 qpair failed and we were unable to recover it. 
00:28:28.248 [2024-12-11 10:06:37.715092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.248 [2024-12-11 10:06:37.715125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.248 qpair failed and we were unable to recover it. 00:28:28.248 [2024-12-11 10:06:37.715337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.248 [2024-12-11 10:06:37.715372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.248 qpair failed and we were unable to recover it. 00:28:28.248 [2024-12-11 10:06:37.715555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.248 [2024-12-11 10:06:37.715595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.248 qpair failed and we were unable to recover it. 00:28:28.248 [2024-12-11 10:06:37.715724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.248 [2024-12-11 10:06:37.715758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.248 qpair failed and we were unable to recover it. 00:28:28.248 [2024-12-11 10:06:37.715986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.248 [2024-12-11 10:06:37.716020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.248 qpair failed and we were unable to recover it. 00:28:28.248 [2024-12-11 10:06:37.716294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.248 [2024-12-11 10:06:37.716329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.248 qpair failed and we were unable to recover it. 00:28:28.248 [2024-12-11 10:06:37.716613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.248 [2024-12-11 10:06:37.716647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.248 qpair failed and we were unable to recover it. 00:28:28.248 [2024-12-11 10:06:37.716926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.248 [2024-12-11 10:06:37.716960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.248 qpair failed and we were unable to recover it. 00:28:28.248 [2024-12-11 10:06:37.717149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.248 [2024-12-11 10:06:37.717182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.248 qpair failed and we were unable to recover it. 00:28:28.248 [2024-12-11 10:06:37.717489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.248 [2024-12-11 10:06:37.717525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.248 qpair failed and we were unable to recover it. 
00:28:28.248 [2024-12-11 10:06:37.717809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.248 [2024-12-11 10:06:37.717843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.248 qpair failed and we were unable to recover it. 00:28:28.248 [2024-12-11 10:06:37.718120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.248 [2024-12-11 10:06:37.718155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.248 qpair failed and we were unable to recover it. 00:28:28.248 [2024-12-11 10:06:37.718443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.248 [2024-12-11 10:06:37.718478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.248 qpair failed and we were unable to recover it. 00:28:28.248 [2024-12-11 10:06:37.718665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.248 [2024-12-11 10:06:37.718699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.248 qpair failed and we were unable to recover it. 00:28:28.248 [2024-12-11 10:06:37.718882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.248 [2024-12-11 10:06:37.718917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.248 qpair failed and we were unable to recover it. 00:28:28.249 [2024-12-11 10:06:37.719171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.249 [2024-12-11 10:06:37.719205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.249 qpair failed and we were unable to recover it. 00:28:28.249 [2024-12-11 10:06:37.719508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.249 [2024-12-11 10:06:37.719543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.249 qpair failed and we were unable to recover it. 00:28:28.249 [2024-12-11 10:06:37.719669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.249 [2024-12-11 10:06:37.719703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.249 qpair failed and we were unable to recover it. 00:28:28.249 [2024-12-11 10:06:37.719885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.249 [2024-12-11 10:06:37.719919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.249 qpair failed and we were unable to recover it. 00:28:28.249 [2024-12-11 10:06:37.720194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.249 [2024-12-11 10:06:37.720240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.249 qpair failed and we were unable to recover it. 
00:28:28.249 [2024-12-11 10:06:37.720521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.249 [2024-12-11 10:06:37.720556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.249 qpair failed and we were unable to recover it. 00:28:28.249 [2024-12-11 10:06:37.720829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.249 [2024-12-11 10:06:37.720863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.249 qpair failed and we were unable to recover it. 00:28:28.249 [2024-12-11 10:06:37.721148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.249 [2024-12-11 10:06:37.721181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.249 qpair failed and we were unable to recover it. 00:28:28.249 [2024-12-11 10:06:37.721463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.249 [2024-12-11 10:06:37.721498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.249 qpair failed and we were unable to recover it. 00:28:28.249 [2024-12-11 10:06:37.721753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.249 [2024-12-11 10:06:37.721787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.249 qpair failed and we were unable to recover it. 00:28:28.249 [2024-12-11 10:06:37.722063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.249 [2024-12-11 10:06:37.722096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.249 qpair failed and we were unable to recover it. 00:28:28.249 [2024-12-11 10:06:37.722354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.249 [2024-12-11 10:06:37.722390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.249 qpair failed and we were unable to recover it. 00:28:28.249 [2024-12-11 10:06:37.722604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.249 [2024-12-11 10:06:37.722638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.249 qpair failed and we were unable to recover it. 00:28:28.249 [2024-12-11 10:06:37.722756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.249 [2024-12-11 10:06:37.722789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.249 qpair failed and we were unable to recover it. 00:28:28.249 [2024-12-11 10:06:37.723026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.249 [2024-12-11 10:06:37.723060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.249 qpair failed and we were unable to recover it. 
00:28:28.249 [2024-12-11 10:06:37.723283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.249 [2024-12-11 10:06:37.723318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.249 qpair failed and we were unable to recover it. 00:28:28.249 [2024-12-11 10:06:37.723572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.249 [2024-12-11 10:06:37.723605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.249 qpair failed and we were unable to recover it. 00:28:28.249 [2024-12-11 10:06:37.723911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.249 [2024-12-11 10:06:37.723945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.249 qpair failed and we were unable to recover it. 00:28:28.249 [2024-12-11 10:06:37.724127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.249 [2024-12-11 10:06:37.724161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.249 qpair failed and we were unable to recover it. 00:28:28.249 [2024-12-11 10:06:37.724424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.249 [2024-12-11 10:06:37.724459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.249 qpair failed and we were unable to recover it. 00:28:28.249 [2024-12-11 10:06:37.724737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.249 [2024-12-11 10:06:37.724771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.249 qpair failed and we were unable to recover it. 00:28:28.249 [2024-12-11 10:06:37.725057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.249 [2024-12-11 10:06:37.725092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.249 qpair failed and we were unable to recover it. 00:28:28.249 [2024-12-11 10:06:37.725365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.249 [2024-12-11 10:06:37.725400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.249 qpair failed and we were unable to recover it. 00:28:28.249 [2024-12-11 10:06:37.725606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.249 [2024-12-11 10:06:37.725640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.249 qpair failed and we were unable to recover it. 00:28:28.249 [2024-12-11 10:06:37.725920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.249 [2024-12-11 10:06:37.725953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.249 qpair failed and we were unable to recover it. 
00:28:28.249 [2024-12-11 10:06:37.726242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.249 [2024-12-11 10:06:37.726277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.249 qpair failed and we were unable to recover it. 00:28:28.249 [2024-12-11 10:06:37.726467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.249 [2024-12-11 10:06:37.726500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.249 qpair failed and we were unable to recover it. 00:28:28.249 [2024-12-11 10:06:37.726710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.249 [2024-12-11 10:06:37.726749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.249 qpair failed and we were unable to recover it. 00:28:28.250 [2024-12-11 10:06:37.726959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.250 [2024-12-11 10:06:37.726993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.250 qpair failed and we were unable to recover it. 00:28:28.250 [2024-12-11 10:06:37.727181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.250 [2024-12-11 10:06:37.727215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.250 qpair failed and we were unable to recover it. 00:28:28.250 [2024-12-11 10:06:37.727514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.250 [2024-12-11 10:06:37.727548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.250 qpair failed and we were unable to recover it. 00:28:28.250 [2024-12-11 10:06:37.727846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.250 [2024-12-11 10:06:37.727880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.250 qpair failed and we were unable to recover it. 00:28:28.250 [2024-12-11 10:06:37.728086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.250 [2024-12-11 10:06:37.728120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.250 qpair failed and we were unable to recover it. 00:28:28.250 [2024-12-11 10:06:37.728240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.250 [2024-12-11 10:06:37.728276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.250 qpair failed and we were unable to recover it. 00:28:28.250 [2024-12-11 10:06:37.728479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.250 [2024-12-11 10:06:37.728512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.250 qpair failed and we were unable to recover it. 
00:28:28.250 [2024-12-11 10:06:37.728641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.250 [2024-12-11 10:06:37.728674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.250 qpair failed and we were unable to recover it. 00:28:28.250 [2024-12-11 10:06:37.728897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.250 [2024-12-11 10:06:37.728930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.250 qpair failed and we were unable to recover it. 00:28:28.250 [2024-12-11 10:06:37.729146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.250 [2024-12-11 10:06:37.729180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.250 qpair failed and we were unable to recover it. 00:28:28.250 [2024-12-11 10:06:37.729471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.250 [2024-12-11 10:06:37.729506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.250 qpair failed and we were unable to recover it. 00:28:28.250 [2024-12-11 10:06:37.729655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.250 [2024-12-11 10:06:37.729689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.250 qpair failed and we were unable to recover it. 00:28:28.250 [2024-12-11 10:06:37.729878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.250 [2024-12-11 10:06:37.729912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.250 qpair failed and we were unable to recover it. 00:28:28.250 [2024-12-11 10:06:37.730174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.250 [2024-12-11 10:06:37.730208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.250 qpair failed and we were unable to recover it. 00:28:28.250 [2024-12-11 10:06:37.730352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.250 [2024-12-11 10:06:37.730386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.250 qpair failed and we were unable to recover it. 00:28:28.250 [2024-12-11 10:06:37.730661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.250 [2024-12-11 10:06:37.730695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.250 qpair failed and we were unable to recover it. 00:28:28.250 [2024-12-11 10:06:37.730903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.250 [2024-12-11 10:06:37.730938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.250 qpair failed and we were unable to recover it. 
00:28:28.250 [2024-12-11 10:06:37.731072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.250 [2024-12-11 10:06:37.731105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.250 qpair failed and we were unable to recover it. 00:28:28.250 [2024-12-11 10:06:37.731326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.250 [2024-12-11 10:06:37.731362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.250 qpair failed and we were unable to recover it. 00:28:28.250 [2024-12-11 10:06:37.731661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.250 [2024-12-11 10:06:37.731694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.250 qpair failed and we were unable to recover it. 00:28:28.250 [2024-12-11 10:06:37.731978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.250 [2024-12-11 10:06:37.732012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.250 qpair failed and we were unable to recover it. 00:28:28.250 [2024-12-11 10:06:37.732292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.250 [2024-12-11 10:06:37.732327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.250 qpair failed and we were unable to recover it. 00:28:28.250 [2024-12-11 10:06:37.732511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.250 [2024-12-11 10:06:37.732545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.250 qpair failed and we were unable to recover it. 00:28:28.250 [2024-12-11 10:06:37.732810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.250 [2024-12-11 10:06:37.732844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.250 qpair failed and we were unable to recover it. 00:28:28.250 [2024-12-11 10:06:37.733027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.250 [2024-12-11 10:06:37.733062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.250 qpair failed and we were unable to recover it. 00:28:28.250 [2024-12-11 10:06:37.733343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.250 [2024-12-11 10:06:37.733378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.250 qpair failed and we were unable to recover it. 00:28:28.250 [2024-12-11 10:06:37.733660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.250 [2024-12-11 10:06:37.733694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.250 qpair failed and we were unable to recover it. 
00:28:28.250 [2024-12-11 10:06:37.733883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.250 [2024-12-11 10:06:37.733917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.250 qpair failed and we were unable to recover it. 00:28:28.250 [2024-12-11 10:06:37.734115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.250 [2024-12-11 10:06:37.734149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.250 qpair failed and we were unable to recover it. 00:28:28.250 [2024-12-11 10:06:37.734364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.250 [2024-12-11 10:06:37.734399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.250 qpair failed and we were unable to recover it. 00:28:28.250 [2024-12-11 10:06:37.734584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.250 [2024-12-11 10:06:37.734618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.250 qpair failed and we were unable to recover it. 00:28:28.250 [2024-12-11 10:06:37.734894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.250 [2024-12-11 10:06:37.734928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.250 qpair failed and we were unable to recover it. 00:28:28.250 [2024-12-11 10:06:37.735114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.250 [2024-12-11 10:06:37.735149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.250 qpair failed and we were unable to recover it. 00:28:28.250 [2024-12-11 10:06:37.735333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.250 [2024-12-11 10:06:37.735369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.250 qpair failed and we were unable to recover it. 00:28:28.250 [2024-12-11 10:06:37.735567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.250 [2024-12-11 10:06:37.735601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.250 qpair failed and we were unable to recover it. 00:28:28.251 [2024-12-11 10:06:37.735750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.251 [2024-12-11 10:06:37.735784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.251 qpair failed and we were unable to recover it. 00:28:28.251 [2024-12-11 10:06:37.735910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.251 [2024-12-11 10:06:37.735944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.251 qpair failed and we were unable to recover it. 
00:28:28.251 [2024-12-11 10:06:37.736142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.251 [2024-12-11 10:06:37.736176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.251 qpair failed and we were unable to recover it. 00:28:28.251 [2024-12-11 10:06:37.736477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.251 [2024-12-11 10:06:37.736513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.251 qpair failed and we were unable to recover it. 00:28:28.251 [2024-12-11 10:06:37.736775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.251 [2024-12-11 10:06:37.736815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.251 qpair failed and we were unable to recover it. 00:28:28.251 [2024-12-11 10:06:37.737078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.251 [2024-12-11 10:06:37.737112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.251 qpair failed and we were unable to recover it. 00:28:28.251 [2024-12-11 10:06:37.737403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.251 [2024-12-11 10:06:37.737439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.251 qpair failed and we were unable to recover it. 00:28:28.251 [2024-12-11 10:06:37.737716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.251 [2024-12-11 10:06:37.737752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.251 qpair failed and we were unable to recover it. 00:28:28.251 [2024-12-11 10:06:37.737936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.251 [2024-12-11 10:06:37.737970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.251 qpair failed and we were unable to recover it. 00:28:28.251 [2024-12-11 10:06:37.738174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.251 [2024-12-11 10:06:37.738209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.251 qpair failed and we were unable to recover it. 00:28:28.251 [2024-12-11 10:06:37.738351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.251 [2024-12-11 10:06:37.738387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.251 qpair failed and we were unable to recover it. 00:28:28.251 [2024-12-11 10:06:37.738646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.251 [2024-12-11 10:06:37.738679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.251 qpair failed and we were unable to recover it. 
00:28:28.251 [2024-12-11 10:06:37.738987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.251 [2024-12-11 10:06:37.739021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.251 qpair failed and we were unable to recover it. 00:28:28.251 [2024-12-11 10:06:37.739243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.251 [2024-12-11 10:06:37.739279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.251 qpair failed and we were unable to recover it. 00:28:28.251 [2024-12-11 10:06:37.739484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.251 [2024-12-11 10:06:37.739519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.251 qpair failed and we were unable to recover it. 00:28:28.251 [2024-12-11 10:06:37.739716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.251 [2024-12-11 10:06:37.739749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.251 qpair failed and we were unable to recover it. 00:28:28.251 [2024-12-11 10:06:37.740026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.251 [2024-12-11 10:06:37.740060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.251 qpair failed and we were unable to recover it. 00:28:28.251 [2024-12-11 10:06:37.740308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.251 [2024-12-11 10:06:37.740343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.251 qpair failed and we were unable to recover it. 00:28:28.251 [2024-12-11 10:06:37.740655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.251 [2024-12-11 10:06:37.740690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.251 qpair failed and we were unable to recover it. 00:28:28.251 [2024-12-11 10:06:37.740974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.251 [2024-12-11 10:06:37.741008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.251 qpair failed and we were unable to recover it. 00:28:28.251 [2024-12-11 10:06:37.741202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.251 [2024-12-11 10:06:37.741246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.251 qpair failed and we were unable to recover it. 00:28:28.251 [2024-12-11 10:06:37.741458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.251 [2024-12-11 10:06:37.741493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.251 qpair failed and we were unable to recover it. 
00:28:28.251 [2024-12-11 10:06:37.741643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.251 [2024-12-11 10:06:37.741677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.251 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats continuously, with only the wall-clock timestamps advancing from 10:06:37.741 through 10:06:37.798; elapsed time stays at 00:28:28.251-00:28:28.534 ...]
00:28:28.534 [2024-12-11 10:06:37.798402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.534 [2024-12-11 10:06:37.798437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.534 qpair failed and we were unable to recover it. 00:28:28.534 [2024-12-11 10:06:37.798738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.534 [2024-12-11 10:06:37.798771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.534 qpair failed and we were unable to recover it. 00:28:28.534 [2024-12-11 10:06:37.798956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.534 [2024-12-11 10:06:37.798990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.534 qpair failed and we were unable to recover it. 00:28:28.534 [2024-12-11 10:06:37.799253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.534 [2024-12-11 10:06:37.799288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.534 qpair failed and we were unable to recover it. 00:28:28.534 [2024-12-11 10:06:37.799502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.534 [2024-12-11 10:06:37.799537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.534 qpair failed and we were unable to recover it. 00:28:28.534 [2024-12-11 10:06:37.799739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.534 [2024-12-11 10:06:37.799773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.534 qpair failed and we were unable to recover it. 00:28:28.534 [2024-12-11 10:06:37.799978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.534 [2024-12-11 10:06:37.800012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.534 qpair failed and we were unable to recover it. 00:28:28.534 [2024-12-11 10:06:37.800237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.534 [2024-12-11 10:06:37.800272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.534 qpair failed and we were unable to recover it. 00:28:28.534 [2024-12-11 10:06:37.800498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.534 [2024-12-11 10:06:37.800537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.534 qpair failed and we were unable to recover it. 00:28:28.534 [2024-12-11 10:06:37.800761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.534 [2024-12-11 10:06:37.800794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.534 qpair failed and we were unable to recover it. 
00:28:28.534 [2024-12-11 10:06:37.801011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.534 [2024-12-11 10:06:37.801045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.534 qpair failed and we were unable to recover it. 00:28:28.534 [2024-12-11 10:06:37.801239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.534 [2024-12-11 10:06:37.801274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.534 qpair failed and we were unable to recover it. 00:28:28.534 [2024-12-11 10:06:37.801478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.534 [2024-12-11 10:06:37.801519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.534 qpair failed and we were unable to recover it. 00:28:28.534 [2024-12-11 10:06:37.801796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.534 [2024-12-11 10:06:37.801830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.534 qpair failed and we were unable to recover it. 00:28:28.534 [2024-12-11 10:06:37.802042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.534 [2024-12-11 10:06:37.802076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.534 qpair failed and we were unable to recover it. 00:28:28.534 [2024-12-11 10:06:37.802365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.534 [2024-12-11 10:06:37.802402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.534 qpair failed and we were unable to recover it. 00:28:28.534 [2024-12-11 10:06:37.802605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.534 [2024-12-11 10:06:37.802640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.534 qpair failed and we were unable to recover it. 00:28:28.534 [2024-12-11 10:06:37.802771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.534 [2024-12-11 10:06:37.802805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.534 qpair failed and we were unable to recover it. 00:28:28.534 [2024-12-11 10:06:37.803109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.534 [2024-12-11 10:06:37.803143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.534 qpair failed and we were unable to recover it. 00:28:28.534 [2024-12-11 10:06:37.803400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.534 [2024-12-11 10:06:37.803435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.534 qpair failed and we were unable to recover it. 
00:28:28.534 [2024-12-11 10:06:37.803628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.534 [2024-12-11 10:06:37.803663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.534 qpair failed and we were unable to recover it. 00:28:28.534 [2024-12-11 10:06:37.803920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.534 [2024-12-11 10:06:37.803954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.534 qpair failed and we were unable to recover it. 00:28:28.534 [2024-12-11 10:06:37.804162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.534 [2024-12-11 10:06:37.804197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.534 qpair failed and we were unable to recover it. 00:28:28.534 [2024-12-11 10:06:37.804423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.534 [2024-12-11 10:06:37.804458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.534 qpair failed and we were unable to recover it. 00:28:28.534 [2024-12-11 10:06:37.804582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.534 [2024-12-11 10:06:37.804616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.534 qpair failed and we were unable to recover it. 00:28:28.534 [2024-12-11 10:06:37.804811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.534 [2024-12-11 10:06:37.804846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.534 qpair failed and we were unable to recover it. 00:28:28.534 [2024-12-11 10:06:37.805106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.534 [2024-12-11 10:06:37.805140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.534 qpair failed and we were unable to recover it. 00:28:28.534 [2024-12-11 10:06:37.805363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.535 [2024-12-11 10:06:37.805398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.535 qpair failed and we were unable to recover it. 00:28:28.535 [2024-12-11 10:06:37.805672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.535 [2024-12-11 10:06:37.805705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.535 qpair failed and we were unable to recover it. 00:28:28.535 [2024-12-11 10:06:37.805962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.535 [2024-12-11 10:06:37.805996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.535 qpair failed and we were unable to recover it. 
00:28:28.535 [2024-12-11 10:06:37.806234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.535 [2024-12-11 10:06:37.806270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.535 qpair failed and we were unable to recover it. 00:28:28.535 [2024-12-11 10:06:37.806551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.535 [2024-12-11 10:06:37.806585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.535 qpair failed and we were unable to recover it. 00:28:28.535 [2024-12-11 10:06:37.806784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.535 [2024-12-11 10:06:37.806819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.535 qpair failed and we were unable to recover it. 00:28:28.535 [2024-12-11 10:06:37.807118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.535 [2024-12-11 10:06:37.807152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.535 qpair failed and we were unable to recover it. 00:28:28.535 [2024-12-11 10:06:37.807448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.535 [2024-12-11 10:06:37.807485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.535 qpair failed and we were unable to recover it. 00:28:28.535 [2024-12-11 10:06:37.807750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.535 [2024-12-11 10:06:37.807785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.535 qpair failed and we were unable to recover it. 00:28:28.535 [2024-12-11 10:06:37.808074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.535 [2024-12-11 10:06:37.808108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.535 qpair failed and we were unable to recover it. 00:28:28.535 [2024-12-11 10:06:37.808409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.535 [2024-12-11 10:06:37.808444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.535 qpair failed and we were unable to recover it. 00:28:28.535 [2024-12-11 10:06:37.808708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.535 [2024-12-11 10:06:37.808743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.535 qpair failed and we were unable to recover it. 00:28:28.535 [2024-12-11 10:06:37.809034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.535 [2024-12-11 10:06:37.809068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.535 qpair failed and we were unable to recover it. 
00:28:28.535 [2024-12-11 10:06:37.809340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.535 [2024-12-11 10:06:37.809375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.535 qpair failed and we were unable to recover it. 00:28:28.535 [2024-12-11 10:06:37.809576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.535 [2024-12-11 10:06:37.809611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.535 qpair failed and we were unable to recover it. 00:28:28.535 [2024-12-11 10:06:37.809727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.535 [2024-12-11 10:06:37.809761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.535 qpair failed and we were unable to recover it. 00:28:28.535 [2024-12-11 10:06:37.810035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.535 [2024-12-11 10:06:37.810070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.535 qpair failed and we were unable to recover it. 00:28:28.535 [2024-12-11 10:06:37.810275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.535 [2024-12-11 10:06:37.810312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.535 qpair failed and we were unable to recover it. 00:28:28.535 [2024-12-11 10:06:37.810568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.535 [2024-12-11 10:06:37.810602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.535 qpair failed and we were unable to recover it. 00:28:28.535 [2024-12-11 10:06:37.810800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.535 [2024-12-11 10:06:37.810835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.535 qpair failed and we were unable to recover it. 00:28:28.535 [2024-12-11 10:06:37.811110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.535 [2024-12-11 10:06:37.811145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.535 qpair failed and we were unable to recover it. 00:28:28.535 [2024-12-11 10:06:37.811369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.535 [2024-12-11 10:06:37.811403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.535 qpair failed and we were unable to recover it. 00:28:28.535 [2024-12-11 10:06:37.811552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.535 [2024-12-11 10:06:37.811587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.535 qpair failed and we were unable to recover it. 
00:28:28.535 [2024-12-11 10:06:37.811771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.535 [2024-12-11 10:06:37.811804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.535 qpair failed and we were unable to recover it. 00:28:28.535 [2024-12-11 10:06:37.812026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.535 [2024-12-11 10:06:37.812061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.535 qpair failed and we were unable to recover it. 00:28:28.535 [2024-12-11 10:06:37.812267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.535 [2024-12-11 10:06:37.812308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.535 qpair failed and we were unable to recover it. 00:28:28.535 [2024-12-11 10:06:37.812589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.535 [2024-12-11 10:06:37.812624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.535 qpair failed and we were unable to recover it. 00:28:28.535 [2024-12-11 10:06:37.812899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.535 [2024-12-11 10:06:37.812933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.535 qpair failed and we were unable to recover it. 00:28:28.535 [2024-12-11 10:06:37.813226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.535 [2024-12-11 10:06:37.813262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.535 qpair failed and we were unable to recover it. 00:28:28.535 [2024-12-11 10:06:37.813470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.535 [2024-12-11 10:06:37.813505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.535 qpair failed and we were unable to recover it. 00:28:28.535 [2024-12-11 10:06:37.813770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.535 [2024-12-11 10:06:37.813805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.535 qpair failed and we were unable to recover it. 00:28:28.535 [2024-12-11 10:06:37.814059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.535 [2024-12-11 10:06:37.814093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.535 qpair failed and we were unable to recover it. 00:28:28.535 [2024-12-11 10:06:37.814284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.535 [2024-12-11 10:06:37.814320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.535 qpair failed and we were unable to recover it. 
00:28:28.535 [2024-12-11 10:06:37.814600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.535 [2024-12-11 10:06:37.814634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.535 qpair failed and we were unable to recover it. 00:28:28.535 [2024-12-11 10:06:37.814954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.535 [2024-12-11 10:06:37.814987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.535 qpair failed and we were unable to recover it. 00:28:28.535 [2024-12-11 10:06:37.815284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.535 [2024-12-11 10:06:37.815320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.535 qpair failed and we were unable to recover it. 00:28:28.535 [2024-12-11 10:06:37.815535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.535 [2024-12-11 10:06:37.815569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.535 qpair failed and we were unable to recover it. 00:28:28.535 [2024-12-11 10:06:37.815845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.535 [2024-12-11 10:06:37.815880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.535 qpair failed and we were unable to recover it. 00:28:28.535 [2024-12-11 10:06:37.816018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.535 [2024-12-11 10:06:37.816051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.535 qpair failed and we were unable to recover it. 00:28:28.535 [2024-12-11 10:06:37.816361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.536 [2024-12-11 10:06:37.816397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.536 qpair failed and we were unable to recover it. 00:28:28.536 [2024-12-11 10:06:37.816580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.536 [2024-12-11 10:06:37.816613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.536 qpair failed and we were unable to recover it. 00:28:28.536 [2024-12-11 10:06:37.816814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.536 [2024-12-11 10:06:37.816849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.536 qpair failed and we were unable to recover it. 00:28:28.536 [2024-12-11 10:06:37.817124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.536 [2024-12-11 10:06:37.817158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.536 qpair failed and we were unable to recover it. 
00:28:28.536 [2024-12-11 10:06:37.817367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.536 [2024-12-11 10:06:37.817402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.536 qpair failed and we were unable to recover it. 00:28:28.536 [2024-12-11 10:06:37.817537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.536 [2024-12-11 10:06:37.817571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.536 qpair failed and we were unable to recover it. 00:28:28.536 [2024-12-11 10:06:37.817849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.536 [2024-12-11 10:06:37.817883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.536 qpair failed and we were unable to recover it. 00:28:28.536 [2024-12-11 10:06:37.818104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.536 [2024-12-11 10:06:37.818137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.536 qpair failed and we were unable to recover it. 00:28:28.536 [2024-12-11 10:06:37.818418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.536 [2024-12-11 10:06:37.818454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.536 qpair failed and we were unable to recover it. 00:28:28.536 [2024-12-11 10:06:37.818642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.536 [2024-12-11 10:06:37.818677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.536 qpair failed and we were unable to recover it. 00:28:28.536 [2024-12-11 10:06:37.818902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.536 [2024-12-11 10:06:37.818936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.536 qpair failed and we were unable to recover it. 00:28:28.536 [2024-12-11 10:06:37.819190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.536 [2024-12-11 10:06:37.819243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.536 qpair failed and we were unable to recover it. 00:28:28.536 [2024-12-11 10:06:37.819443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.536 [2024-12-11 10:06:37.819477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.536 qpair failed and we were unable to recover it. 00:28:28.536 [2024-12-11 10:06:37.819758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.536 [2024-12-11 10:06:37.819792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.536 qpair failed and we were unable to recover it. 
00:28:28.536 [2024-12-11 10:06:37.819997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.536 [2024-12-11 10:06:37.820032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.536 qpair failed and we were unable to recover it. 00:28:28.536 [2024-12-11 10:06:37.820333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.536 [2024-12-11 10:06:37.820369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.536 qpair failed and we were unable to recover it. 00:28:28.536 [2024-12-11 10:06:37.820566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.536 [2024-12-11 10:06:37.820601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.536 qpair failed and we were unable to recover it. 00:28:28.536 [2024-12-11 10:06:37.820814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.536 [2024-12-11 10:06:37.820848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.536 qpair failed and we were unable to recover it. 00:28:28.536 [2024-12-11 10:06:37.821137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.536 [2024-12-11 10:06:37.821171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.536 qpair failed and we were unable to recover it. 00:28:28.536 [2024-12-11 10:06:37.821448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.536 [2024-12-11 10:06:37.821485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.536 qpair failed and we were unable to recover it. 00:28:28.536 [2024-12-11 10:06:37.821769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.536 [2024-12-11 10:06:37.821803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.536 qpair failed and we were unable to recover it. 00:28:28.536 [2024-12-11 10:06:37.822038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.536 [2024-12-11 10:06:37.822072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.536 qpair failed and we were unable to recover it. 00:28:28.536 [2024-12-11 10:06:37.822339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.536 [2024-12-11 10:06:37.822375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.536 qpair failed and we were unable to recover it. 00:28:28.536 [2024-12-11 10:06:37.822600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.536 [2024-12-11 10:06:37.822634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.536 qpair failed and we were unable to recover it. 
00:28:28.536 [2024-12-11 10:06:37.822862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.536 [2024-12-11 10:06:37.822896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.536 qpair failed and we were unable to recover it. 00:28:28.536 [2024-12-11 10:06:37.823094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.536 [2024-12-11 10:06:37.823129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.536 qpair failed and we were unable to recover it. 00:28:28.536 [2024-12-11 10:06:37.823348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.536 [2024-12-11 10:06:37.823389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.536 qpair failed and we were unable to recover it. 00:28:28.536 [2024-12-11 10:06:37.823649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.536 [2024-12-11 10:06:37.823683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.536 qpair failed and we were unable to recover it. 00:28:28.536 [2024-12-11 10:06:37.823951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.536 [2024-12-11 10:06:37.823986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.536 qpair failed and we were unable to recover it. 00:28:28.536 [2024-12-11 10:06:37.824183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.536 [2024-12-11 10:06:37.824230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.536 qpair failed and we were unable to recover it. 00:28:28.536 [2024-12-11 10:06:37.824429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.536 [2024-12-11 10:06:37.824463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.536 qpair failed and we were unable to recover it. 00:28:28.536 [2024-12-11 10:06:37.824741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.536 [2024-12-11 10:06:37.824775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.536 qpair failed and we were unable to recover it. 00:28:28.536 [2024-12-11 10:06:37.825081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.536 [2024-12-11 10:06:37.825114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.536 qpair failed and we were unable to recover it. 00:28:28.536 [2024-12-11 10:06:37.825394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.536 [2024-12-11 10:06:37.825430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.536 qpair failed and we were unable to recover it. 
00:28:28.537 [2024-12-11 10:06:37.825615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.537 [2024-12-11 10:06:37.825649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.537 qpair failed and we were unable to recover it. 00:28:28.537 [2024-12-11 10:06:37.825852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.537 [2024-12-11 10:06:37.825887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.537 qpair failed and we were unable to recover it. 00:28:28.537 [2024-12-11 10:06:37.826108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.537 [2024-12-11 10:06:37.826142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.537 qpair failed and we were unable to recover it. 00:28:28.537 [2024-12-11 10:06:37.826418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.537 [2024-12-11 10:06:37.826453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.537 qpair failed and we were unable to recover it. 00:28:28.537 [2024-12-11 10:06:37.826712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.537 [2024-12-11 10:06:37.826747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.537 qpair failed and we were unable to recover it. 00:28:28.537 [2024-12-11 10:06:37.826992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.537 [2024-12-11 10:06:37.827026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.537 qpair failed and we were unable to recover it. 00:28:28.537 [2024-12-11 10:06:37.827238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.537 [2024-12-11 10:06:37.827274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.537 qpair failed and we were unable to recover it. 00:28:28.537 [2024-12-11 10:06:37.827532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.537 [2024-12-11 10:06:37.827567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.537 qpair failed and we were unable to recover it. 00:28:28.537 [2024-12-11 10:06:37.827847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.537 [2024-12-11 10:06:37.827881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.537 qpair failed and we were unable to recover it. 00:28:28.537 [2024-12-11 10:06:37.828185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.537 [2024-12-11 10:06:37.828230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.537 qpair failed and we were unable to recover it. 
00:28:28.537 [2024-12-11 10:06:37.828446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.537 [2024-12-11 10:06:37.828481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.537 qpair failed and we were unable to recover it. 00:28:28.537 [2024-12-11 10:06:37.828705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.537 [2024-12-11 10:06:37.828739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.537 qpair failed and we were unable to recover it. 00:28:28.537 [2024-12-11 10:06:37.829052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.537 [2024-12-11 10:06:37.829087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.537 qpair failed and we were unable to recover it. 00:28:28.537 [2024-12-11 10:06:37.829363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.537 [2024-12-11 10:06:37.829400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.537 qpair failed and we were unable to recover it. 00:28:28.537 [2024-12-11 10:06:37.829606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.537 [2024-12-11 10:06:37.829641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.537 qpair failed and we were unable to recover it. 00:28:28.537 [2024-12-11 10:06:37.829790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.537 [2024-12-11 10:06:37.829824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.537 qpair failed and we were unable to recover it. 00:28:28.537 [2024-12-11 10:06:37.830018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.537 [2024-12-11 10:06:37.830053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.537 qpair failed and we were unable to recover it. 00:28:28.537 [2024-12-11 10:06:37.830241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.537 [2024-12-11 10:06:37.830277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.537 qpair failed and we were unable to recover it. 00:28:28.537 [2024-12-11 10:06:37.830562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.537 [2024-12-11 10:06:37.830597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.537 qpair failed and we were unable to recover it. 00:28:28.537 [2024-12-11 10:06:37.830874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.537 [2024-12-11 10:06:37.830910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.537 qpair failed and we were unable to recover it. 
00:28:28.537 [2024-12-11 10:06:37.831194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.537 [2024-12-11 10:06:37.831238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.537 qpair failed and we were unable to recover it. 00:28:28.537 [2024-12-11 10:06:37.831507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.537 [2024-12-11 10:06:37.831542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.537 qpair failed and we were unable to recover it. 00:28:28.537 [2024-12-11 10:06:37.831689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.537 [2024-12-11 10:06:37.831723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.537 qpair failed and we were unable to recover it. 00:28:28.537 [2024-12-11 10:06:37.831954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.537 [2024-12-11 10:06:37.831989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.537 qpair failed and we were unable to recover it. 00:28:28.537 [2024-12-11 10:06:37.832105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.537 [2024-12-11 10:06:37.832139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.537 qpair failed and we were unable to recover it. 00:28:28.537 [2024-12-11 10:06:37.832410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.537 [2024-12-11 10:06:37.832444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.537 qpair failed and we were unable to recover it. 00:28:28.537 [2024-12-11 10:06:37.832651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.537 [2024-12-11 10:06:37.832685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.537 qpair failed and we were unable to recover it. 00:28:28.537 [2024-12-11 10:06:37.832888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.537 [2024-12-11 10:06:37.832922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.537 qpair failed and we were unable to recover it. 00:28:28.537 [2024-12-11 10:06:37.833112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.537 [2024-12-11 10:06:37.833146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.537 qpair failed and we were unable to recover it. 00:28:28.537 [2024-12-11 10:06:37.833379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.537 [2024-12-11 10:06:37.833415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.537 qpair failed and we were unable to recover it. 
00:28:28.537 [2024-12-11 10:06:37.833644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.537 [2024-12-11 10:06:37.833679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.537 qpair failed and we were unable to recover it. 00:28:28.537 [2024-12-11 10:06:37.833956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.537 [2024-12-11 10:06:37.833990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.537 qpair failed and we were unable to recover it. 00:28:28.537 [2024-12-11 10:06:37.834193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.537 [2024-12-11 10:06:37.834246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.537 qpair failed and we were unable to recover it. 00:28:28.537 [2024-12-11 10:06:37.834506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.537 [2024-12-11 10:06:37.834540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.537 qpair failed and we were unable to recover it. 00:28:28.537 [2024-12-11 10:06:37.834744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.537 [2024-12-11 10:06:37.834777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.537 qpair failed and we were unable to recover it. 00:28:28.537 [2024-12-11 10:06:37.835036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.537 [2024-12-11 10:06:37.835071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.537 qpair failed and we were unable to recover it. 00:28:28.537 [2024-12-11 10:06:37.835189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.537 [2024-12-11 10:06:37.835241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.537 qpair failed and we were unable to recover it. 00:28:28.537 [2024-12-11 10:06:37.835522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.537 [2024-12-11 10:06:37.835555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.537 qpair failed and we were unable to recover it. 00:28:28.537 [2024-12-11 10:06:37.835762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.537 [2024-12-11 10:06:37.835796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.537 qpair failed and we were unable to recover it. 00:28:28.538 [2024-12-11 10:06:37.836100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.538 [2024-12-11 10:06:37.836134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.538 qpair failed and we were unable to recover it. 
00:28:28.538 [2024-12-11 10:06:37.836416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.538 [2024-12-11 10:06:37.836451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.538 qpair failed and we were unable to recover it. 00:28:28.538 [2024-12-11 10:06:37.836730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.538 [2024-12-11 10:06:37.836765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.538 qpair failed and we were unable to recover it. 00:28:28.538 [2024-12-11 10:06:37.837027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.538 [2024-12-11 10:06:37.837062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.538 qpair failed and we were unable to recover it. 00:28:28.538 [2024-12-11 10:06:37.837367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.538 [2024-12-11 10:06:37.837402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.538 qpair failed and we were unable to recover it. 00:28:28.538 [2024-12-11 10:06:37.837659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.538 [2024-12-11 10:06:37.837694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.538 qpair failed and we were unable to recover it. 00:28:28.538 [2024-12-11 10:06:37.837991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.538 [2024-12-11 10:06:37.838025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.538 qpair failed and we were unable to recover it. 00:28:28.538 [2024-12-11 10:06:37.838357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.538 [2024-12-11 10:06:37.838393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.538 qpair failed and we were unable to recover it. 00:28:28.538 [2024-12-11 10:06:37.838604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.538 [2024-12-11 10:06:37.838638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.538 qpair failed and we were unable to recover it. 00:28:28.538 [2024-12-11 10:06:37.838911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.538 [2024-12-11 10:06:37.838944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.538 qpair failed and we were unable to recover it. 00:28:28.538 [2024-12-11 10:06:37.839252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.538 [2024-12-11 10:06:37.839287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.538 qpair failed and we were unable to recover it. 
00:28:28.538 [2024-12-11 10:06:37.839433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.538 [2024-12-11 10:06:37.839467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.538 qpair failed and we were unable to recover it. 00:28:28.538 [2024-12-11 10:06:37.839673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.538 [2024-12-11 10:06:37.839708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.538 qpair failed and we were unable to recover it. 00:28:28.538 [2024-12-11 10:06:37.839901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.538 [2024-12-11 10:06:37.839936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.538 qpair failed and we were unable to recover it. 00:28:28.538 [2024-12-11 10:06:37.840241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.538 [2024-12-11 10:06:37.840276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.538 qpair failed and we were unable to recover it. 00:28:28.538 [2024-12-11 10:06:37.840468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.538 [2024-12-11 10:06:37.840503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.538 qpair failed and we were unable to recover it. 00:28:28.538 [2024-12-11 10:06:37.840689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.538 [2024-12-11 10:06:37.840723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.538 qpair failed and we were unable to recover it. 00:28:28.538 [2024-12-11 10:06:37.841000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.538 [2024-12-11 10:06:37.841033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.538 qpair failed and we were unable to recover it. 00:28:28.538 [2024-12-11 10:06:37.841230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.538 [2024-12-11 10:06:37.841265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.538 qpair failed and we were unable to recover it. 00:28:28.538 [2024-12-11 10:06:37.841456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.538 [2024-12-11 10:06:37.841490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.538 qpair failed and we were unable to recover it. 00:28:28.538 [2024-12-11 10:06:37.841751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.538 [2024-12-11 10:06:37.841785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.538 qpair failed and we were unable to recover it. 
00:28:28.538 [2024-12-11 10:06:37.842085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.538 [2024-12-11 10:06:37.842119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.538 qpair failed and we were unable to recover it. 00:28:28.538 [2024-12-11 10:06:37.842400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.538 [2024-12-11 10:06:37.842436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.538 qpair failed and we were unable to recover it. 00:28:28.538 [2024-12-11 10:06:37.842642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.538 [2024-12-11 10:06:37.842676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.538 qpair failed and we were unable to recover it. 00:28:28.538 [2024-12-11 10:06:37.842933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.538 [2024-12-11 10:06:37.842967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.538 qpair failed and we were unable to recover it. 00:28:28.538 [2024-12-11 10:06:37.843152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.538 [2024-12-11 10:06:37.843186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.538 qpair failed and we were unable to recover it. 00:28:28.538 [2024-12-11 10:06:37.843497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.538 [2024-12-11 10:06:37.843532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.538 qpair failed and we were unable to recover it. 00:28:28.538 [2024-12-11 10:06:37.843814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.538 [2024-12-11 10:06:37.843848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.538 qpair failed and we were unable to recover it. 00:28:28.538 [2024-12-11 10:06:37.844107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.538 [2024-12-11 10:06:37.844141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.538 qpair failed and we were unable to recover it. 00:28:28.538 [2024-12-11 10:06:37.844443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.538 [2024-12-11 10:06:37.844479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.538 qpair failed and we were unable to recover it. 00:28:28.538 [2024-12-11 10:06:37.844685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.538 [2024-12-11 10:06:37.844718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.538 qpair failed and we were unable to recover it. 
00:28:28.538 [2024-12-11 10:06:37.845000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.538 [2024-12-11 10:06:37.845034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.538 qpair failed and we were unable to recover it. 00:28:28.538 [2024-12-11 10:06:37.845229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.538 [2024-12-11 10:06:37.845265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.538 qpair failed and we were unable to recover it. 00:28:28.538 [2024-12-11 10:06:37.845527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.538 [2024-12-11 10:06:37.845562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.538 qpair failed and we were unable to recover it. 00:28:28.538 [2024-12-11 10:06:37.845821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.538 [2024-12-11 10:06:37.845856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.538 qpair failed and we were unable to recover it. 00:28:28.538 [2024-12-11 10:06:37.846160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.538 [2024-12-11 10:06:37.846194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.538 qpair failed and we were unable to recover it. 00:28:28.538 [2024-12-11 10:06:37.846407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.538 [2024-12-11 10:06:37.846443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.538 qpair failed and we were unable to recover it. 00:28:28.538 [2024-12-11 10:06:37.846709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.538 [2024-12-11 10:06:37.846743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.538 qpair failed and we were unable to recover it. 00:28:28.538 [2024-12-11 10:06:37.846945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.538 [2024-12-11 10:06:37.846979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.538 qpair failed and we were unable to recover it. 00:28:28.539 [2024-12-11 10:06:37.847164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.539 [2024-12-11 10:06:37.847198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.539 qpair failed and we were unable to recover it. 00:28:28.539 [2024-12-11 10:06:37.847451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.539 [2024-12-11 10:06:37.847486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.539 qpair failed and we were unable to recover it. 
00:28:28.539 [2024-12-11 10:06:37.847670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.539 [2024-12-11 10:06:37.847704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.539 qpair failed and we were unable to recover it. 00:28:28.539 [2024-12-11 10:06:37.847968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.539 [2024-12-11 10:06:37.848002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.539 qpair failed and we were unable to recover it. 00:28:28.539 [2024-12-11 10:06:37.848207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.539 [2024-12-11 10:06:37.848263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.539 qpair failed and we were unable to recover it. 00:28:28.539 [2024-12-11 10:06:37.848541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.539 [2024-12-11 10:06:37.848576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.539 qpair failed and we were unable to recover it. 00:28:28.539 [2024-12-11 10:06:37.848781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.539 [2024-12-11 10:06:37.848814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.539 qpair failed and we were unable to recover it. 00:28:28.539 [2024-12-11 10:06:37.849056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.539 [2024-12-11 10:06:37.849089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.539 qpair failed and we were unable to recover it. 00:28:28.539 [2024-12-11 10:06:37.849211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.539 [2024-12-11 10:06:37.849256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.539 qpair failed and we were unable to recover it. 00:28:28.539 [2024-12-11 10:06:37.849486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.539 [2024-12-11 10:06:37.849519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.539 qpair failed and we were unable to recover it. 00:28:28.539 [2024-12-11 10:06:37.849821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.539 [2024-12-11 10:06:37.849855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.539 qpair failed and we were unable to recover it. 00:28:28.539 [2024-12-11 10:06:37.850069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.539 [2024-12-11 10:06:37.850103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.539 qpair failed and we were unable to recover it. 
00:28:28.539 [2024-12-11 10:06:37.850373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.539 [2024-12-11 10:06:37.850408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.539 qpair failed and we were unable to recover it. 00:28:28.539 [2024-12-11 10:06:37.850566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.539 [2024-12-11 10:06:37.850601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.539 qpair failed and we were unable to recover it. 00:28:28.539 [2024-12-11 10:06:37.850875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.539 [2024-12-11 10:06:37.850908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.539 qpair failed and we were unable to recover it. 00:28:28.539 [2024-12-11 10:06:37.851164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.539 [2024-12-11 10:06:37.851198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.539 qpair failed and we were unable to recover it. 00:28:28.539 [2024-12-11 10:06:37.851429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.539 [2024-12-11 10:06:37.851465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.539 qpair failed and we were unable to recover it. 00:28:28.539 [2024-12-11 10:06:37.851663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.539 [2024-12-11 10:06:37.851698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.539 qpair failed and we were unable to recover it. 00:28:28.539 [2024-12-11 10:06:37.851981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.539 [2024-12-11 10:06:37.852016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.539 qpair failed and we were unable to recover it. 00:28:28.539 [2024-12-11 10:06:37.852240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.539 [2024-12-11 10:06:37.852276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.539 qpair failed and we were unable to recover it. 00:28:28.539 [2024-12-11 10:06:37.852499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.539 [2024-12-11 10:06:37.852534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.539 qpair failed and we were unable to recover it. 00:28:28.539 [2024-12-11 10:06:37.852817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.539 [2024-12-11 10:06:37.852857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.539 qpair failed and we were unable to recover it. 
00:28:28.539 [2024-12-11 10:06:37.853066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.539 [2024-12-11 10:06:37.853100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.539 qpair failed and we were unable to recover it. 00:28:28.539 [2024-12-11 10:06:37.853379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.539 [2024-12-11 10:06:37.853415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.539 qpair failed and we were unable to recover it. 00:28:28.539 [2024-12-11 10:06:37.853715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.539 [2024-12-11 10:06:37.853750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.539 qpair failed and we were unable to recover it. 00:28:28.539 [2024-12-11 10:06:37.853935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.539 [2024-12-11 10:06:37.853969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.539 qpair failed and we were unable to recover it. 00:28:28.539 [2024-12-11 10:06:37.854154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.539 [2024-12-11 10:06:37.854188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.539 qpair failed and we were unable to recover it. 00:28:28.539 [2024-12-11 10:06:37.854459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.539 [2024-12-11 10:06:37.854495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.539 qpair failed and we were unable to recover it. 00:28:28.539 [2024-12-11 10:06:37.854715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.539 [2024-12-11 10:06:37.854749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.539 qpair failed and we were unable to recover it. 00:28:28.539 [2024-12-11 10:06:37.854958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.539 [2024-12-11 10:06:37.854994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.539 qpair failed and we were unable to recover it. 00:28:28.539 [2024-12-11 10:06:37.855190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.539 [2024-12-11 10:06:37.855237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.539 qpair failed and we were unable to recover it. 00:28:28.539 [2024-12-11 10:06:37.855512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.539 [2024-12-11 10:06:37.855547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.539 qpair failed and we were unable to recover it. 
00:28:28.539 [2024-12-11 10:06:37.855844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.539 [2024-12-11 10:06:37.855878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.539 qpair failed and we were unable to recover it. 00:28:28.539 [2024-12-11 10:06:37.856149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.539 [2024-12-11 10:06:37.856182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.539 qpair failed and we were unable to recover it. 00:28:28.539 [2024-12-11 10:06:37.856304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xff2460 (9): Bad file descriptor 00:28:28.539 [2024-12-11 10:06:37.856675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.539 [2024-12-11 10:06:37.856758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.539 qpair failed and we were unable to recover it. 00:28:28.539 [2024-12-11 10:06:37.857068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.539 [2024-12-11 10:06:37.857106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.539 qpair failed and we were unable to recover it. 00:28:28.539 [2024-12-11 10:06:37.857414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.539 [2024-12-11 10:06:37.857451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.539 qpair failed and we were unable to recover it. 00:28:28.539 [2024-12-11 10:06:37.857655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.539 [2024-12-11 10:06:37.857689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.540 qpair failed and we were unable to recover it. 00:28:28.540 [2024-12-11 10:06:37.857968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.540 [2024-12-11 10:06:37.858003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.540 qpair failed and we were unable to recover it. 00:28:28.540 [2024-12-11 10:06:37.858291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.540 [2024-12-11 10:06:37.858327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.540 qpair failed and we were unable to recover it. 00:28:28.540 [2024-12-11 10:06:37.858605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.540 [2024-12-11 10:06:37.858640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.540 qpair failed and we were unable to recover it. 
00:28:28.540 [2024-12-11 10:06:37.858891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.540 [2024-12-11 10:06:37.858926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.540 qpair failed and we were unable to recover it. 00:28:28.540 [2024-12-11 10:06:37.859205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.540 [2024-12-11 10:06:37.859251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.540 qpair failed and we were unable to recover it. 00:28:28.540 [2024-12-11 10:06:37.859447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.540 [2024-12-11 10:06:37.859482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.540 qpair failed and we were unable to recover it. 00:28:28.540 [2024-12-11 10:06:37.859763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.540 [2024-12-11 10:06:37.859797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.540 qpair failed and we were unable to recover it. 00:28:28.540 [2024-12-11 10:06:37.860081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.540 [2024-12-11 10:06:37.860115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.540 qpair failed and we were unable to recover it. 00:28:28.540 [2024-12-11 10:06:37.860347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.540 [2024-12-11 10:06:37.860382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.540 qpair failed and we were unable to recover it. 00:28:28.540 [2024-12-11 10:06:37.860689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.540 [2024-12-11 10:06:37.860726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.540 qpair failed and we were unable to recover it. 00:28:28.540 [2024-12-11 10:06:37.860982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.540 [2024-12-11 10:06:37.861017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.540 qpair failed and we were unable to recover it. 00:28:28.540 [2024-12-11 10:06:37.861318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.540 [2024-12-11 10:06:37.861352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.540 qpair failed and we were unable to recover it. 00:28:28.540 [2024-12-11 10:06:37.861551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.540 [2024-12-11 10:06:37.861585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.540 qpair failed and we were unable to recover it. 
00:28:28.540 [2024-12-11 10:06:37.861782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.540 [2024-12-11 10:06:37.861816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.540 qpair failed and we were unable to recover it. 00:28:28.540 [2024-12-11 10:06:37.862022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.540 [2024-12-11 10:06:37.862057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.540 qpair failed and we were unable to recover it. 00:28:28.540 [2024-12-11 10:06:37.862311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.540 [2024-12-11 10:06:37.862346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.540 qpair failed and we were unable to recover it. 00:28:28.540 [2024-12-11 10:06:37.862483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.540 [2024-12-11 10:06:37.862516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.540 qpair failed and we were unable to recover it. 00:28:28.540 [2024-12-11 10:06:37.862770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.540 [2024-12-11 10:06:37.862804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.540 qpair failed and we were unable to recover it. 00:28:28.540 [2024-12-11 10:06:37.863013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.540 [2024-12-11 10:06:37.863048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.540 qpair failed and we were unable to recover it. 00:28:28.540 [2024-12-11 10:06:37.863332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.540 [2024-12-11 10:06:37.863368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.540 qpair failed and we were unable to recover it. 00:28:28.540 [2024-12-11 10:06:37.863582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.540 [2024-12-11 10:06:37.863617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.540 qpair failed and we were unable to recover it. 00:28:28.540 [2024-12-11 10:06:37.863885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.540 [2024-12-11 10:06:37.863920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.540 qpair failed and we were unable to recover it. 00:28:28.540 [2024-12-11 10:06:37.864212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.540 [2024-12-11 10:06:37.864257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.540 qpair failed and we were unable to recover it. 
00:28:28.540 [2024-12-11 10:06:37.864457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.540 [2024-12-11 10:06:37.864496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.540 qpair failed and we were unable to recover it. 00:28:28.540 [2024-12-11 10:06:37.864779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.540 [2024-12-11 10:06:37.864809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.540 qpair failed and we were unable to recover it. 00:28:28.540 [2024-12-11 10:06:37.865010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.540 [2024-12-11 10:06:37.865042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.540 qpair failed and we were unable to recover it. 00:28:28.540 [2024-12-11 10:06:37.865252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.540 [2024-12-11 10:06:37.865284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.540 qpair failed and we were unable to recover it. 00:28:28.540 [2024-12-11 10:06:37.865492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.540 [2024-12-11 10:06:37.865523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.540 qpair failed and we were unable to recover it. 00:28:28.540 [2024-12-11 10:06:37.865745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.540 [2024-12-11 10:06:37.865777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.540 qpair failed and we were unable to recover it. 00:28:28.540 [2024-12-11 10:06:37.865972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.540 [2024-12-11 10:06:37.866003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.540 qpair failed and we were unable to recover it. 00:28:28.540 [2024-12-11 10:06:37.866146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.540 [2024-12-11 10:06:37.866179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.540 qpair failed and we were unable to recover it. 00:28:28.540 [2024-12-11 10:06:37.866452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.540 [2024-12-11 10:06:37.866484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.540 qpair failed and we were unable to recover it. 00:28:28.540 [2024-12-11 10:06:37.866637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.540 [2024-12-11 10:06:37.866669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.540 qpair failed and we were unable to recover it. 
00:28:28.540 [2024-12-11 10:06:37.866821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.540 [2024-12-11 10:06:37.866852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.540 qpair failed and we were unable to recover it. 00:28:28.540 [2024-12-11 10:06:37.867063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.540 [2024-12-11 10:06:37.867096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.540 qpair failed and we were unable to recover it. 00:28:28.540 [2024-12-11 10:06:37.867278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.541 [2024-12-11 10:06:37.867312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.541 qpair failed and we were unable to recover it. 00:28:28.541 [2024-12-11 10:06:37.867451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.541 [2024-12-11 10:06:37.867483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.541 qpair failed and we were unable to recover it. 00:28:28.541 [2024-12-11 10:06:37.867609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.541 [2024-12-11 10:06:37.867641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.541 qpair failed and we were unable to recover it. 00:28:28.541 [2024-12-11 10:06:37.867847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.541 [2024-12-11 10:06:37.867878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.541 qpair failed and we were unable to recover it. 00:28:28.541 [2024-12-11 10:06:37.868098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.541 [2024-12-11 10:06:37.868128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.541 qpair failed and we were unable to recover it. 00:28:28.541 [2024-12-11 10:06:37.868261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.541 [2024-12-11 10:06:37.868294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.541 qpair failed and we were unable to recover it. 00:28:28.541 [2024-12-11 10:06:37.868408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.541 [2024-12-11 10:06:37.868440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.541 qpair failed and we were unable to recover it. 00:28:28.541 [2024-12-11 10:06:37.868627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.541 [2024-12-11 10:06:37.868657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.541 qpair failed and we were unable to recover it. 
00:28:28.541 [2024-12-11 10:06:37.868863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.541 [2024-12-11 10:06:37.868896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.541 qpair failed and we were unable to recover it. 00:28:28.541 [2024-12-11 10:06:37.869116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.541 [2024-12-11 10:06:37.869148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.541 qpair failed and we were unable to recover it. 00:28:28.541 [2024-12-11 10:06:37.869274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.541 [2024-12-11 10:06:37.869308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.541 qpair failed and we were unable to recover it. 00:28:28.541 [2024-12-11 10:06:37.869441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.541 [2024-12-11 10:06:37.869473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.541 qpair failed and we were unable to recover it. 00:28:28.541 [2024-12-11 10:06:37.869612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.541 [2024-12-11 10:06:37.869644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.541 qpair failed and we were unable to recover it. 00:28:28.541 [2024-12-11 10:06:37.869830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.541 [2024-12-11 10:06:37.869862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.541 qpair failed and we were unable to recover it. 00:28:28.541 [2024-12-11 10:06:37.869997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.541 [2024-12-11 10:06:37.870029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.541 qpair failed and we were unable to recover it. 00:28:28.541 [2024-12-11 10:06:37.870138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.541 [2024-12-11 10:06:37.870175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.541 qpair failed and we were unable to recover it. 00:28:28.541 [2024-12-11 10:06:37.870392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.541 [2024-12-11 10:06:37.870424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.541 qpair failed and we were unable to recover it. 00:28:28.541 [2024-12-11 10:06:37.870549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.541 [2024-12-11 10:06:37.870580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.541 qpair failed and we were unable to recover it. 
00:28:28.541 [2024-12-11 10:06:37.870768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.541 [2024-12-11 10:06:37.870799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.541 qpair failed and we were unable to recover it. 00:28:28.541 [2024-12-11 10:06:37.870931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.541 [2024-12-11 10:06:37.870963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.541 qpair failed and we were unable to recover it. 00:28:28.541 [2024-12-11 10:06:37.871159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.541 [2024-12-11 10:06:37.871193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.541 qpair failed and we were unable to recover it. 00:28:28.541 [2024-12-11 10:06:37.871329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.541 [2024-12-11 10:06:37.871366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.541 qpair failed and we were unable to recover it. 00:28:28.541 [2024-12-11 10:06:37.871627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.541 [2024-12-11 10:06:37.871660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.541 qpair failed and we were unable to recover it. 00:28:28.541 [2024-12-11 10:06:37.871915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.541 [2024-12-11 10:06:37.871945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.541 qpair failed and we were unable to recover it. 00:28:28.541 [2024-12-11 10:06:37.872152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.541 [2024-12-11 10:06:37.872181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.541 qpair failed and we were unable to recover it. 00:28:28.541 [2024-12-11 10:06:37.872380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.541 [2024-12-11 10:06:37.872415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.541 qpair failed and we were unable to recover it. 00:28:28.541 [2024-12-11 10:06:37.872661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.541 [2024-12-11 10:06:37.872693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.541 qpair failed and we were unable to recover it. 00:28:28.541 [2024-12-11 10:06:37.872906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.541 [2024-12-11 10:06:37.872937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.541 qpair failed and we were unable to recover it. 
00:28:28.541 [2024-12-11 10:06:37.873091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.541 [2024-12-11 10:06:37.873123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.541 qpair failed and we were unable to recover it. 00:28:28.541 [2024-12-11 10:06:37.873380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.541 [2024-12-11 10:06:37.873448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.541 qpair failed and we were unable to recover it. 00:28:28.541 [2024-12-11 10:06:37.873718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.541 [2024-12-11 10:06:37.873758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.541 qpair failed and we were unable to recover it. 00:28:28.541 [2024-12-11 10:06:37.873967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.541 [2024-12-11 10:06:37.874001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.541 qpair failed and we were unable to recover it. 00:28:28.541 [2024-12-11 10:06:37.874184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.541 [2024-12-11 10:06:37.874215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.541 qpair failed and we were unable to recover it. 00:28:28.541 [2024-12-11 10:06:37.874468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.541 [2024-12-11 10:06:37.874502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.541 qpair failed and we were unable to recover it. 00:28:28.541 [2024-12-11 10:06:37.874642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.541 [2024-12-11 10:06:37.874672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.542 qpair failed and we were unable to recover it. 00:28:28.542 [2024-12-11 10:06:37.874876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.542 [2024-12-11 10:06:37.874908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.542 qpair failed and we were unable to recover it. 00:28:28.542 [2024-12-11 10:06:37.875051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.542 [2024-12-11 10:06:37.875082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.542 qpair failed and we were unable to recover it. 00:28:28.542 [2024-12-11 10:06:37.875239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.542 [2024-12-11 10:06:37.875271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.542 qpair failed and we were unable to recover it. 
00:28:28.542 [2024-12-11 10:06:37.875487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.542 [2024-12-11 10:06:37.875518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.542 qpair failed and we were unable to recover it. 00:28:28.542 [2024-12-11 10:06:37.875639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.542 [2024-12-11 10:06:37.875671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.542 qpair failed and we were unable to recover it. 00:28:28.542 [2024-12-11 10:06:37.875888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.542 [2024-12-11 10:06:37.875919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.542 qpair failed and we were unable to recover it. 00:28:28.542 [2024-12-11 10:06:37.876170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.542 [2024-12-11 10:06:37.876201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.542 qpair failed and we were unable to recover it. 00:28:28.542 [2024-12-11 10:06:37.876414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.542 [2024-12-11 10:06:37.876456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.542 qpair failed and we were unable to recover it. 00:28:28.542 [2024-12-11 10:06:37.876615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.542 [2024-12-11 10:06:37.876646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.542 qpair failed and we were unable to recover it. 00:28:28.542 [2024-12-11 10:06:37.876929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.542 [2024-12-11 10:06:37.876960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.542 qpair failed and we were unable to recover it. 00:28:28.542 [2024-12-11 10:06:37.877215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.542 [2024-12-11 10:06:37.877259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.542 qpair failed and we were unable to recover it. 00:28:28.542 [2024-12-11 10:06:37.877582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.542 [2024-12-11 10:06:37.877613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.542 qpair failed and we were unable to recover it. 00:28:28.542 [2024-12-11 10:06:37.877908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.542 [2024-12-11 10:06:37.877939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.542 qpair failed and we were unable to recover it. 
00:28:28.542 [2024-12-11 10:06:37.878064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.542 [2024-12-11 10:06:37.878095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:28.542 qpair failed and we were unable to recover it.
[... the identical three-line sequence — posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. — repeats ~210 consecutive times between 10:06:37.878064 and 10:06:37.932309; only the first and last occurrences are shown ...]
00:28:28.548 [2024-12-11 10:06:37.932276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.548 [2024-12-11 10:06:37.932309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:28.548 qpair failed and we were unable to recover it.
00:28:28.548 [2024-12-11 10:06:37.932514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.548 [2024-12-11 10:06:37.932546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.548 qpair failed and we were unable to recover it. 00:28:28.548 [2024-12-11 10:06:37.932801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.548 [2024-12-11 10:06:37.932833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.548 qpair failed and we were unable to recover it. 00:28:28.548 [2024-12-11 10:06:37.933108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.548 [2024-12-11 10:06:37.933140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.548 qpair failed and we were unable to recover it. 00:28:28.548 [2024-12-11 10:06:37.933418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.548 [2024-12-11 10:06:37.933451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.548 qpair failed and we were unable to recover it. 00:28:28.548 [2024-12-11 10:06:37.933690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.548 [2024-12-11 10:06:37.933721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.548 qpair failed and we were unable to recover it. 00:28:28.548 [2024-12-11 10:06:37.933878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.548 [2024-12-11 10:06:37.933910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.548 qpair failed and we were unable to recover it. 00:28:28.548 [2024-12-11 10:06:37.934198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.548 [2024-12-11 10:06:37.934239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.548 qpair failed and we were unable to recover it. 00:28:28.549 [2024-12-11 10:06:37.934448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.549 [2024-12-11 10:06:37.934479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.549 qpair failed and we were unable to recover it. 00:28:28.549 [2024-12-11 10:06:37.934682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.549 [2024-12-11 10:06:37.934713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.549 qpair failed and we were unable to recover it. 00:28:28.549 [2024-12-11 10:06:37.934930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.549 [2024-12-11 10:06:37.934962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.549 qpair failed and we were unable to recover it. 
00:28:28.549 [2024-12-11 10:06:37.935166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.549 [2024-12-11 10:06:37.935196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.549 qpair failed and we were unable to recover it. 00:28:28.549 [2024-12-11 10:06:37.935352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.549 [2024-12-11 10:06:37.935390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.549 qpair failed and we were unable to recover it. 00:28:28.549 [2024-12-11 10:06:37.935533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.549 [2024-12-11 10:06:37.935564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.549 qpair failed and we were unable to recover it. 00:28:28.549 [2024-12-11 10:06:37.935845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.549 [2024-12-11 10:06:37.935876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.549 qpair failed and we were unable to recover it. 00:28:28.549 [2024-12-11 10:06:37.936058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.549 [2024-12-11 10:06:37.936090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.549 qpair failed and we were unable to recover it. 00:28:28.549 [2024-12-11 10:06:37.936372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.549 [2024-12-11 10:06:37.936405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.549 qpair failed and we were unable to recover it. 00:28:28.549 [2024-12-11 10:06:37.936616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.549 [2024-12-11 10:06:37.936647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.549 qpair failed and we were unable to recover it. 00:28:28.549 [2024-12-11 10:06:37.936851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.549 [2024-12-11 10:06:37.936882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.549 qpair failed and we were unable to recover it. 00:28:28.549 [2024-12-11 10:06:37.937073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.549 [2024-12-11 10:06:37.937104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.549 qpair failed and we were unable to recover it. 00:28:28.549 [2024-12-11 10:06:37.937371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.549 [2024-12-11 10:06:37.937403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.549 qpair failed and we were unable to recover it. 
00:28:28.549 [2024-12-11 10:06:37.937612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.549 [2024-12-11 10:06:37.937644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.549 qpair failed and we were unable to recover it. 00:28:28.549 [2024-12-11 10:06:37.937777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.549 [2024-12-11 10:06:37.937808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.549 qpair failed and we were unable to recover it. 00:28:28.549 [2024-12-11 10:06:37.938067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.549 [2024-12-11 10:06:37.938098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.549 qpair failed and we were unable to recover it. 00:28:28.549 [2024-12-11 10:06:37.938369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.549 [2024-12-11 10:06:37.938402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.549 qpair failed and we were unable to recover it. 00:28:28.549 [2024-12-11 10:06:37.938618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.549 [2024-12-11 10:06:37.938649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.549 qpair failed and we were unable to recover it. 00:28:28.549 [2024-12-11 10:06:37.938954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.549 [2024-12-11 10:06:37.938986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.549 qpair failed and we were unable to recover it. 00:28:28.549 [2024-12-11 10:06:37.939186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.549 [2024-12-11 10:06:37.939245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.549 qpair failed and we were unable to recover it. 00:28:28.549 [2024-12-11 10:06:37.939441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.549 [2024-12-11 10:06:37.939472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.549 qpair failed and we were unable to recover it. 00:28:28.549 [2024-12-11 10:06:37.939667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.549 [2024-12-11 10:06:37.939699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.549 qpair failed and we were unable to recover it. 00:28:28.549 [2024-12-11 10:06:37.939924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.549 [2024-12-11 10:06:37.939954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.549 qpair failed and we were unable to recover it. 
00:28:28.549 [2024-12-11 10:06:37.940208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.549 [2024-12-11 10:06:37.940247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.549 qpair failed and we were unable to recover it. 00:28:28.549 [2024-12-11 10:06:37.940383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.549 [2024-12-11 10:06:37.940415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.549 qpair failed and we were unable to recover it. 00:28:28.549 [2024-12-11 10:06:37.940637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.549 [2024-12-11 10:06:37.940668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.549 qpair failed and we were unable to recover it. 00:28:28.549 [2024-12-11 10:06:37.940821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.549 [2024-12-11 10:06:37.940852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.549 qpair failed and we were unable to recover it. 00:28:28.549 [2024-12-11 10:06:37.941133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.549 [2024-12-11 10:06:37.941164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.549 qpair failed and we were unable to recover it. 00:28:28.549 [2024-12-11 10:06:37.941430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.549 [2024-12-11 10:06:37.941461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.549 qpair failed and we were unable to recover it. 00:28:28.549 [2024-12-11 10:06:37.941641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.549 [2024-12-11 10:06:37.941672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.549 qpair failed and we were unable to recover it. 00:28:28.549 [2024-12-11 10:06:37.941971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.549 [2024-12-11 10:06:37.942003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.549 qpair failed and we were unable to recover it. 00:28:28.549 [2024-12-11 10:06:37.942211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.549 [2024-12-11 10:06:37.942251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.549 qpair failed and we were unable to recover it. 00:28:28.549 [2024-12-11 10:06:37.942445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.549 [2024-12-11 10:06:37.942476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.549 qpair failed and we were unable to recover it. 
00:28:28.549 [2024-12-11 10:06:37.942623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.549 [2024-12-11 10:06:37.942655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.549 qpair failed and we were unable to recover it. 00:28:28.549 [2024-12-11 10:06:37.942938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.549 [2024-12-11 10:06:37.942969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.550 qpair failed and we were unable to recover it. 00:28:28.550 [2024-12-11 10:06:37.943231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.550 [2024-12-11 10:06:37.943263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.550 qpair failed and we were unable to recover it. 00:28:28.550 [2024-12-11 10:06:37.943541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.550 [2024-12-11 10:06:37.943572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.550 qpair failed and we were unable to recover it. 00:28:28.550 [2024-12-11 10:06:37.943777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.550 [2024-12-11 10:06:37.943808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.550 qpair failed and we were unable to recover it. 00:28:28.550 [2024-12-11 10:06:37.944082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.550 [2024-12-11 10:06:37.944113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.550 qpair failed and we were unable to recover it. 00:28:28.550 [2024-12-11 10:06:37.944370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.550 [2024-12-11 10:06:37.944403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.550 qpair failed and we were unable to recover it. 00:28:28.550 [2024-12-11 10:06:37.944622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.550 [2024-12-11 10:06:37.944653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.550 qpair failed and we were unable to recover it. 00:28:28.550 [2024-12-11 10:06:37.944860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.550 [2024-12-11 10:06:37.944891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.550 qpair failed and we were unable to recover it. 00:28:28.550 [2024-12-11 10:06:37.945170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.550 [2024-12-11 10:06:37.945201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.550 qpair failed and we were unable to recover it. 
00:28:28.550 [2024-12-11 10:06:37.945353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.550 [2024-12-11 10:06:37.945385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.550 qpair failed and we were unable to recover it. 00:28:28.550 [2024-12-11 10:06:37.945639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.550 [2024-12-11 10:06:37.945675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.550 qpair failed and we were unable to recover it. 00:28:28.550 [2024-12-11 10:06:37.945878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.550 [2024-12-11 10:06:37.945909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.550 qpair failed and we were unable to recover it. 00:28:28.550 [2024-12-11 10:06:37.946089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.550 [2024-12-11 10:06:37.946120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.550 qpair failed and we were unable to recover it. 00:28:28.550 [2024-12-11 10:06:37.946375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.550 [2024-12-11 10:06:37.946408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.550 qpair failed and we were unable to recover it. 00:28:28.550 [2024-12-11 10:06:37.946617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.550 [2024-12-11 10:06:37.946649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.550 qpair failed and we were unable to recover it. 00:28:28.550 [2024-12-11 10:06:37.946946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.550 [2024-12-11 10:06:37.946977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.550 qpair failed and we were unable to recover it. 00:28:28.550 [2024-12-11 10:06:37.947171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.550 [2024-12-11 10:06:37.947203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.550 qpair failed and we were unable to recover it. 00:28:28.550 [2024-12-11 10:06:37.947460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.550 [2024-12-11 10:06:37.947492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.550 qpair failed and we were unable to recover it. 00:28:28.550 [2024-12-11 10:06:37.947698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.550 [2024-12-11 10:06:37.947730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.550 qpair failed and we were unable to recover it. 
00:28:28.550 [2024-12-11 10:06:37.948028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.550 [2024-12-11 10:06:37.948059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.550 qpair failed and we were unable to recover it. 00:28:28.550 [2024-12-11 10:06:37.948362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.550 [2024-12-11 10:06:37.948396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.550 qpair failed and we were unable to recover it. 00:28:28.550 [2024-12-11 10:06:37.948581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.550 [2024-12-11 10:06:37.948612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.550 qpair failed and we were unable to recover it. 00:28:28.550 [2024-12-11 10:06:37.948817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.550 [2024-12-11 10:06:37.948848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.550 qpair failed and we were unable to recover it. 00:28:28.550 [2024-12-11 10:06:37.949128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.550 [2024-12-11 10:06:37.949159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.550 qpair failed and we were unable to recover it. 00:28:28.550 [2024-12-11 10:06:37.949434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.550 [2024-12-11 10:06:37.949466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.550 qpair failed and we were unable to recover it. 00:28:28.550 [2024-12-11 10:06:37.949600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.550 [2024-12-11 10:06:37.949631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.550 qpair failed and we were unable to recover it. 00:28:28.550 [2024-12-11 10:06:37.949835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.550 [2024-12-11 10:06:37.949866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.550 qpair failed and we were unable to recover it. 00:28:28.550 [2024-12-11 10:06:37.950141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.550 [2024-12-11 10:06:37.950172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.550 qpair failed and we were unable to recover it. 00:28:28.550 [2024-12-11 10:06:37.950452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.550 [2024-12-11 10:06:37.950484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.550 qpair failed and we were unable to recover it. 
00:28:28.550 [2024-12-11 10:06:37.950720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.550 [2024-12-11 10:06:37.950752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.550 qpair failed and we were unable to recover it. 00:28:28.550 [2024-12-11 10:06:37.951047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.550 [2024-12-11 10:06:37.951077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.550 qpair failed and we were unable to recover it. 00:28:28.550 [2024-12-11 10:06:37.951349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.550 [2024-12-11 10:06:37.951382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.550 qpair failed and we were unable to recover it. 00:28:28.550 [2024-12-11 10:06:37.951641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.550 [2024-12-11 10:06:37.951673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.550 qpair failed and we were unable to recover it. 00:28:28.550 [2024-12-11 10:06:37.951978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.550 [2024-12-11 10:06:37.952008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.550 qpair failed and we were unable to recover it. 00:28:28.550 [2024-12-11 10:06:37.952276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.550 [2024-12-11 10:06:37.952309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.550 qpair failed and we were unable to recover it. 00:28:28.550 [2024-12-11 10:06:37.952538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.550 [2024-12-11 10:06:37.952569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.551 qpair failed and we were unable to recover it. 00:28:28.551 [2024-12-11 10:06:37.952773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.551 [2024-12-11 10:06:37.952804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.551 qpair failed and we were unable to recover it. 00:28:28.551 [2024-12-11 10:06:37.953004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.551 [2024-12-11 10:06:37.953035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.551 qpair failed and we were unable to recover it. 00:28:28.551 [2024-12-11 10:06:37.953317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.551 [2024-12-11 10:06:37.953350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.551 qpair failed and we were unable to recover it. 
00:28:28.551 [2024-12-11 10:06:37.953550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.551 [2024-12-11 10:06:37.953581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.551 qpair failed and we were unable to recover it. 00:28:28.551 [2024-12-11 10:06:37.953786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.551 [2024-12-11 10:06:37.953817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.551 qpair failed and we were unable to recover it. 00:28:28.551 [2024-12-11 10:06:37.954100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.551 [2024-12-11 10:06:37.954132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.551 qpair failed and we were unable to recover it. 00:28:28.551 [2024-12-11 10:06:37.954326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.551 [2024-12-11 10:06:37.954357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.551 qpair failed and we were unable to recover it. 00:28:28.551 [2024-12-11 10:06:37.954610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.551 [2024-12-11 10:06:37.954641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.551 qpair failed and we were unable to recover it. 00:28:28.551 [2024-12-11 10:06:37.954840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.551 [2024-12-11 10:06:37.954872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.551 qpair failed and we were unable to recover it. 00:28:28.551 [2024-12-11 10:06:37.955065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.551 [2024-12-11 10:06:37.955096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.551 qpair failed and we were unable to recover it. 00:28:28.551 [2024-12-11 10:06:37.955292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.551 [2024-12-11 10:06:37.955325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.551 qpair failed and we were unable to recover it. 00:28:28.551 [2024-12-11 10:06:37.955604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.551 [2024-12-11 10:06:37.955636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.551 qpair failed and we were unable to recover it. 00:28:28.551 [2024-12-11 10:06:37.955847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.551 [2024-12-11 10:06:37.955877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.551 qpair failed and we were unable to recover it. 
00:28:28.551 [2024-12-11 10:06:37.956146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.551 [2024-12-11 10:06:37.956178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.551 qpair failed and we were unable to recover it. 00:28:28.551 [2024-12-11 10:06:37.956462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.551 [2024-12-11 10:06:37.956500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.551 qpair failed and we were unable to recover it. 00:28:28.551 [2024-12-11 10:06:37.956777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.551 [2024-12-11 10:06:37.956808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.551 qpair failed and we were unable to recover it. 00:28:28.551 [2024-12-11 10:06:37.957061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.551 [2024-12-11 10:06:37.957091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.551 qpair failed and we were unable to recover it. 00:28:28.551 [2024-12-11 10:06:37.957295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.551 [2024-12-11 10:06:37.957328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.551 qpair failed and we were unable to recover it. 00:28:28.551 [2024-12-11 10:06:37.957535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.551 [2024-12-11 10:06:37.957567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.551 qpair failed and we were unable to recover it. 00:28:28.551 [2024-12-11 10:06:37.957852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.551 [2024-12-11 10:06:37.957882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.551 qpair failed and we were unable to recover it. 00:28:28.551 [2024-12-11 10:06:37.958089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.551 [2024-12-11 10:06:37.958121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.551 qpair failed and we were unable to recover it. 00:28:28.551 [2024-12-11 10:06:37.958351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.551 [2024-12-11 10:06:37.958383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.551 qpair failed and we were unable to recover it. 00:28:28.551 [2024-12-11 10:06:37.958599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.551 [2024-12-11 10:06:37.958630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.551 qpair failed and we were unable to recover it. 
00:28:28.551 [2024-12-11 10:06:37.958816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.551 [2024-12-11 10:06:37.958847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.551 qpair failed and we were unable to recover it. 00:28:28.551 [2024-12-11 10:06:37.959028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.551 [2024-12-11 10:06:37.959059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.551 qpair failed and we were unable to recover it. 00:28:28.551 [2024-12-11 10:06:37.959277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.551 [2024-12-11 10:06:37.959309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.551 qpair failed and we were unable to recover it. 00:28:28.551 [2024-12-11 10:06:37.959501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.551 [2024-12-11 10:06:37.959532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.551 qpair failed and we were unable to recover it. 00:28:28.551 [2024-12-11 10:06:37.959648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.551 [2024-12-11 10:06:37.959679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.551 qpair failed and we were unable to recover it. 00:28:28.551 [2024-12-11 10:06:37.959914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.551 [2024-12-11 10:06:37.959946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.551 qpair failed and we were unable to recover it. 00:28:28.551 [2024-12-11 10:06:37.960155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.551 [2024-12-11 10:06:37.960185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.552 qpair failed and we were unable to recover it. 00:28:28.552 [2024-12-11 10:06:37.960379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.552 [2024-12-11 10:06:37.960412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.552 qpair failed and we were unable to recover it. 00:28:28.552 [2024-12-11 10:06:37.960689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.552 [2024-12-11 10:06:37.960719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.552 qpair failed and we were unable to recover it. 00:28:28.552 [2024-12-11 10:06:37.960872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.552 [2024-12-11 10:06:37.960903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.552 qpair failed and we were unable to recover it. 
00:28:28.552 [2024-12-11 10:06:37.961179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.552 [2024-12-11 10:06:37.961210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.552 qpair failed and we were unable to recover it. 00:28:28.552 [2024-12-11 10:06:37.961365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.552 [2024-12-11 10:06:37.961397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.552 qpair failed and we were unable to recover it. 00:28:28.552 [2024-12-11 10:06:37.961678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.552 [2024-12-11 10:06:37.961709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.552 qpair failed and we were unable to recover it. 00:28:28.552 [2024-12-11 10:06:37.961988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.552 [2024-12-11 10:06:37.962019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.552 qpair failed and we were unable to recover it. 00:28:28.552 [2024-12-11 10:06:37.962308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.552 [2024-12-11 10:06:37.962341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.552 qpair failed and we were unable to recover it. 00:28:28.552 [2024-12-11 10:06:37.962458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.552 [2024-12-11 10:06:37.962490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.552 qpair failed and we were unable to recover it. 00:28:28.552 [2024-12-11 10:06:37.962686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.552 [2024-12-11 10:06:37.962717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.552 qpair failed and we were unable to recover it. 00:28:28.552 [2024-12-11 10:06:37.962994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.552 [2024-12-11 10:06:37.963024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.552 qpair failed and we were unable to recover it. 00:28:28.552 [2024-12-11 10:06:37.963339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.552 [2024-12-11 10:06:37.963371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.552 qpair failed and we were unable to recover it. 00:28:28.552 [2024-12-11 10:06:37.963632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.552 [2024-12-11 10:06:37.963664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.552 qpair failed and we were unable to recover it. 
00:28:28.552 [2024-12-11 10:06:37.963776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.552 [2024-12-11 10:06:37.963807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.552 qpair failed and we were unable to recover it. 00:28:28.552 [2024-12-11 10:06:37.964078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.552 [2024-12-11 10:06:37.964109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.552 qpair failed and we were unable to recover it. 00:28:28.552 [2024-12-11 10:06:37.964310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.552 [2024-12-11 10:06:37.964343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.552 qpair failed and we were unable to recover it. 00:28:28.552 [2024-12-11 10:06:37.964480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.552 [2024-12-11 10:06:37.964511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.552 qpair failed and we were unable to recover it. 00:28:28.552 [2024-12-11 10:06:37.964791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.552 [2024-12-11 10:06:37.964822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.552 qpair failed and we were unable to recover it. 00:28:28.552 [2024-12-11 10:06:37.965004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.552 [2024-12-11 10:06:37.965035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.552 qpair failed and we were unable to recover it. 00:28:28.552 [2024-12-11 10:06:37.965314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.552 [2024-12-11 10:06:37.965347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.552 qpair failed and we were unable to recover it. 00:28:28.552 [2024-12-11 10:06:37.965605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.552 [2024-12-11 10:06:37.965636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.552 qpair failed and we were unable to recover it. 00:28:28.552 [2024-12-11 10:06:37.965895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.552 [2024-12-11 10:06:37.965926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.552 qpair failed and we were unable to recover it. 00:28:28.552 [2024-12-11 10:06:37.966237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.552 [2024-12-11 10:06:37.966271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.552 qpair failed and we were unable to recover it. 
00:28:28.552 [2024-12-11 10:06:37.966532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.552 [2024-12-11 10:06:37.966563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:28.552 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error / "qpair failed and we were unable to recover it.") repeats back-to-back for tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 up to 10:06:37.975179 ...]
00:28:28.553 [2024-12-11 10:06:37.975556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.553 [2024-12-11 10:06:37.975627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.553 qpair failed and we were unable to recover it.
[... the same three-line failure then repeats back-to-back for tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 through 10:06:38.023748 (console timestamps 00:28:28.553-00:28:28.558) ...]
00:28:28.558 [2024-12-11 10:06:38.024035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.558 [2024-12-11 10:06:38.024067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.558 qpair failed and we were unable to recover it. 00:28:28.558 [2024-12-11 10:06:38.024375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.558 [2024-12-11 10:06:38.024409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.558 qpair failed and we were unable to recover it. 00:28:28.558 [2024-12-11 10:06:38.024686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.558 [2024-12-11 10:06:38.024718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.558 qpair failed and we were unable to recover it. 00:28:28.558 [2024-12-11 10:06:38.025024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.558 [2024-12-11 10:06:38.025056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.558 qpair failed and we were unable to recover it. 00:28:28.559 [2024-12-11 10:06:38.025322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.559 [2024-12-11 10:06:38.025357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.559 qpair failed and we were unable to recover it. 00:28:28.559 [2024-12-11 10:06:38.025571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.559 [2024-12-11 10:06:38.025603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.559 qpair failed and we were unable to recover it. 00:28:28.559 [2024-12-11 10:06:38.025801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.559 [2024-12-11 10:06:38.025833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.559 qpair failed and we were unable to recover it. 00:28:28.559 [2024-12-11 10:06:38.026133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.559 [2024-12-11 10:06:38.026164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.559 qpair failed and we were unable to recover it. 00:28:28.559 [2024-12-11 10:06:38.026393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.559 [2024-12-11 10:06:38.026427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.559 qpair failed and we were unable to recover it. 00:28:28.559 [2024-12-11 10:06:38.026704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.559 [2024-12-11 10:06:38.026736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.559 qpair failed and we were unable to recover it. 
00:28:28.559 [2024-12-11 10:06:38.027019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.559 [2024-12-11 10:06:38.027052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.559 qpair failed and we were unable to recover it. 00:28:28.559 [2024-12-11 10:06:38.027250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.559 [2024-12-11 10:06:38.027283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.559 qpair failed and we were unable to recover it. 00:28:28.559 [2024-12-11 10:06:38.027567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.559 [2024-12-11 10:06:38.027600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.559 qpair failed and we were unable to recover it. 00:28:28.559 [2024-12-11 10:06:38.027919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.559 [2024-12-11 10:06:38.027951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.559 qpair failed and we were unable to recover it. 00:28:28.559 [2024-12-11 10:06:38.028135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.559 [2024-12-11 10:06:38.028167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.559 qpair failed and we were unable to recover it. 00:28:28.559 [2024-12-11 10:06:38.028373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.559 [2024-12-11 10:06:38.028406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.559 qpair failed and we were unable to recover it. 00:28:28.559 [2024-12-11 10:06:38.028661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.559 [2024-12-11 10:06:38.028694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.559 qpair failed and we were unable to recover it. 00:28:28.559 [2024-12-11 10:06:38.028876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.559 [2024-12-11 10:06:38.028907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.559 qpair failed and we were unable to recover it. 00:28:28.559 [2024-12-11 10:06:38.029115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.559 [2024-12-11 10:06:38.029148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.559 qpair failed and we were unable to recover it. 00:28:28.559 [2024-12-11 10:06:38.029334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.559 [2024-12-11 10:06:38.029368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.559 qpair failed and we were unable to recover it. 
00:28:28.559 [2024-12-11 10:06:38.029640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.559 [2024-12-11 10:06:38.029671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.559 qpair failed and we were unable to recover it. 00:28:28.559 [2024-12-11 10:06:38.029871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.559 [2024-12-11 10:06:38.029903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.559 qpair failed and we were unable to recover it. 00:28:28.559 [2024-12-11 10:06:38.030102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.559 [2024-12-11 10:06:38.030134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.559 qpair failed and we were unable to recover it. 00:28:28.559 [2024-12-11 10:06:38.030317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.559 [2024-12-11 10:06:38.030351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.559 qpair failed and we were unable to recover it. 00:28:28.559 [2024-12-11 10:06:38.030557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.559 [2024-12-11 10:06:38.030589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.559 qpair failed and we were unable to recover it. 00:28:28.559 [2024-12-11 10:06:38.030814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.559 [2024-12-11 10:06:38.030846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.559 qpair failed and we were unable to recover it. 00:28:28.559 [2024-12-11 10:06:38.031126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.559 [2024-12-11 10:06:38.031158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.559 qpair failed and we were unable to recover it. 00:28:28.559 [2024-12-11 10:06:38.031449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.559 [2024-12-11 10:06:38.031482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.559 qpair failed and we were unable to recover it. 00:28:28.559 [2024-12-11 10:06:38.031697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.559 [2024-12-11 10:06:38.031730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.559 qpair failed and we were unable to recover it. 00:28:28.559 [2024-12-11 10:06:38.031921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.559 [2024-12-11 10:06:38.031954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.559 qpair failed and we were unable to recover it. 
00:28:28.559 [2024-12-11 10:06:38.032138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.559 [2024-12-11 10:06:38.032170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.559 qpair failed and we were unable to recover it. 00:28:28.559 [2024-12-11 10:06:38.032374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.559 [2024-12-11 10:06:38.032413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.559 qpair failed and we were unable to recover it. 00:28:28.559 [2024-12-11 10:06:38.032694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.559 [2024-12-11 10:06:38.032727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.559 qpair failed and we were unable to recover it. 00:28:28.559 [2024-12-11 10:06:38.032998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.559 [2024-12-11 10:06:38.033030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.559 qpair failed and we were unable to recover it. 00:28:28.559 [2024-12-11 10:06:38.033302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.559 [2024-12-11 10:06:38.033336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.559 qpair failed and we were unable to recover it. 00:28:28.559 [2024-12-11 10:06:38.033631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.559 [2024-12-11 10:06:38.033664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.559 qpair failed and we were unable to recover it. 00:28:28.559 [2024-12-11 10:06:38.033967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.559 [2024-12-11 10:06:38.033999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.559 qpair failed and we were unable to recover it. 00:28:28.559 [2024-12-11 10:06:38.034266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.559 [2024-12-11 10:06:38.034300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.559 qpair failed and we were unable to recover it. 00:28:28.559 [2024-12-11 10:06:38.034524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.560 [2024-12-11 10:06:38.034556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.560 qpair failed and we were unable to recover it. 00:28:28.560 [2024-12-11 10:06:38.034793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.560 [2024-12-11 10:06:38.034825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.560 qpair failed and we were unable to recover it. 
00:28:28.560 [2024-12-11 10:06:38.035101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.560 [2024-12-11 10:06:38.035133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.560 qpair failed and we were unable to recover it. 00:28:28.560 [2024-12-11 10:06:38.035389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.560 [2024-12-11 10:06:38.035423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.560 qpair failed and we were unable to recover it. 00:28:28.560 [2024-12-11 10:06:38.035705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.560 [2024-12-11 10:06:38.035737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.560 qpair failed and we were unable to recover it. 00:28:28.560 [2024-12-11 10:06:38.035930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.560 [2024-12-11 10:06:38.035962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.560 qpair failed and we were unable to recover it. 00:28:28.560 [2024-12-11 10:06:38.036592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.560 [2024-12-11 10:06:38.036625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.560 qpair failed and we were unable to recover it. 00:28:28.560 [2024-12-11 10:06:38.036911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.560 [2024-12-11 10:06:38.036944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.560 qpair failed and we were unable to recover it. 00:28:28.560 [2024-12-11 10:06:38.037247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.560 [2024-12-11 10:06:38.037281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.560 qpair failed and we were unable to recover it. 00:28:28.560 [2024-12-11 10:06:38.037518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.560 [2024-12-11 10:06:38.037552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.560 qpair failed and we were unable to recover it. 00:28:28.560 [2024-12-11 10:06:38.037870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.560 [2024-12-11 10:06:38.037902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.560 qpair failed and we were unable to recover it. 00:28:28.560 [2024-12-11 10:06:38.038104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.560 [2024-12-11 10:06:38.038136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.560 qpair failed and we were unable to recover it. 
00:28:28.560 [2024-12-11 10:06:38.038345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.560 [2024-12-11 10:06:38.038379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.560 qpair failed and we were unable to recover it. 00:28:28.560 [2024-12-11 10:06:38.038689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.560 [2024-12-11 10:06:38.038721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.560 qpair failed and we were unable to recover it. 00:28:28.560 [2024-12-11 10:06:38.038997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.560 [2024-12-11 10:06:38.039030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.560 qpair failed and we were unable to recover it. 00:28:28.560 [2024-12-11 10:06:38.039213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.560 [2024-12-11 10:06:38.039256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.560 qpair failed and we were unable to recover it. 00:28:28.560 [2024-12-11 10:06:38.039468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.560 [2024-12-11 10:06:38.039500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.560 qpair failed and we were unable to recover it. 00:28:28.560 [2024-12-11 10:06:38.039646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.560 [2024-12-11 10:06:38.039678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.560 qpair failed and we were unable to recover it. 00:28:28.560 [2024-12-11 10:06:38.039860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.560 [2024-12-11 10:06:38.039893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.560 qpair failed and we were unable to recover it. 00:28:28.560 [2024-12-11 10:06:38.040122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.560 [2024-12-11 10:06:38.040155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.560 qpair failed and we were unable to recover it. 00:28:28.560 [2024-12-11 10:06:38.040484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.560 [2024-12-11 10:06:38.040563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.560 qpair failed and we were unable to recover it. 00:28:28.560 [2024-12-11 10:06:38.040847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.560 [2024-12-11 10:06:38.040884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.560 qpair failed and we were unable to recover it. 
00:28:28.560 [2024-12-11 10:06:38.041199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.560 [2024-12-11 10:06:38.041246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.560 qpair failed and we were unable to recover it. 00:28:28.560 [2024-12-11 10:06:38.041430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.560 [2024-12-11 10:06:38.041462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.560 qpair failed and we were unable to recover it. 00:28:28.560 [2024-12-11 10:06:38.041665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.560 [2024-12-11 10:06:38.041698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.560 qpair failed and we were unable to recover it. 00:28:28.560 [2024-12-11 10:06:38.041893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.560 [2024-12-11 10:06:38.041925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.560 qpair failed and we were unable to recover it. 00:28:28.560 [2024-12-11 10:06:38.042238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.560 [2024-12-11 10:06:38.042272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.560 qpair failed and we were unable to recover it. 00:28:28.560 [2024-12-11 10:06:38.042561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.560 [2024-12-11 10:06:38.042594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.560 qpair failed and we were unable to recover it. 00:28:28.560 [2024-12-11 10:06:38.042798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.560 [2024-12-11 10:06:38.042829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.560 qpair failed and we were unable to recover it. 00:28:28.560 [2024-12-11 10:06:38.043025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.560 [2024-12-11 10:06:38.043057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.560 qpair failed and we were unable to recover it. 00:28:28.560 [2024-12-11 10:06:38.043331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.560 [2024-12-11 10:06:38.043365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.560 qpair failed and we were unable to recover it. 00:28:28.560 [2024-12-11 10:06:38.043575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.560 [2024-12-11 10:06:38.043608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.560 qpair failed and we were unable to recover it. 
00:28:28.560 [2024-12-11 10:06:38.043879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.560 [2024-12-11 10:06:38.043910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.560 qpair failed and we were unable to recover it. 00:28:28.560 [2024-12-11 10:06:38.044093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.560 [2024-12-11 10:06:38.044126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.560 qpair failed and we were unable to recover it. 00:28:28.560 [2024-12-11 10:06:38.044352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.560 [2024-12-11 10:06:38.044385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.560 qpair failed and we were unable to recover it. 00:28:28.560 [2024-12-11 10:06:38.044650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.561 [2024-12-11 10:06:38.044683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.561 qpair failed and we were unable to recover it. 00:28:28.561 [2024-12-11 10:06:38.044888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.561 [2024-12-11 10:06:38.044921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.561 qpair failed and we were unable to recover it. 00:28:28.561 [2024-12-11 10:06:38.045205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.561 [2024-12-11 10:06:38.045246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.561 qpair failed and we were unable to recover it. 00:28:28.561 [2024-12-11 10:06:38.045523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.561 [2024-12-11 10:06:38.045555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.561 qpair failed and we were unable to recover it. 00:28:28.561 [2024-12-11 10:06:38.045837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.561 [2024-12-11 10:06:38.045869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.561 qpair failed and we were unable to recover it. 00:28:28.561 [2024-12-11 10:06:38.046096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.561 [2024-12-11 10:06:38.046128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.561 qpair failed and we were unable to recover it. 00:28:28.561 [2024-12-11 10:06:38.046382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.561 [2024-12-11 10:06:38.046416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.561 qpair failed and we were unable to recover it. 
00:28:28.561 [2024-12-11 10:06:38.046638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.561 [2024-12-11 10:06:38.046670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.561 qpair failed and we were unable to recover it. 00:28:28.561 [2024-12-11 10:06:38.046942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.561 [2024-12-11 10:06:38.046974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.561 qpair failed and we were unable to recover it. 00:28:28.561 [2024-12-11 10:06:38.047171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.561 [2024-12-11 10:06:38.047203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.561 qpair failed and we were unable to recover it. 00:28:28.561 [2024-12-11 10:06:38.047412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.561 [2024-12-11 10:06:38.047444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.561 qpair failed and we were unable to recover it. 00:28:28.561 [2024-12-11 10:06:38.047644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.561 [2024-12-11 10:06:38.047676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.561 qpair failed and we were unable to recover it. 00:28:28.561 [2024-12-11 10:06:38.047879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.561 [2024-12-11 10:06:38.047918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.561 qpair failed and we were unable to recover it. 00:28:28.561 [2024-12-11 10:06:38.048191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.561 [2024-12-11 10:06:38.048232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.561 qpair failed and we were unable to recover it. 00:28:28.561 [2024-12-11 10:06:38.048448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.561 [2024-12-11 10:06:38.048480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.561 qpair failed and we were unable to recover it. 00:28:28.561 [2024-12-11 10:06:38.048738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.561 [2024-12-11 10:06:38.048770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.561 qpair failed and we were unable to recover it. 00:28:28.561 [2024-12-11 10:06:38.048970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.561 [2024-12-11 10:06:38.049001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.561 qpair failed and we were unable to recover it. 
00:28:28.561 [2024-12-11 10:06:38.049200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.561 [2024-12-11 10:06:38.049243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.561 qpair failed and we were unable to recover it. 00:28:28.561 [2024-12-11 10:06:38.049503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.561 [2024-12-11 10:06:38.049536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.561 qpair failed and we were unable to recover it. 00:28:28.561 [2024-12-11 10:06:38.049827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.561 [2024-12-11 10:06:38.049859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.561 qpair failed and we were unable to recover it. 00:28:28.561 [2024-12-11 10:06:38.050061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.561 [2024-12-11 10:06:38.050093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.561 qpair failed and we were unable to recover it. 00:28:28.561 [2024-12-11 10:06:38.050378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.561 [2024-12-11 10:06:38.050411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.561 qpair failed and we were unable to recover it. 00:28:28.561 [2024-12-11 10:06:38.050608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.561 [2024-12-11 10:06:38.050640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.561 qpair failed and we were unable to recover it. 00:28:28.561 [2024-12-11 10:06:38.050851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.561 [2024-12-11 10:06:38.050883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.561 qpair failed and we were unable to recover it. 00:28:28.561 [2024-12-11 10:06:38.051151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.561 [2024-12-11 10:06:38.051183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.561 qpair failed and we were unable to recover it. 00:28:28.561 [2024-12-11 10:06:38.051376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.561 [2024-12-11 10:06:38.051409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.561 qpair failed and we were unable to recover it. 00:28:28.561 [2024-12-11 10:06:38.051676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.561 [2024-12-11 10:06:38.051709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.561 qpair failed and we were unable to recover it. 
00:28:28.561 [2024-12-11 10:06:38.051924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.561 [2024-12-11 10:06:38.051955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.562 qpair failed and we were unable to recover it. 00:28:28.562 [2024-12-11 10:06:38.052107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.562 [2024-12-11 10:06:38.052138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.562 qpair failed and we were unable to recover it. 00:28:28.562 [2024-12-11 10:06:38.052330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.562 [2024-12-11 10:06:38.052362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.562 qpair failed and we were unable to recover it. 00:28:28.562 [2024-12-11 10:06:38.052613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.562 [2024-12-11 10:06:38.052645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.562 qpair failed and we were unable to recover it. 00:28:28.562 [2024-12-11 10:06:38.052850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.562 [2024-12-11 10:06:38.052880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.562 qpair failed and we were unable to recover it. 00:28:28.562 [2024-12-11 10:06:38.053069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.562 [2024-12-11 10:06:38.053101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.562 qpair failed and we were unable to recover it. 00:28:28.562 [2024-12-11 10:06:38.053308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.562 [2024-12-11 10:06:38.053341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.562 qpair failed and we were unable to recover it. 00:28:28.562 [2024-12-11 10:06:38.053606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.562 [2024-12-11 10:06:38.053639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.562 qpair failed and we were unable to recover it. 00:28:28.562 [2024-12-11 10:06:38.053847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.562 [2024-12-11 10:06:38.053879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.562 qpair failed and we were unable to recover it. 00:28:28.562 [2024-12-11 10:06:38.054158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.562 [2024-12-11 10:06:38.054189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.562 qpair failed and we were unable to recover it. 
00:28:28.562 [2024-12-11 10:06:38.054494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.562 [2024-12-11 10:06:38.054527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.562 qpair failed and we were unable to recover it. 00:28:28.562 [2024-12-11 10:06:38.054712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.562 [2024-12-11 10:06:38.054745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.562 qpair failed and we were unable to recover it. 00:28:28.562 [2024-12-11 10:06:38.055021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.562 [2024-12-11 10:06:38.055053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.562 qpair failed and we were unable to recover it. 00:28:28.562 [2024-12-11 10:06:38.055383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.562 [2024-12-11 10:06:38.055418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.562 qpair failed and we were unable to recover it. 00:28:28.562 [2024-12-11 10:06:38.055616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.562 [2024-12-11 10:06:38.055648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.562 qpair failed and we were unable to recover it. 00:28:28.562 [2024-12-11 10:06:38.055849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.562 [2024-12-11 10:06:38.055881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.562 qpair failed and we were unable to recover it. 00:28:28.562 [2024-12-11 10:06:38.056155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.562 [2024-12-11 10:06:38.056188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.562 qpair failed and we were unable to recover it. 00:28:28.562 [2024-12-11 10:06:38.056453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.562 [2024-12-11 10:06:38.056486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.562 qpair failed and we were unable to recover it. 00:28:28.562 [2024-12-11 10:06:38.056760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.562 [2024-12-11 10:06:38.056793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.562 qpair failed and we were unable to recover it. 00:28:28.562 [2024-12-11 10:06:38.057070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.562 [2024-12-11 10:06:38.057101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.562 qpair failed and we were unable to recover it. 
00:28:28.562 [2024-12-11 10:06:38.057354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.562 [2024-12-11 10:06:38.057389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.562 qpair failed and we were unable to recover it. 00:28:28.562 [2024-12-11 10:06:38.057650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.562 [2024-12-11 10:06:38.057682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.562 qpair failed and we were unable to recover it. 00:28:28.562 [2024-12-11 10:06:38.057800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.562 [2024-12-11 10:06:38.057832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.562 qpair failed and we were unable to recover it. 00:28:28.562 [2024-12-11 10:06:38.058110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.562 [2024-12-11 10:06:38.058143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.562 qpair failed and we were unable to recover it. 00:28:28.562 [2024-12-11 10:06:38.058398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.562 [2024-12-11 10:06:38.058431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.562 qpair failed and we were unable to recover it. 00:28:28.562 [2024-12-11 10:06:38.058722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.562 [2024-12-11 10:06:38.058755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.562 qpair failed and we were unable to recover it. 00:28:28.562 [2024-12-11 10:06:38.058976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.562 [2024-12-11 10:06:38.059014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.562 qpair failed and we were unable to recover it. 00:28:28.562 [2024-12-11 10:06:38.059270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.562 [2024-12-11 10:06:38.059303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.562 qpair failed and we were unable to recover it. 00:28:28.562 [2024-12-11 10:06:38.059506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.562 [2024-12-11 10:06:38.059538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.562 qpair failed and we were unable to recover it. 00:28:28.562 [2024-12-11 10:06:38.059729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.562 [2024-12-11 10:06:38.059761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.562 qpair failed and we were unable to recover it. 
00:28:28.562 [2024-12-11 10:06:38.059954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.562 [2024-12-11 10:06:38.059986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.562 qpair failed and we were unable to recover it. 00:28:28.562 [2024-12-11 10:06:38.060197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.562 [2024-12-11 10:06:38.060240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.562 qpair failed and we were unable to recover it. 00:28:28.562 [2024-12-11 10:06:38.060392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.562 [2024-12-11 10:06:38.060425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.562 qpair failed and we were unable to recover it. 00:28:28.562 [2024-12-11 10:06:38.060697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.562 [2024-12-11 10:06:38.060728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.562 qpair failed and we were unable to recover it. 00:28:28.562 [2024-12-11 10:06:38.060989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.562 [2024-12-11 10:06:38.061020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.562 qpair failed and we were unable to recover it. 00:28:28.563 [2024-12-11 10:06:38.061298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.563 [2024-12-11 10:06:38.061333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.563 qpair failed and we were unable to recover it. 00:28:28.563 [2024-12-11 10:06:38.061620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.563 [2024-12-11 10:06:38.061651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.563 qpair failed and we were unable to recover it. 00:28:28.563 [2024-12-11 10:06:38.061858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.563 [2024-12-11 10:06:38.061890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.563 qpair failed and we were unable to recover it. 00:28:28.563 [2024-12-11 10:06:38.062083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.563 [2024-12-11 10:06:38.062116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.563 qpair failed and we were unable to recover it. 00:28:28.563 [2024-12-11 10:06:38.062368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.563 [2024-12-11 10:06:38.062402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.563 qpair failed and we were unable to recover it. 
00:28:28.563 [2024-12-11 10:06:38.062689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.563 [2024-12-11 10:06:38.062722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.563 qpair failed and we were unable to recover it. 00:28:28.563 [2024-12-11 10:06:38.063012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.563 [2024-12-11 10:06:38.063045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.563 qpair failed and we were unable to recover it. 00:28:28.563 [2024-12-11 10:06:38.063250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.563 [2024-12-11 10:06:38.063283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.563 qpair failed and we were unable to recover it. 00:28:28.563 [2024-12-11 10:06:38.063418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.563 [2024-12-11 10:06:38.063450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.563 qpair failed and we were unable to recover it. 00:28:28.563 [2024-12-11 10:06:38.063645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.563 [2024-12-11 10:06:38.063677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.563 qpair failed and we were unable to recover it. 00:28:28.563 [2024-12-11 10:06:38.063896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.563 [2024-12-11 10:06:38.063927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.563 qpair failed and we were unable to recover it. 00:28:28.563 [2024-12-11 10:06:38.064151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.563 [2024-12-11 10:06:38.064184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.563 qpair failed and we were unable to recover it. 00:28:28.563 [2024-12-11 10:06:38.064422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.563 [2024-12-11 10:06:38.064454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.563 qpair failed and we were unable to recover it. 00:28:28.563 [2024-12-11 10:06:38.064709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.563 [2024-12-11 10:06:38.064741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.563 qpair failed and we were unable to recover it. 00:28:28.563 [2024-12-11 10:06:38.064999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.563 [2024-12-11 10:06:38.065031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.563 qpair failed and we were unable to recover it. 
00:28:28.563 [2024-12-11 10:06:38.065243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.563 [2024-12-11 10:06:38.065277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.563 qpair failed and we were unable to recover it. 00:28:28.563 [2024-12-11 10:06:38.065481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.563 [2024-12-11 10:06:38.065514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.563 qpair failed and we were unable to recover it. 00:28:28.563 [2024-12-11 10:06:38.065735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.563 [2024-12-11 10:06:38.065767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.563 qpair failed and we were unable to recover it. 00:28:28.563 [2024-12-11 10:06:38.065915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.563 [2024-12-11 10:06:38.065953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.563 qpair failed and we were unable to recover it. 00:28:28.563 [2024-12-11 10:06:38.066237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.563 [2024-12-11 10:06:38.066271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.563 qpair failed and we were unable to recover it. 00:28:28.563 [2024-12-11 10:06:38.066464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.563 [2024-12-11 10:06:38.066496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.563 qpair failed and we were unable to recover it. 00:28:28.563 [2024-12-11 10:06:38.066800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.563 [2024-12-11 10:06:38.066833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.563 qpair failed and we were unable to recover it. 00:28:28.563 [2024-12-11 10:06:38.066969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.563 [2024-12-11 10:06:38.067001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.563 qpair failed and we were unable to recover it. 00:28:28.563 [2024-12-11 10:06:38.067283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.563 [2024-12-11 10:06:38.067316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.563 qpair failed and we were unable to recover it. 00:28:28.563 [2024-12-11 10:06:38.067441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.563 [2024-12-11 10:06:38.067472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.563 qpair failed and we were unable to recover it. 
00:28:28.563 [2024-12-11 10:06:38.067676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.563 [2024-12-11 10:06:38.067707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.563 qpair failed and we were unable to recover it. 00:28:28.563 [2024-12-11 10:06:38.067981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.563 [2024-12-11 10:06:38.068012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.563 qpair failed and we were unable to recover it. 00:28:28.563 [2024-12-11 10:06:38.068230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.563 [2024-12-11 10:06:38.068263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.563 qpair failed and we were unable to recover it. 00:28:28.563 [2024-12-11 10:06:38.068544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.563 [2024-12-11 10:06:38.068576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.563 qpair failed and we were unable to recover it. 00:28:28.563 [2024-12-11 10:06:38.068713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.563 [2024-12-11 10:06:38.068744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.563 qpair failed and we were unable to recover it. 00:28:28.563 [2024-12-11 10:06:38.068962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.563 [2024-12-11 10:06:38.068993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.563 qpair failed and we were unable to recover it. 00:28:28.563 [2024-12-11 10:06:38.069175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.563 [2024-12-11 10:06:38.069206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.563 qpair failed and we were unable to recover it. 00:28:28.563 [2024-12-11 10:06:38.069433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.563 [2024-12-11 10:06:38.069467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.563 qpair failed and we were unable to recover it. 00:28:28.563 [2024-12-11 10:06:38.069656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.563 [2024-12-11 10:06:38.069688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.563 qpair failed and we were unable to recover it. 00:28:28.563 [2024-12-11 10:06:38.069898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.563 [2024-12-11 10:06:38.069930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.564 qpair failed and we were unable to recover it. 
00:28:28.564 [2024-12-11 10:06:38.070191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.564 [2024-12-11 10:06:38.070234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.564 qpair failed and we were unable to recover it. 00:28:28.564 [2024-12-11 10:06:38.070529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.564 [2024-12-11 10:06:38.070562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.564 qpair failed and we were unable to recover it. 00:28:28.564 [2024-12-11 10:06:38.070824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.564 [2024-12-11 10:06:38.070856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.564 qpair failed and we were unable to recover it. 00:28:28.564 [2024-12-11 10:06:38.070989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.564 [2024-12-11 10:06:38.071021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.564 qpair failed and we were unable to recover it. 00:28:28.564 [2024-12-11 10:06:38.071153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.564 [2024-12-11 10:06:38.071184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.564 qpair failed and we were unable to recover it. 00:28:28.564 [2024-12-11 10:06:38.071395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.564 [2024-12-11 10:06:38.071428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.564 qpair failed and we were unable to recover it. 00:28:28.564 [2024-12-11 10:06:38.071612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.564 [2024-12-11 10:06:38.071644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.564 qpair failed and we were unable to recover it. 00:28:28.564 [2024-12-11 10:06:38.071922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.564 [2024-12-11 10:06:38.071954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.564 qpair failed and we were unable to recover it. 00:28:28.564 [2024-12-11 10:06:38.072206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.564 [2024-12-11 10:06:38.072252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.564 qpair failed and we were unable to recover it. 00:28:28.564 [2024-12-11 10:06:38.072560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.564 [2024-12-11 10:06:38.072591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.564 qpair failed and we were unable to recover it. 
00:28:28.564 [2024-12-11 10:06:38.072901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.564 [2024-12-11 10:06:38.072932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.564 qpair failed and we were unable to recover it. 00:28:28.564 [2024-12-11 10:06:38.073240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.564 [2024-12-11 10:06:38.073274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.564 qpair failed and we were unable to recover it. 00:28:28.564 [2024-12-11 10:06:38.073412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.564 [2024-12-11 10:06:38.073445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.564 qpair failed and we were unable to recover it. 00:28:28.564 [2024-12-11 10:06:38.073671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.564 [2024-12-11 10:06:38.073704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.564 qpair failed and we were unable to recover it. 00:28:28.564 [2024-12-11 10:06:38.074006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.564 [2024-12-11 10:06:38.074037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.564 qpair failed and we were unable to recover it. 00:28:28.564 [2024-12-11 10:06:38.074300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.564 [2024-12-11 10:06:38.074333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.564 qpair failed and we were unable to recover it. 00:28:28.564 [2024-12-11 10:06:38.074539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.564 [2024-12-11 10:06:38.074572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.564 qpair failed and we were unable to recover it. 00:28:28.564 [2024-12-11 10:06:38.074705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.564 [2024-12-11 10:06:38.074736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.564 qpair failed and we were unable to recover it. 00:28:28.564 [2024-12-11 10:06:38.075010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.564 [2024-12-11 10:06:38.075042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.564 qpair failed and we were unable to recover it. 00:28:28.564 [2024-12-11 10:06:38.075254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.564 [2024-12-11 10:06:38.075288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.564 qpair failed and we were unable to recover it. 
00:28:28.564 [2024-12-11 10:06:38.075441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.564 [2024-12-11 10:06:38.075473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.564 qpair failed and we were unable to recover it. 00:28:28.564 [2024-12-11 10:06:38.075658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.564 [2024-12-11 10:06:38.075689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.564 qpair failed and we were unable to recover it. 00:28:28.564 [2024-12-11 10:06:38.075971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.564 [2024-12-11 10:06:38.076003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.564 qpair failed and we were unable to recover it. 00:28:28.564 [2024-12-11 10:06:38.076198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.564 [2024-12-11 10:06:38.076240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.564 qpair failed and we were unable to recover it. 00:28:28.564 [2024-12-11 10:06:38.076424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.564 [2024-12-11 10:06:38.076462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.564 qpair failed and we were unable to recover it. 00:28:28.564 [2024-12-11 10:06:38.076746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.564 [2024-12-11 10:06:38.076778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.564 qpair failed and we were unable to recover it. 00:28:28.564 [2024-12-11 10:06:38.077062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.564 [2024-12-11 10:06:38.077093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.564 qpair failed and we were unable to recover it. 00:28:28.564 [2024-12-11 10:06:38.077373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.564 [2024-12-11 10:06:38.077406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.564 qpair failed and we were unable to recover it. 00:28:28.564 [2024-12-11 10:06:38.077626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.564 [2024-12-11 10:06:38.077658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.564 qpair failed and we were unable to recover it. 00:28:28.564 [2024-12-11 10:06:38.077840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.564 [2024-12-11 10:06:38.077872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.564 qpair failed and we were unable to recover it. 
00:28:28.564 [2024-12-11 10:06:38.078129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.564 [2024-12-11 10:06:38.078161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.564 qpair failed and we were unable to recover it. 00:28:28.564 [2024-12-11 10:06:38.078457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.564 [2024-12-11 10:06:38.078491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.564 qpair failed and we were unable to recover it. 00:28:28.564 [2024-12-11 10:06:38.078696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.564 [2024-12-11 10:06:38.078728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.564 qpair failed and we were unable to recover it. 00:28:28.564 [2024-12-11 10:06:38.079010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.564 [2024-12-11 10:06:38.079042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.564 qpair failed and we were unable to recover it. 00:28:28.564 [2024-12-11 10:06:38.079325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.564 [2024-12-11 10:06:38.079359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.565 qpair failed and we were unable to recover it. 00:28:28.565 [2024-12-11 10:06:38.079556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.565 [2024-12-11 10:06:38.079588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.565 qpair failed and we were unable to recover it. 00:28:28.565 [2024-12-11 10:06:38.079768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.565 [2024-12-11 10:06:38.079799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.565 qpair failed and we were unable to recover it. 00:28:28.565 [2024-12-11 10:06:38.080078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.565 [2024-12-11 10:06:38.080111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.565 qpair failed and we were unable to recover it. 00:28:28.565 [2024-12-11 10:06:38.080336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.565 [2024-12-11 10:06:38.080371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.565 qpair failed and we were unable to recover it. 00:28:28.565 [2024-12-11 10:06:38.080652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.565 [2024-12-11 10:06:38.080684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.565 qpair failed and we were unable to recover it. 
00:28:28.565 [2024-12-11 10:06:38.080967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.565 [2024-12-11 10:06:38.080998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.565 qpair failed and we were unable to recover it. 00:28:28.565 [2024-12-11 10:06:38.081281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.565 [2024-12-11 10:06:38.081315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.565 qpair failed and we were unable to recover it. 00:28:28.565 [2024-12-11 10:06:38.081515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.565 [2024-12-11 10:06:38.081546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.565 qpair failed and we were unable to recover it. 00:28:28.565 [2024-12-11 10:06:38.081852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.565 [2024-12-11 10:06:38.081885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.565 qpair failed and we were unable to recover it. 00:28:28.565 [2024-12-11 10:06:38.082170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.565 [2024-12-11 10:06:38.082202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.565 qpair failed and we were unable to recover it. 00:28:28.565 [2024-12-11 10:06:38.082485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.565 [2024-12-11 10:06:38.082517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.565 qpair failed and we were unable to recover it. 00:28:28.565 [2024-12-11 10:06:38.082801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.565 [2024-12-11 10:06:38.082833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.565 qpair failed and we were unable to recover it. 00:28:28.565 [2024-12-11 10:06:38.083116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.565 [2024-12-11 10:06:38.083148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.565 qpair failed and we were unable to recover it. 00:28:28.565 [2024-12-11 10:06:38.083427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.565 [2024-12-11 10:06:38.083460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.565 qpair failed and we were unable to recover it. 00:28:28.565 [2024-12-11 10:06:38.083754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.565 [2024-12-11 10:06:38.083806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.565 qpair failed and we were unable to recover it. 
00:28:28.565 [2024-12-11 10:06:38.084010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.565 [2024-12-11 10:06:38.084042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.565 qpair failed and we were unable to recover it. 00:28:28.565 [2024-12-11 10:06:38.084268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.565 [2024-12-11 10:06:38.084310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.565 qpair failed and we were unable to recover it. 00:28:28.565 [2024-12-11 10:06:38.084600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.565 [2024-12-11 10:06:38.084639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.565 qpair failed and we were unable to recover it. 00:28:28.565 [2024-12-11 10:06:38.084901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.565 [2024-12-11 10:06:38.084933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.565 qpair failed and we were unable to recover it. 00:28:28.565 [2024-12-11 10:06:38.085215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.565 [2024-12-11 10:06:38.085259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.565 qpair failed and we were unable to recover it. 00:28:28.565 [2024-12-11 10:06:38.085534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.565 [2024-12-11 10:06:38.085566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.565 qpair failed and we were unable to recover it. 00:28:28.565 [2024-12-11 10:06:38.085851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.565 [2024-12-11 10:06:38.085897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.565 qpair failed and we were unable to recover it. 00:28:28.565 [2024-12-11 10:06:38.086102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.565 [2024-12-11 10:06:38.086139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.565 qpair failed and we were unable to recover it. 00:28:28.565 [2024-12-11 10:06:38.086427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.565 [2024-12-11 10:06:38.086461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.565 qpair failed and we were unable to recover it. 00:28:28.565 [2024-12-11 10:06:38.086735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.565 [2024-12-11 10:06:38.086777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.565 qpair failed and we were unable to recover it. 
00:28:28.565 [2024-12-11 10:06:38.087059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.565 [2024-12-11 10:06:38.087090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.565 qpair failed and we were unable to recover it. 00:28:28.565 [2024-12-11 10:06:38.087364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.565 [2024-12-11 10:06:38.087398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.565 qpair failed and we were unable to recover it. 00:28:28.565 [2024-12-11 10:06:38.087595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.565 [2024-12-11 10:06:38.087628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.565 qpair failed and we were unable to recover it. 00:28:28.565 [2024-12-11 10:06:38.087834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.565 [2024-12-11 10:06:38.087866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.565 qpair failed and we were unable to recover it. 00:28:28.565 [2024-12-11 10:06:38.088129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.565 [2024-12-11 10:06:38.088176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.565 qpair failed and we were unable to recover it. 00:28:28.565 [2024-12-11 10:06:38.088500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.565 [2024-12-11 10:06:38.088537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.565 qpair failed and we were unable to recover it. 00:28:28.565 [2024-12-11 10:06:38.088751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.845 [2024-12-11 10:06:38.088783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.845 qpair failed and we were unable to recover it. 00:28:28.845 [2024-12-11 10:06:38.089011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.845 [2024-12-11 10:06:38.089044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.845 qpair failed and we were unable to recover it. 00:28:28.845 [2024-12-11 10:06:38.089345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.845 [2024-12-11 10:06:38.089377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.845 qpair failed and we were unable to recover it. 00:28:28.845 [2024-12-11 10:06:38.089514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.845 [2024-12-11 10:06:38.089546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.845 qpair failed and we were unable to recover it. 
00:28:28.845 [2024-12-11 10:06:38.089751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.845 [2024-12-11 10:06:38.089798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.845 qpair failed and we were unable to recover it. 00:28:28.845 [2024-12-11 10:06:38.090029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.845 [2024-12-11 10:06:38.090072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.845 qpair failed and we were unable to recover it. 00:28:28.845 [2024-12-11 10:06:38.090246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.845 [2024-12-11 10:06:38.090294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.845 qpair failed and we were unable to recover it. 00:28:28.845 [2024-12-11 10:06:38.090534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.845 [2024-12-11 10:06:38.090582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.845 qpair failed and we were unable to recover it. 00:28:28.845 [2024-12-11 10:06:38.090891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.845 [2024-12-11 10:06:38.090939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.845 qpair failed and we were unable to recover it. 00:28:28.845 [2024-12-11 10:06:38.091253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.845 [2024-12-11 10:06:38.091304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.845 qpair failed and we were unable to recover it. 00:28:28.845 [2024-12-11 10:06:38.091634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.845 [2024-12-11 10:06:38.091683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.845 qpair failed and we were unable to recover it. 00:28:28.845 [2024-12-11 10:06:38.091992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.845 [2024-12-11 10:06:38.092041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.845 qpair failed and we were unable to recover it. 00:28:28.845 [2024-12-11 10:06:38.092346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.845 [2024-12-11 10:06:38.092396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.845 qpair failed and we were unable to recover it. 00:28:28.845 [2024-12-11 10:06:38.092732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.845 [2024-12-11 10:06:38.092782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.845 qpair failed and we were unable to recover it. 
00:28:28.845 [2024-12-11 10:06:38.093027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.845 [2024-12-11 10:06:38.093068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.845 qpair failed and we were unable to recover it. 00:28:28.845 [2024-12-11 10:06:38.093349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.845 [2024-12-11 10:06:38.093391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.845 qpair failed and we were unable to recover it. 00:28:28.845 [2024-12-11 10:06:38.093603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.845 [2024-12-11 10:06:38.093636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.845 qpair failed and we were unable to recover it. 00:28:28.845 [2024-12-11 10:06:38.093898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.845 [2024-12-11 10:06:38.093929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.845 qpair failed and we were unable to recover it. 00:28:28.845 [2024-12-11 10:06:38.094182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.845 [2024-12-11 10:06:38.094214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.845 qpair failed and we were unable to recover it. 00:28:28.845 [2024-12-11 10:06:38.094535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.845 [2024-12-11 10:06:38.094569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.846 qpair failed and we were unable to recover it. 00:28:28.846 [2024-12-11 10:06:38.094770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.846 [2024-12-11 10:06:38.094803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.846 qpair failed and we were unable to recover it. 00:28:28.846 [2024-12-11 10:06:38.094987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.846 [2024-12-11 10:06:38.095020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.846 qpair failed and we were unable to recover it. 00:28:28.846 [2024-12-11 10:06:38.095239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.846 [2024-12-11 10:06:38.095274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.846 qpair failed and we were unable to recover it. 00:28:28.846 [2024-12-11 10:06:38.095552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.846 [2024-12-11 10:06:38.095584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.846 qpair failed and we were unable to recover it. 
00:28:28.846 [2024-12-11 10:06:38.095735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.846 [2024-12-11 10:06:38.095768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.846 qpair failed and we were unable to recover it. 00:28:28.846 [2024-12-11 10:06:38.096026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.846 [2024-12-11 10:06:38.096059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.846 qpair failed and we were unable to recover it. 00:28:28.846 [2024-12-11 10:06:38.096265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.846 [2024-12-11 10:06:38.096312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.846 qpair failed and we were unable to recover it. 00:28:28.846 [2024-12-11 10:06:38.096500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.846 [2024-12-11 10:06:38.096533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.846 qpair failed and we were unable to recover it. 00:28:28.846 [2024-12-11 10:06:38.096794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.846 [2024-12-11 10:06:38.096827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.846 qpair failed and we were unable to recover it. 00:28:28.846 [2024-12-11 10:06:38.097128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.846 [2024-12-11 10:06:38.097160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.846 qpair failed and we were unable to recover it. 00:28:28.846 [2024-12-11 10:06:38.097430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.846 [2024-12-11 10:06:38.097464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.846 qpair failed and we were unable to recover it. 00:28:28.846 [2024-12-11 10:06:38.097654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.846 [2024-12-11 10:06:38.097687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.846 qpair failed and we were unable to recover it. 00:28:28.846 [2024-12-11 10:06:38.097945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.846 [2024-12-11 10:06:38.097977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.846 qpair failed and we were unable to recover it. 00:28:28.846 [2024-12-11 10:06:38.098124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.846 [2024-12-11 10:06:38.098156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.846 qpair failed and we were unable to recover it. 
00:28:28.846 [2024-12-11 10:06:38.098477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.846 [2024-12-11 10:06:38.098511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.846 qpair failed and we were unable to recover it. 00:28:28.846 [2024-12-11 10:06:38.098718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.846 [2024-12-11 10:06:38.098749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.846 qpair failed and we were unable to recover it. 00:28:28.846 [2024-12-11 10:06:38.098931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.846 [2024-12-11 10:06:38.098962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.846 qpair failed and we were unable to recover it. 00:28:28.846 [2024-12-11 10:06:38.099250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.846 [2024-12-11 10:06:38.099282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.846 qpair failed and we were unable to recover it. 00:28:28.846 [2024-12-11 10:06:38.099548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.846 [2024-12-11 10:06:38.099580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.846 qpair failed and we were unable to recover it. 00:28:28.846 [2024-12-11 10:06:38.099787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.846 [2024-12-11 10:06:38.099826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.846 qpair failed and we were unable to recover it. 00:28:28.846 [2024-12-11 10:06:38.100026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.846 [2024-12-11 10:06:38.100057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.846 qpair failed and we were unable to recover it. 00:28:28.846 [2024-12-11 10:06:38.100267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.846 [2024-12-11 10:06:38.100300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.846 qpair failed and we were unable to recover it. 00:28:28.846 [2024-12-11 10:06:38.100434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.846 [2024-12-11 10:06:38.100466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.846 qpair failed and we were unable to recover it. 00:28:28.846 [2024-12-11 10:06:38.100700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.846 [2024-12-11 10:06:38.100732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.846 qpair failed and we were unable to recover it. 
00:28:28.846 [2024-12-11 10:06:38.101006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.846 [2024-12-11 10:06:38.101038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.846 qpair failed and we were unable to recover it. 00:28:28.846 [2024-12-11 10:06:38.101236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.846 [2024-12-11 10:06:38.101269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.846 qpair failed and we were unable to recover it. 00:28:28.846 [2024-12-11 10:06:38.101548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.846 [2024-12-11 10:06:38.101579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.846 qpair failed and we were unable to recover it. 00:28:28.846 [2024-12-11 10:06:38.101765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.846 [2024-12-11 10:06:38.101796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.846 qpair failed and we were unable to recover it. 00:28:28.846 [2024-12-11 10:06:38.102012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.846 [2024-12-11 10:06:38.102043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.846 qpair failed and we were unable to recover it. 00:28:28.846 [2024-12-11 10:06:38.102328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.846 [2024-12-11 10:06:38.102362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.846 qpair failed and we were unable to recover it. 00:28:28.846 [2024-12-11 10:06:38.102564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.846 [2024-12-11 10:06:38.102596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.846 qpair failed and we were unable to recover it. 00:28:28.846 [2024-12-11 10:06:38.102778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.846 [2024-12-11 10:06:38.102810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.846 qpair failed and we were unable to recover it. 00:28:28.846 [2024-12-11 10:06:38.102995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.846 [2024-12-11 10:06:38.103025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.846 qpair failed and we were unable to recover it. 00:28:28.846 [2024-12-11 10:06:38.103240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.846 [2024-12-11 10:06:38.103278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.846 qpair failed and we were unable to recover it. 
00:28:28.846 [2024-12-11 10:06:38.103559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.846 [2024-12-11 10:06:38.103591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.847 qpair failed and we were unable to recover it. 00:28:28.847 [2024-12-11 10:06:38.103797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.847 [2024-12-11 10:06:38.103829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.847 qpair failed and we were unable to recover it. 00:28:28.847 [2024-12-11 10:06:38.104083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.847 [2024-12-11 10:06:38.104114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.847 qpair failed and we were unable to recover it. 00:28:28.847 [2024-12-11 10:06:38.104390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.847 [2024-12-11 10:06:38.104426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.847 qpair failed and we were unable to recover it. 00:28:28.847 [2024-12-11 10:06:38.104718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.847 [2024-12-11 10:06:38.104750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.847 qpair failed and we were unable to recover it. 00:28:28.847 [2024-12-11 10:06:38.104958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.847 [2024-12-11 10:06:38.104990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.847 qpair failed and we were unable to recover it. 00:28:28.847 [2024-12-11 10:06:38.105293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.847 [2024-12-11 10:06:38.105326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.847 qpair failed and we were unable to recover it. 00:28:28.847 [2024-12-11 10:06:38.105550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.847 [2024-12-11 10:06:38.105582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.847 qpair failed and we were unable to recover it. 00:28:28.847 [2024-12-11 10:06:38.105785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.847 [2024-12-11 10:06:38.105816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.847 qpair failed and we were unable to recover it. 00:28:28.847 [2024-12-11 10:06:38.105999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.847 [2024-12-11 10:06:38.106031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.847 qpair failed and we were unable to recover it. 
00:28:28.847 [2024-12-11 10:06:38.106309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.847 [2024-12-11 10:06:38.106342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.847 qpair failed and we were unable to recover it. 00:28:28.847 [2024-12-11 10:06:38.106526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.847 [2024-12-11 10:06:38.106557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.847 qpair failed and we were unable to recover it. 00:28:28.847 [2024-12-11 10:06:38.106833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.847 [2024-12-11 10:06:38.106865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.847 qpair failed and we were unable to recover it. 00:28:28.847 [2024-12-11 10:06:38.107156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.847 [2024-12-11 10:06:38.107189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.847 qpair failed and we were unable to recover it. 00:28:28.847 [2024-12-11 10:06:38.107396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.847 [2024-12-11 10:06:38.107430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.847 qpair failed and we were unable to recover it. 00:28:28.847 [2024-12-11 10:06:38.107713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.847 [2024-12-11 10:06:38.107745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.847 qpair failed and we were unable to recover it. 00:28:28.847 [2024-12-11 10:06:38.108029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.847 [2024-12-11 10:06:38.108062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.847 qpair failed and we were unable to recover it. 00:28:28.847 [2024-12-11 10:06:38.108344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.847 [2024-12-11 10:06:38.108377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.847 qpair failed and we were unable to recover it. 00:28:28.847 [2024-12-11 10:06:38.108655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.847 [2024-12-11 10:06:38.108687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.847 qpair failed and we were unable to recover it. 00:28:28.847 [2024-12-11 10:06:38.108898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.847 [2024-12-11 10:06:38.108931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.847 qpair failed and we were unable to recover it. 
00:28:28.849 [2024-12-11 10:06:38.124536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.849 [2024-12-11 10:06:38.124614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:28.849 qpair failed and we were unable to recover it.
00:28:28.849 [2024-12-11 10:06:38.124967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.849 [2024-12-11 10:06:38.125039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:28.849 qpair failed and we were unable to recover it.
00:28:28.849 (the same connect() failed / sock connection error / qpair failed sequence repeats for tqpair=0x7fb638000b90, timestamps 10:06:38.125345 through 10:06:38.159824)
00:28:28.852 [2024-12-11 10:06:38.160022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.852 [2024-12-11 10:06:38.160054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.852 qpair failed and we were unable to recover it. 00:28:28.852 [2024-12-11 10:06:38.160250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.852 [2024-12-11 10:06:38.160283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.852 qpair failed and we were unable to recover it. 00:28:28.852 [2024-12-11 10:06:38.160399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.852 [2024-12-11 10:06:38.160430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.852 qpair failed and we were unable to recover it. 00:28:28.852 [2024-12-11 10:06:38.160616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.852 [2024-12-11 10:06:38.160648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.852 qpair failed and we were unable to recover it. 00:28:28.852 [2024-12-11 10:06:38.160853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.853 [2024-12-11 10:06:38.160885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.853 qpair failed and we were unable to recover it. 00:28:28.853 [2024-12-11 10:06:38.161027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.853 [2024-12-11 10:06:38.161059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.853 qpair failed and we were unable to recover it. 00:28:28.853 [2024-12-11 10:06:38.161266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.853 [2024-12-11 10:06:38.161299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.853 qpair failed and we were unable to recover it. 00:28:28.853 [2024-12-11 10:06:38.161514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.853 [2024-12-11 10:06:38.161546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.853 qpair failed and we were unable to recover it. 00:28:28.853 [2024-12-11 10:06:38.161855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.853 [2024-12-11 10:06:38.161887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.853 qpair failed and we were unable to recover it. 00:28:28.853 [2024-12-11 10:06:38.162108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.853 [2024-12-11 10:06:38.162140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.853 qpair failed and we were unable to recover it. 
00:28:28.853 [2024-12-11 10:06:38.162401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.853 [2024-12-11 10:06:38.162434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.853 qpair failed and we were unable to recover it. 00:28:28.853 [2024-12-11 10:06:38.162620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.853 [2024-12-11 10:06:38.162651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.853 qpair failed and we were unable to recover it. 00:28:28.853 [2024-12-11 10:06:38.162832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.853 [2024-12-11 10:06:38.162864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.853 qpair failed and we were unable to recover it. 00:28:28.853 [2024-12-11 10:06:38.163066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.853 [2024-12-11 10:06:38.163098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.853 qpair failed and we were unable to recover it. 00:28:28.853 [2024-12-11 10:06:38.163374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.853 [2024-12-11 10:06:38.163407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.853 qpair failed and we were unable to recover it. 00:28:28.853 [2024-12-11 10:06:38.163544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.853 [2024-12-11 10:06:38.163575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.853 qpair failed and we were unable to recover it. 00:28:28.853 [2024-12-11 10:06:38.163722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.853 [2024-12-11 10:06:38.163752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.853 qpair failed and we were unable to recover it. 00:28:28.853 [2024-12-11 10:06:38.164006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.853 [2024-12-11 10:06:38.164038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.853 qpair failed and we were unable to recover it. 00:28:28.853 [2024-12-11 10:06:38.164341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.853 [2024-12-11 10:06:38.164373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.853 qpair failed and we were unable to recover it. 00:28:28.853 [2024-12-11 10:06:38.164659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.853 [2024-12-11 10:06:38.164691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.853 qpair failed and we were unable to recover it. 
00:28:28.853 [2024-12-11 10:06:38.164965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.853 [2024-12-11 10:06:38.164996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.853 qpair failed and we were unable to recover it. 00:28:28.853 [2024-12-11 10:06:38.165287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.853 [2024-12-11 10:06:38.165332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.853 qpair failed and we were unable to recover it. 00:28:28.853 [2024-12-11 10:06:38.165484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.853 [2024-12-11 10:06:38.165516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.853 qpair failed and we were unable to recover it. 00:28:28.853 [2024-12-11 10:06:38.165704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.853 [2024-12-11 10:06:38.165735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.853 qpair failed and we were unable to recover it. 00:28:28.853 [2024-12-11 10:06:38.165918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.853 [2024-12-11 10:06:38.165950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.853 qpair failed and we were unable to recover it. 00:28:28.853 [2024-12-11 10:06:38.166130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.853 [2024-12-11 10:06:38.166162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.853 qpair failed and we were unable to recover it. 00:28:28.853 [2024-12-11 10:06:38.166425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.853 [2024-12-11 10:06:38.166457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.853 qpair failed and we were unable to recover it. 00:28:28.853 [2024-12-11 10:06:38.166759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.853 [2024-12-11 10:06:38.166791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.853 qpair failed and we were unable to recover it. 00:28:28.853 [2024-12-11 10:06:38.167059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.853 [2024-12-11 10:06:38.167091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.853 qpair failed and we were unable to recover it. 00:28:28.853 [2024-12-11 10:06:38.167288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.853 [2024-12-11 10:06:38.167320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.853 qpair failed and we were unable to recover it. 
00:28:28.853 [2024-12-11 10:06:38.167598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.853 [2024-12-11 10:06:38.167629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.853 qpair failed and we were unable to recover it. 00:28:28.853 [2024-12-11 10:06:38.167832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.853 [2024-12-11 10:06:38.167863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.853 qpair failed and we were unable to recover it. 00:28:28.853 [2024-12-11 10:06:38.168059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.853 [2024-12-11 10:06:38.168090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.853 qpair failed and we were unable to recover it. 00:28:28.853 [2024-12-11 10:06:38.168289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.853 [2024-12-11 10:06:38.168322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.853 qpair failed and we were unable to recover it. 00:28:28.853 [2024-12-11 10:06:38.168558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.853 [2024-12-11 10:06:38.168590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.853 qpair failed and we were unable to recover it. 00:28:28.853 [2024-12-11 10:06:38.168873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.853 [2024-12-11 10:06:38.168905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.853 qpair failed and we were unable to recover it. 00:28:28.853 [2024-12-11 10:06:38.169086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.853 [2024-12-11 10:06:38.169118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.853 qpair failed and we were unable to recover it. 00:28:28.853 [2024-12-11 10:06:38.169386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.853 [2024-12-11 10:06:38.169420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.853 qpair failed and we were unable to recover it. 00:28:28.853 [2024-12-11 10:06:38.169697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.853 [2024-12-11 10:06:38.169729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.853 qpair failed and we were unable to recover it. 00:28:28.853 [2024-12-11 10:06:38.170009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.853 [2024-12-11 10:06:38.170041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.853 qpair failed and we were unable to recover it. 
00:28:28.853 [2024-12-11 10:06:38.170309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.853 [2024-12-11 10:06:38.170341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.853 qpair failed and we were unable to recover it. 00:28:28.853 [2024-12-11 10:06:38.170633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.853 [2024-12-11 10:06:38.170665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.853 qpair failed and we were unable to recover it. 00:28:28.853 [2024-12-11 10:06:38.170847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.853 [2024-12-11 10:06:38.170878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.853 qpair failed and we were unable to recover it. 00:28:28.853 [2024-12-11 10:06:38.171139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.853 [2024-12-11 10:06:38.171170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.854 qpair failed and we were unable to recover it. 00:28:28.854 [2024-12-11 10:06:38.171433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.854 [2024-12-11 10:06:38.171465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.854 qpair failed and we were unable to recover it. 00:28:28.854 [2024-12-11 10:06:38.171656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.854 [2024-12-11 10:06:38.171688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.854 qpair failed and we were unable to recover it. 00:28:28.854 [2024-12-11 10:06:38.171900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.854 [2024-12-11 10:06:38.171933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.854 qpair failed and we were unable to recover it. 00:28:28.854 [2024-12-11 10:06:38.172113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.854 [2024-12-11 10:06:38.172144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.854 qpair failed and we were unable to recover it. 00:28:28.854 [2024-12-11 10:06:38.172419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.854 [2024-12-11 10:06:38.172452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.854 qpair failed and we were unable to recover it. 00:28:28.854 [2024-12-11 10:06:38.172731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.854 [2024-12-11 10:06:38.172764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.854 qpair failed and we were unable to recover it. 
00:28:28.854 [2024-12-11 10:06:38.172990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.854 [2024-12-11 10:06:38.173021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.854 qpair failed and we were unable to recover it. 00:28:28.854 [2024-12-11 10:06:38.173245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.854 [2024-12-11 10:06:38.173278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.854 qpair failed and we were unable to recover it. 00:28:28.854 [2024-12-11 10:06:38.173537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.854 [2024-12-11 10:06:38.173569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.854 qpair failed and we were unable to recover it. 00:28:28.854 [2024-12-11 10:06:38.173817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.854 [2024-12-11 10:06:38.173849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.854 qpair failed and we were unable to recover it. 00:28:28.854 [2024-12-11 10:06:38.173984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.854 [2024-12-11 10:06:38.174015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.854 qpair failed and we were unable to recover it. 00:28:28.854 [2024-12-11 10:06:38.174201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.854 [2024-12-11 10:06:38.174242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.854 qpair failed and we were unable to recover it. 00:28:28.854 [2024-12-11 10:06:38.174521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.854 [2024-12-11 10:06:38.174552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.854 qpair failed and we were unable to recover it. 00:28:28.854 [2024-12-11 10:06:38.174777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.854 [2024-12-11 10:06:38.174808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.854 qpair failed and we were unable to recover it. 00:28:28.854 [2024-12-11 10:06:38.175094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.854 [2024-12-11 10:06:38.175127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.854 qpair failed and we were unable to recover it. 00:28:28.854 [2024-12-11 10:06:38.175403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.854 [2024-12-11 10:06:38.175436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.854 qpair failed and we were unable to recover it. 
00:28:28.854 [2024-12-11 10:06:38.175636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.854 [2024-12-11 10:06:38.175667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.854 qpair failed and we were unable to recover it. 00:28:28.854 [2024-12-11 10:06:38.175944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.854 [2024-12-11 10:06:38.175982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.854 qpair failed and we were unable to recover it. 00:28:28.854 [2024-12-11 10:06:38.176261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.854 [2024-12-11 10:06:38.176294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.854 qpair failed and we were unable to recover it. 00:28:28.854 [2024-12-11 10:06:38.176577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.854 [2024-12-11 10:06:38.176608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.854 qpair failed and we were unable to recover it. 00:28:28.854 [2024-12-11 10:06:38.176921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.854 [2024-12-11 10:06:38.176952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.854 qpair failed and we were unable to recover it. 00:28:28.854 [2024-12-11 10:06:38.177167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.854 [2024-12-11 10:06:38.177198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.854 qpair failed and we were unable to recover it. 00:28:28.854 [2024-12-11 10:06:38.177409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.854 [2024-12-11 10:06:38.177442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.854 qpair failed and we were unable to recover it. 00:28:28.854 [2024-12-11 10:06:38.177719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.854 [2024-12-11 10:06:38.177749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.854 qpair failed and we were unable to recover it. 00:28:28.854 [2024-12-11 10:06:38.178037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.854 [2024-12-11 10:06:38.178069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.854 qpair failed and we were unable to recover it. 00:28:28.854 [2024-12-11 10:06:38.178270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.854 [2024-12-11 10:06:38.178303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.854 qpair failed and we were unable to recover it. 
00:28:28.854 [2024-12-11 10:06:38.178570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.854 [2024-12-11 10:06:38.178602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.854 qpair failed and we were unable to recover it. 00:28:28.854 [2024-12-11 10:06:38.178905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.854 [2024-12-11 10:06:38.178937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.854 qpair failed and we were unable to recover it. 00:28:28.854 [2024-12-11 10:06:38.179190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.854 [2024-12-11 10:06:38.179230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.854 qpair failed and we were unable to recover it. 00:28:28.854 [2024-12-11 10:06:38.179435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.854 [2024-12-11 10:06:38.179467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.854 qpair failed and we were unable to recover it. 00:28:28.854 [2024-12-11 10:06:38.179729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.854 [2024-12-11 10:06:38.179761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.854 qpair failed and we were unable to recover it. 00:28:28.854 [2024-12-11 10:06:38.179968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.854 [2024-12-11 10:06:38.180000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.854 qpair failed and we were unable to recover it. 00:28:28.854 [2024-12-11 10:06:38.180269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.854 [2024-12-11 10:06:38.180301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.854 qpair failed and we were unable to recover it. 00:28:28.854 [2024-12-11 10:06:38.180500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.854 [2024-12-11 10:06:38.180531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.854 qpair failed and we were unable to recover it. 00:28:28.854 [2024-12-11 10:06:38.180803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.855 [2024-12-11 10:06:38.180835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.855 qpair failed and we were unable to recover it. 00:28:28.855 [2024-12-11 10:06:38.181124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.855 [2024-12-11 10:06:38.181156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.855 qpair failed and we were unable to recover it. 
00:28:28.855 [2024-12-11 10:06:38.181446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.855 [2024-12-11 10:06:38.181479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.855 qpair failed and we were unable to recover it. 00:28:28.855 [2024-12-11 10:06:38.181683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.855 [2024-12-11 10:06:38.181714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.855 qpair failed and we were unable to recover it. 00:28:28.855 [2024-12-11 10:06:38.181855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.855 [2024-12-11 10:06:38.181886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.855 qpair failed and we were unable to recover it. 00:28:28.855 [2024-12-11 10:06:38.182143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.855 [2024-12-11 10:06:38.182174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.855 qpair failed and we were unable to recover it. 00:28:28.855 [2024-12-11 10:06:38.182461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.855 [2024-12-11 10:06:38.182493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.855 qpair failed and we were unable to recover it. 00:28:28.855 [2024-12-11 10:06:38.182701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.855 [2024-12-11 10:06:38.182733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.855 qpair failed and we were unable to recover it. 00:28:28.855 [2024-12-11 10:06:38.182988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.855 [2024-12-11 10:06:38.183019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.855 qpair failed and we were unable to recover it. 00:28:28.855 [2024-12-11 10:06:38.183320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.855 [2024-12-11 10:06:38.183352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.855 qpair failed and we were unable to recover it. 00:28:28.855 [2024-12-11 10:06:38.183567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.855 [2024-12-11 10:06:38.183599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.855 qpair failed and we were unable to recover it. 00:28:28.855 [2024-12-11 10:06:38.183850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.855 [2024-12-11 10:06:38.183881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.855 qpair failed and we were unable to recover it. 
00:28:28.855 [2024-12-11 10:06:38.184183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.855 [2024-12-11 10:06:38.184214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.855 qpair failed and we were unable to recover it. 00:28:28.855 [2024-12-11 10:06:38.184525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.855 [2024-12-11 10:06:38.184557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.855 qpair failed and we were unable to recover it. 00:28:28.855 [2024-12-11 10:06:38.184753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.855 [2024-12-11 10:06:38.184786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.855 qpair failed and we were unable to recover it. 00:28:28.855 [2024-12-11 10:06:38.185044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.855 [2024-12-11 10:06:38.185076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.855 qpair failed and we were unable to recover it. 00:28:28.855 [2024-12-11 10:06:38.185331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.855 [2024-12-11 10:06:38.185364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.855 qpair failed and we were unable to recover it. 00:28:28.855 [2024-12-11 10:06:38.185567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.855 [2024-12-11 10:06:38.185598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.855 qpair failed and we were unable to recover it. 00:28:28.855 [2024-12-11 10:06:38.185879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.855 [2024-12-11 10:06:38.185910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.855 qpair failed and we were unable to recover it. 00:28:28.855 [2024-12-11 10:06:38.186169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.855 [2024-12-11 10:06:38.186201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.855 qpair failed and we were unable to recover it. 00:28:28.855 [2024-12-11 10:06:38.186500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.855 [2024-12-11 10:06:38.186532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.855 qpair failed and we were unable to recover it. 00:28:28.855 [2024-12-11 10:06:38.186851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.855 [2024-12-11 10:06:38.186882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.855 qpair failed and we were unable to recover it. 
00:28:28.855 [2024-12-11 10:06:38.187085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.855 [2024-12-11 10:06:38.187117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.855 qpair failed and we were unable to recover it. 00:28:28.855 [2024-12-11 10:06:38.187369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.855 [2024-12-11 10:06:38.187407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.855 qpair failed and we were unable to recover it. 00:28:28.855 [2024-12-11 10:06:38.187590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.855 [2024-12-11 10:06:38.187622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.855 qpair failed and we were unable to recover it. 00:28:28.855 [2024-12-11 10:06:38.187856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.855 [2024-12-11 10:06:38.187888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.855 qpair failed and we were unable to recover it. 00:28:28.855 [2024-12-11 10:06:38.188014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.855 [2024-12-11 10:06:38.188045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.855 qpair failed and we were unable to recover it. 00:28:28.855 [2024-12-11 10:06:38.188241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.855 [2024-12-11 10:06:38.188274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.855 qpair failed and we were unable to recover it. 00:28:28.855 [2024-12-11 10:06:38.188539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.855 [2024-12-11 10:06:38.188572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.855 qpair failed and we were unable to recover it. 00:28:28.855 [2024-12-11 10:06:38.188843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.855 [2024-12-11 10:06:38.188875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.855 qpair failed and we were unable to recover it. 00:28:28.855 [2024-12-11 10:06:38.189153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.855 [2024-12-11 10:06:38.189187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.855 qpair failed and we were unable to recover it. 00:28:28.855 [2024-12-11 10:06:38.189507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.855 [2024-12-11 10:06:38.189578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.855 qpair failed and we were unable to recover it. 
00:28:28.855 [2024-12-11 10:06:38.189870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.855 [2024-12-11 10:06:38.189908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.855 qpair failed and we were unable to recover it. 00:28:28.855 [2024-12-11 10:06:38.190149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.855 [2024-12-11 10:06:38.190182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.855 qpair failed and we were unable to recover it. 00:28:28.855 [2024-12-11 10:06:38.190490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.855 [2024-12-11 10:06:38.190524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.855 qpair failed and we were unable to recover it. 00:28:28.855 [2024-12-11 10:06:38.190726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.855 [2024-12-11 10:06:38.190759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.855 qpair failed and we were unable to recover it. 00:28:28.855 [2024-12-11 10:06:38.190958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.855 [2024-12-11 10:06:38.190990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.855 qpair failed and we were unable to recover it. 00:28:28.855 [2024-12-11 10:06:38.191201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.855 [2024-12-11 10:06:38.191245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.855 qpair failed and we were unable to recover it. 00:28:28.855 [2024-12-11 10:06:38.191523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.855 [2024-12-11 10:06:38.191555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.855 qpair failed and we were unable to recover it. 00:28:28.855 [2024-12-11 10:06:38.191858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.856 [2024-12-11 10:06:38.191891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.856 qpair failed and we were unable to recover it. 00:28:28.856 [2024-12-11 10:06:38.192157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.856 [2024-12-11 10:06:38.192189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.856 qpair failed and we were unable to recover it. 00:28:28.856 [2024-12-11 10:06:38.192367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.856 [2024-12-11 10:06:38.192401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.856 qpair failed and we were unable to recover it. 
00:28:28.856 [2024-12-11 10:06:38.192599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.856 [2024-12-11 10:06:38.192630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.856 qpair failed and we were unable to recover it. 00:28:28.856 [2024-12-11 10:06:38.192874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.856 [2024-12-11 10:06:38.192905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.856 qpair failed and we were unable to recover it. 00:28:28.856 [2024-12-11 10:06:38.193136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.856 [2024-12-11 10:06:38.193167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.856 qpair failed and we were unable to recover it. 00:28:28.856 [2024-12-11 10:06:38.193321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.856 [2024-12-11 10:06:38.193354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.856 qpair failed and we were unable to recover it. 00:28:28.856 [2024-12-11 10:06:38.193616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.856 [2024-12-11 10:06:38.193648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.856 qpair failed and we were unable to recover it. 00:28:28.856 [2024-12-11 10:06:38.193789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.856 [2024-12-11 10:06:38.193820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.856 qpair failed and we were unable to recover it. 00:28:28.856 [2024-12-11 10:06:38.194075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.856 [2024-12-11 10:06:38.194107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.856 qpair failed and we were unable to recover it. 00:28:28.856 [2024-12-11 10:06:38.194384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.856 [2024-12-11 10:06:38.194418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.856 qpair failed and we were unable to recover it. 00:28:28.856 [2024-12-11 10:06:38.194701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.856 [2024-12-11 10:06:38.194733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.856 qpair failed and we were unable to recover it. 00:28:28.856 [2024-12-11 10:06:38.195017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.856 [2024-12-11 10:06:38.195049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.856 qpair failed and we were unable to recover it. 
00:28:28.856 [2024-12-11 10:06:38.195339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.856 [2024-12-11 10:06:38.195372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.856 qpair failed and we were unable to recover it. 00:28:28.856 [2024-12-11 10:06:38.195599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.856 [2024-12-11 10:06:38.195630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.856 qpair failed and we were unable to recover it. 00:28:28.856 [2024-12-11 10:06:38.195883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.856 [2024-12-11 10:06:38.195915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.856 qpair failed and we were unable to recover it. 00:28:28.856 [2024-12-11 10:06:38.196104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.856 [2024-12-11 10:06:38.196136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.856 qpair failed and we were unable to recover it. 00:28:28.856 [2024-12-11 10:06:38.196338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.856 [2024-12-11 10:06:38.196371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.856 qpair failed and we were unable to recover it. 00:28:28.856 [2024-12-11 10:06:38.196645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.856 [2024-12-11 10:06:38.196677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.856 qpair failed and we were unable to recover it. 00:28:28.856 [2024-12-11 10:06:38.196936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.856 [2024-12-11 10:06:38.196968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.856 qpair failed and we were unable to recover it. 00:28:28.856 [2024-12-11 10:06:38.197236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.856 [2024-12-11 10:06:38.197270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.856 qpair failed and we were unable to recover it. 00:28:28.856 [2024-12-11 10:06:38.197490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.856 [2024-12-11 10:06:38.197521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.856 qpair failed and we were unable to recover it. 00:28:28.856 [2024-12-11 10:06:38.197731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.856 [2024-12-11 10:06:38.197762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.856 qpair failed and we were unable to recover it. 
00:28:28.856 [2024-12-11 10:06:38.198033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.856 [2024-12-11 10:06:38.198065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.856 qpair failed and we were unable to recover it. 00:28:28.856 [2024-12-11 10:06:38.198259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.856 [2024-12-11 10:06:38.198299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.856 qpair failed and we were unable to recover it. 00:28:28.856 [2024-12-11 10:06:38.198605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.856 [2024-12-11 10:06:38.198637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.856 qpair failed and we were unable to recover it. 00:28:28.856 [2024-12-11 10:06:38.198931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.856 [2024-12-11 10:06:38.198962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.856 qpair failed and we were unable to recover it. 00:28:28.856 [2024-12-11 10:06:38.199240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.856 [2024-12-11 10:06:38.199273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.856 qpair failed and we were unable to recover it. 00:28:28.856 [2024-12-11 10:06:38.199564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.856 [2024-12-11 10:06:38.199596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.856 qpair failed and we were unable to recover it. 00:28:28.856 [2024-12-11 10:06:38.199776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.856 [2024-12-11 10:06:38.199807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.856 qpair failed and we were unable to recover it. 00:28:28.856 [2024-12-11 10:06:38.200077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.856 [2024-12-11 10:06:38.200109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.856 qpair failed and we were unable to recover it. 00:28:28.856 [2024-12-11 10:06:38.200305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.856 [2024-12-11 10:06:38.200338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.856 qpair failed and we were unable to recover it. 00:28:28.856 [2024-12-11 10:06:38.200598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.856 [2024-12-11 10:06:38.200629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.856 qpair failed and we were unable to recover it. 
00:28:28.856 [2024-12-11 10:06:38.200787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.856 [2024-12-11 10:06:38.200819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.856 qpair failed and we were unable to recover it.
00:28:28.856 [2024-12-11 10:06:38.201042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.856 [2024-12-11 10:06:38.201075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.856 qpair failed and we were unable to recover it.
00:28:28.856 [2024-12-11 10:06:38.201297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.856 [2024-12-11 10:06:38.201330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.856 qpair failed and we were unable to recover it.
00:28:28.856 [2024-12-11 10:06:38.201581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.856 [2024-12-11 10:06:38.201613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.856 qpair failed and we were unable to recover it.
00:28:28.856 [2024-12-11 10:06:38.201913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.856 [2024-12-11 10:06:38.201945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.856 qpair failed and we were unable to recover it.
00:28:28.856 [2024-12-11 10:06:38.202165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.856 [2024-12-11 10:06:38.202197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.856 qpair failed and we were unable to recover it.
00:28:28.856 [2024-12-11 10:06:38.202462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.857 [2024-12-11 10:06:38.202495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.857 qpair failed and we were unable to recover it.
00:28:28.857 [2024-12-11 10:06:38.202725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.857 [2024-12-11 10:06:38.202757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.857 qpair failed and we were unable to recover it.
00:28:28.857 [2024-12-11 10:06:38.202952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.857 [2024-12-11 10:06:38.202983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.857 qpair failed and we were unable to recover it.
00:28:28.857 [2024-12-11 10:06:38.203241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.857 [2024-12-11 10:06:38.203275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.857 qpair failed and we were unable to recover it.
00:28:28.857 [2024-12-11 10:06:38.203489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.857 [2024-12-11 10:06:38.203521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.857 qpair failed and we were unable to recover it.
00:28:28.857 [2024-12-11 10:06:38.203776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.857 [2024-12-11 10:06:38.203808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.857 qpair failed and we were unable to recover it.
00:28:28.857 [2024-12-11 10:06:38.204106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.857 [2024-12-11 10:06:38.204137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.857 qpair failed and we were unable to recover it.
00:28:28.857 [2024-12-11 10:06:38.204366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.857 [2024-12-11 10:06:38.204398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.857 qpair failed and we were unable to recover it.
00:28:28.857 [2024-12-11 10:06:38.204554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.857 [2024-12-11 10:06:38.204586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.857 qpair failed and we were unable to recover it.
00:28:28.857 [2024-12-11 10:06:38.204795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.857 [2024-12-11 10:06:38.204827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.857 qpair failed and we were unable to recover it.
00:28:28.857 [2024-12-11 10:06:38.205047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.857 [2024-12-11 10:06:38.205079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.857 qpair failed and we were unable to recover it.
00:28:28.857 [2024-12-11 10:06:38.205275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.857 [2024-12-11 10:06:38.205308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.857 qpair failed and we were unable to recover it.
00:28:28.857 [2024-12-11 10:06:38.205596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.857 [2024-12-11 10:06:38.205629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.857 qpair failed and we were unable to recover it.
00:28:28.857 [2024-12-11 10:06:38.205845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.857 [2024-12-11 10:06:38.205877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.857 qpair failed and we were unable to recover it.
00:28:28.857 [2024-12-11 10:06:38.206099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.857 [2024-12-11 10:06:38.206130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.857 qpair failed and we were unable to recover it.
00:28:28.857 [2024-12-11 10:06:38.206326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.857 [2024-12-11 10:06:38.206359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.857 qpair failed and we were unable to recover it.
00:28:28.857 [2024-12-11 10:06:38.206547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.857 [2024-12-11 10:06:38.206580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.857 qpair failed and we were unable to recover it.
00:28:28.857 [2024-12-11 10:06:38.206832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.857 [2024-12-11 10:06:38.206864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.857 qpair failed and we were unable to recover it.
00:28:28.857 [2024-12-11 10:06:38.207064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.857 [2024-12-11 10:06:38.207096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.857 qpair failed and we were unable to recover it.
00:28:28.857 [2024-12-11 10:06:38.207241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.857 [2024-12-11 10:06:38.207276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.857 qpair failed and we were unable to recover it.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 250287 Killed "${NVMF_APP[@]}" "$@"
00:28:28.857 [2024-12-11 10:06:38.207571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.857 [2024-12-11 10:06:38.207603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.857 qpair failed and we were unable to recover it.
00:28:28.857 [2024-12-11 10:06:38.207810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.857 [2024-12-11 10:06:38.207842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.857 qpair failed and we were unable to recover it.
00:28:28.857 [2024-12-11 10:06:38.208045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.857 [2024-12-11 10:06:38.208077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.857 qpair failed and we were unable to recover it.
00:28:28.857 10:06:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
[2024-12-11 10:06:38.208337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.857 [2024-12-11 10:06:38.208370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.857 qpair failed and we were unable to recover it.
00:28:28.857 [2024-12-11 10:06:38.208591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.857 [2024-12-11 10:06:38.208630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.857 10:06:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:28:28.857 qpair failed and we were unable to recover it.
00:28:28.857 [2024-12-11 10:06:38.208861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.857 [2024-12-11 10:06:38.208893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.857 qpair failed and we were unable to recover it.
00:28:28.857 10:06:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:28:28.857 [2024-12-11 10:06:38.209075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.857 [2024-12-11 10:06:38.209107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.857 qpair failed and we were unable to recover it.
00:28:28.857 10:06:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:28.857 [2024-12-11 10:06:38.209381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.857 [2024-12-11 10:06:38.209415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.857 qpair failed and we were unable to recover it.
00:28:28.857 10:06:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:28.857 [2024-12-11 10:06:38.209624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.857 [2024-12-11 10:06:38.209657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.857 qpair failed and we were unable to recover it.
00:28:28.857 [2024-12-11 10:06:38.209844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.857 [2024-12-11 10:06:38.209876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.857 qpair failed and we were unable to recover it.
00:28:28.857 [2024-12-11 10:06:38.210153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.857 [2024-12-11 10:06:38.210185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.857 qpair failed and we were unable to recover it.
00:28:28.857 [2024-12-11 10:06:38.210376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.857 [2024-12-11 10:06:38.210425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:28.857 qpair failed and we were unable to recover it.
00:28:28.857 [2024-12-11 10:06:38.210641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.857 [2024-12-11 10:06:38.210675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:28.857 qpair failed and we were unable to recover it.
00:28:28.857 [2024-12-11 10:06:38.210957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.857 [2024-12-11 10:06:38.210990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:28.857 qpair failed and we were unable to recover it.
00:28:28.857 [2024-12-11 10:06:38.211250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.857 [2024-12-11 10:06:38.211283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:28.857 qpair failed and we were unable to recover it.
00:28:28.857 [2024-12-11 10:06:38.211495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.858 [2024-12-11 10:06:38.211527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420
00:28:28.858 qpair failed and we were unable to recover it.
00:28:28.858 [2024-12-11 10:06:38.211860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.858 [2024-12-11 10:06:38.211937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:28.858 qpair failed and we were unable to recover it.
00:28:28.858 [2024-12-11 10:06:38.212195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.858 [2024-12-11 10:06:38.212247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.858 qpair failed and we were unable to recover it.
00:28:28.858 [2024-12-11 10:06:38.212491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.858 [2024-12-11 10:06:38.212524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.858 qpair failed and we were unable to recover it.
00:28:28.858 [2024-12-11 10:06:38.212829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.858 [2024-12-11 10:06:38.212862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.858 qpair failed and we were unable to recover it.
00:28:28.858 [2024-12-11 10:06:38.213096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.858 [2024-12-11 10:06:38.213128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.858 qpair failed and we were unable to recover it.
00:28:28.858 [2024-12-11 10:06:38.213457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.858 [2024-12-11 10:06:38.213491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.858 qpair failed and we were unable to recover it.
00:28:28.858 [2024-12-11 10:06:38.213689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.858 [2024-12-11 10:06:38.213723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.858 qpair failed and we were unable to recover it.
00:28:28.858 [2024-12-11 10:06:38.213987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.858 [2024-12-11 10:06:38.214022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.858 qpair failed and we were unable to recover it.
00:28:28.858 [2024-12-11 10:06:38.214331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.858 [2024-12-11 10:06:38.214362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.858 qpair failed and we were unable to recover it.
00:28:28.858 [2024-12-11 10:06:38.214568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.858 [2024-12-11 10:06:38.214599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.858 qpair failed and we were unable to recover it.
00:28:28.858 [2024-12-11 10:06:38.214918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.858 [2024-12-11 10:06:38.214952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.858 qpair failed and we were unable to recover it.
00:28:28.858 [2024-12-11 10:06:38.215238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.858 [2024-12-11 10:06:38.215271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.858 qpair failed and we were unable to recover it.
00:28:28.858 [2024-12-11 10:06:38.215478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.858 [2024-12-11 10:06:38.215509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.858 qpair failed and we were unable to recover it.
00:28:28.858 [2024-12-11 10:06:38.215648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.858 [2024-12-11 10:06:38.215687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.858 qpair failed and we were unable to recover it.
00:28:28.858 [2024-12-11 10:06:38.215911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.858 [2024-12-11 10:06:38.215944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.858 qpair failed and we were unable to recover it.
00:28:28.858 [2024-12-11 10:06:38.216177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.858 [2024-12-11 10:06:38.216209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.858 qpair failed and we were unable to recover it.
00:28:28.858 [2024-12-11 10:06:38.216417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.858 [2024-12-11 10:06:38.216450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.858 qpair failed and we were unable to recover it.
00:28:28.858 [2024-12-11 10:06:38.216701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.858 [2024-12-11 10:06:38.216733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.858 qpair failed and we were unable to recover it.
00:28:28.858 [2024-12-11 10:06:38.216868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.858 [2024-12-11 10:06:38.216900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.858 qpair failed and we were unable to recover it.
00:28:28.858 10:06:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=250994
00:28:28.858 [2024-12-11 10:06:38.217177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.858 [2024-12-11 10:06:38.217214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.858 qpair failed and we were unable to recover it.
00:28:28.858 10:06:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 250994
00:28:28.858 [2024-12-11 10:06:38.217418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.858 [2024-12-11 10:06:38.217455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.858 qpair failed and we were unable to recover it.
00:28:28.858 10:06:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:28:28.858 [2024-12-11 10:06:38.217741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.858 [2024-12-11 10:06:38.217775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.858 qpair failed and we were unable to recover it.
00:28:28.858 10:06:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 250994 ']'
00:28:28.858 [2024-12-11 10:06:38.218054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.858 [2024-12-11 10:06:38.218087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.858 qpair failed and we were unable to recover it.
00:28:28.858 10:06:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
[2024-12-11 10:06:38.218305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.858 [2024-12-11 10:06:38.218340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.858 qpair failed and we were unable to recover it.
00:28:28.858 10:06:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
[2024-12-11 10:06:38.218550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.858 [2024-12-11 10:06:38.218585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.858 qpair failed and we were unable to recover it.
00:28:28.858 10:06:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
[2024-12-11 10:06:38.218893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.858 [2024-12-11 10:06:38.218928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.858 qpair failed and we were unable to recover it.
00:28:28.858 10:06:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
[2024-12-11 10:06:38.219199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.858 [2024-12-11 10:06:38.219260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.858 qpair failed and we were unable to recover it.
00:28:28.858 10:06:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[2024-12-11 10:06:38.219549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.858 [2024-12-11 10:06:38.219589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.858 qpair failed and we were unable to recover it.
00:28:28.858 [2024-12-11 10:06:38.219786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.858 [2024-12-11 10:06:38.219822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.858 qpair failed and we were unable to recover it.
00:28:28.858 [2024-12-11 10:06:38.220081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.858 [2024-12-11 10:06:38.220113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.858 qpair failed and we were unable to recover it.
00:28:28.858 [2024-12-11 10:06:38.220319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.858 [2024-12-11 10:06:38.220351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.858 qpair failed and we were unable to recover it.
00:28:28.858 [2024-12-11 10:06:38.220545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.858 [2024-12-11 10:06:38.220577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.858 qpair failed and we were unable to recover it.
00:28:28.858 [2024-12-11 10:06:38.220727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.858 [2024-12-11 10:06:38.220760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.859 qpair failed and we were unable to recover it.
00:28:28.859 [2024-12-11 10:06:38.220989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.859 [2024-12-11 10:06:38.221022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.859 qpair failed and we were unable to recover it.
00:28:28.859 [2024-12-11 10:06:38.221298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.859 [2024-12-11 10:06:38.221337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.859 qpair failed and we were unable to recover it.
00:28:28.859 [2024-12-11 10:06:38.221534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.859 [2024-12-11 10:06:38.221573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.859 qpair failed and we were unable to recover it.
00:28:28.859 [2024-12-11 10:06:38.221728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.859 [2024-12-11 10:06:38.221761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.859 qpair failed and we were unable to recover it.
00:28:28.859 [2024-12-11 10:06:38.222092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.859 [2024-12-11 10:06:38.222125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.859 qpair failed and we were unable to recover it.
00:28:28.859 [2024-12-11 10:06:38.222344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.859 [2024-12-11 10:06:38.222377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.859 qpair failed and we were unable to recover it.
00:28:28.859 [2024-12-11 10:06:38.222593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.859 [2024-12-11 10:06:38.222629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.859 qpair failed and we were unable to recover it.
00:28:28.859 [2024-12-11 10:06:38.222835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.859 [2024-12-11 10:06:38.222868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.859 qpair failed and we were unable to recover it.
00:28:28.859 [2024-12-11 10:06:38.222991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.859 [2024-12-11 10:06:38.223024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.859 qpair failed and we were unable to recover it.
00:28:28.859 [2024-12-11 10:06:38.223226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.859 [2024-12-11 10:06:38.223259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.859 qpair failed and we were unable to recover it.
00:28:28.859 [2024-12-11 10:06:38.223514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.859 [2024-12-11 10:06:38.223547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.859 qpair failed and we were unable to recover it.
00:28:28.859 [2024-12-11 10:06:38.223805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.859 [2024-12-11 10:06:38.223838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.859 qpair failed and we were unable to recover it.
00:28:28.859 [2024-12-11 10:06:38.224096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.859 [2024-12-11 10:06:38.224129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.859 qpair failed and we were unable to recover it.
00:28:28.859 [2024-12-11 10:06:38.224371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.859 [2024-12-11 10:06:38.224405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.859 qpair failed and we were unable to recover it.
00:28:28.859 [2024-12-11 10:06:38.224686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.859 [2024-12-11 10:06:38.224719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.859 qpair failed and we were unable to recover it.
00:28:28.859 [2024-12-11 10:06:38.225028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.859 [2024-12-11 10:06:38.225060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.859 qpair failed and we were unable to recover it.
00:28:28.859 [2024-12-11 10:06:38.225215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.859 [2024-12-11 10:06:38.225263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.859 qpair failed and we were unable to recover it.
00:28:28.859 [2024-12-11 10:06:38.225461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.859 [2024-12-11 10:06:38.225493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.859 qpair failed and we were unable to recover it.
00:28:28.859 [2024-12-11 10:06:38.225654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.859 [2024-12-11 10:06:38.225686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.859 qpair failed and we were unable to recover it.
00:28:28.859 [2024-12-11 10:06:38.225915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.859 [2024-12-11 10:06:38.225948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.859 qpair failed and we were unable to recover it.
00:28:28.859 [2024-12-11 10:06:38.226068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.859 [2024-12-11 10:06:38.226100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.859 qpair failed and we were unable to recover it.
00:28:28.859 [2024-12-11 10:06:38.226393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.859 [2024-12-11 10:06:38.226427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.859 qpair failed and we were unable to recover it.
00:28:28.859 [2024-12-11 10:06:38.226625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.859 [2024-12-11 10:06:38.226657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.859 qpair failed and we were unable to recover it.
00:28:28.859 [2024-12-11 10:06:38.226932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.859 [2024-12-11 10:06:38.226965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.859 qpair failed and we were unable to recover it.
00:28:28.859 [2024-12-11 10:06:38.227272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.859 [2024-12-11 10:06:38.227305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.859 qpair failed and we were unable to recover it.
00:28:28.859 [2024-12-11 10:06:38.227452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.859 [2024-12-11 10:06:38.227484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.859 qpair failed and we were unable to recover it.
00:28:28.859 [2024-12-11 10:06:38.227740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.859 [2024-12-11 10:06:38.227773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.859 qpair failed and we were unable to recover it.
00:28:28.859 [2024-12-11 10:06:38.227969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.859 [2024-12-11 10:06:38.228002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.859 qpair failed and we were unable to recover it.
00:28:28.859 [2024-12-11 10:06:38.228235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.860 [2024-12-11 10:06:38.228270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.860 qpair failed and we were unable to recover it.
00:28:28.860 [2024-12-11 10:06:38.228490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.860 [2024-12-11 10:06:38.228524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.860 qpair failed and we were unable to recover it.
00:28:28.860 [2024-12-11 10:06:38.228778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.860 [2024-12-11 10:06:38.228810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.860 qpair failed and we were unable to recover it.
00:28:28.860 [2024-12-11 10:06:38.229071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.860 [2024-12-11 10:06:38.229107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.860 qpair failed and we were unable to recover it.
00:28:28.860 [2024-12-11 10:06:38.229302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.860 [2024-12-11 10:06:38.229335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.860 qpair failed and we were unable to recover it.
00:28:28.860 [2024-12-11 10:06:38.229472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.860 [2024-12-11 10:06:38.229504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.860 qpair failed and we were unable to recover it.
00:28:28.860 [2024-12-11 10:06:38.229753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.860 [2024-12-11 10:06:38.229785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.860 qpair failed and we were unable to recover it.
00:28:28.860 [2024-12-11 10:06:38.230077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.860 [2024-12-11 10:06:38.230110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.860 qpair failed and we were unable to recover it.
00:28:28.860 [2024-12-11 10:06:38.230406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.860 [2024-12-11 10:06:38.230439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.860 qpair failed and we were unable to recover it.
00:28:28.860 [2024-12-11 10:06:38.230580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.860 [2024-12-11 10:06:38.230613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.860 qpair failed and we were unable to recover it.
00:28:28.860 [2024-12-11 10:06:38.230838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.860 [2024-12-11 10:06:38.230870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.860 qpair failed and we were unable to recover it.
00:28:28.860 [2024-12-11 10:06:38.231132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.860 [2024-12-11 10:06:38.231169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.860 qpair failed and we were unable to recover it.
00:28:28.860 [2024-12-11 10:06:38.231319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.860 [2024-12-11 10:06:38.231354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.860 qpair failed and we were unable to recover it.
00:28:28.860 [2024-12-11 10:06:38.231610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.860 [2024-12-11 10:06:38.231646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.860 qpair failed and we were unable to recover it.
00:28:28.860 [2024-12-11 10:06:38.231857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.860 [2024-12-11 10:06:38.231896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.860 qpair failed and we were unable to recover it.
00:28:28.860 [2024-12-11 10:06:38.232031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.860 [2024-12-11 10:06:38.232066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.860 qpair failed and we were unable to recover it.
00:28:28.860 [2024-12-11 10:06:38.232285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.860 [2024-12-11 10:06:38.232318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.860 qpair failed and we were unable to recover it.
00:28:28.860 [2024-12-11 10:06:38.232528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.860 [2024-12-11 10:06:38.232560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.860 qpair failed and we were unable to recover it.
00:28:28.860 [2024-12-11 10:06:38.232782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.860 [2024-12-11 10:06:38.232814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.860 qpair failed and we were unable to recover it.
00:28:28.860 [2024-12-11 10:06:38.233073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.860 [2024-12-11 10:06:38.233105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.860 qpair failed and we were unable to recover it.
00:28:28.860 [2024-12-11 10:06:38.233296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.860 [2024-12-11 10:06:38.233328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.860 qpair failed and we were unable to recover it.
00:28:28.860 [2024-12-11 10:06:38.233584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.860 [2024-12-11 10:06:38.233618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.860 qpair failed and we were unable to recover it.
00:28:28.860 [2024-12-11 10:06:38.233927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.860 [2024-12-11 10:06:38.233959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.860 qpair failed and we were unable to recover it.
00:28:28.860 [2024-12-11 10:06:38.234182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.860 [2024-12-11 10:06:38.234215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.860 qpair failed and we were unable to recover it.
00:28:28.860 [2024-12-11 10:06:38.234493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.860 [2024-12-11 10:06:38.234525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.860 qpair failed and we were unable to recover it.
00:28:28.860 [2024-12-11 10:06:38.234671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.860 [2024-12-11 10:06:38.234707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.860 qpair failed and we were unable to recover it.
00:28:28.860 [2024-12-11 10:06:38.234843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.860 [2024-12-11 10:06:38.234875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.860 qpair failed and we were unable to recover it.
00:28:28.860 [2024-12-11 10:06:38.235021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.860 [2024-12-11 10:06:38.235053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.860 qpair failed and we were unable to recover it.
00:28:28.860 [2024-12-11 10:06:38.235344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.860 [2024-12-11 10:06:38.235379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.860 qpair failed and we were unable to recover it.
00:28:28.860 [2024-12-11 10:06:38.235573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.860 [2024-12-11 10:06:38.235605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.860 qpair failed and we were unable to recover it.
00:28:28.860 [2024-12-11 10:06:38.235880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.860 [2024-12-11 10:06:38.235913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.860 qpair failed and we were unable to recover it.
00:28:28.860 [2024-12-11 10:06:38.236182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.860 [2024-12-11 10:06:38.236213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.860 qpair failed and we were unable to recover it.
00:28:28.860 [2024-12-11 10:06:38.236379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.860 [2024-12-11 10:06:38.236411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.860 qpair failed and we were unable to recover it.
00:28:28.860 [2024-12-11 10:06:38.236615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.861 [2024-12-11 10:06:38.236646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.861 qpair failed and we were unable to recover it.
00:28:28.861 [2024-12-11 10:06:38.236948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.861 [2024-12-11 10:06:38.236980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.861 qpair failed and we were unable to recover it.
00:28:28.861 [2024-12-11 10:06:38.237245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.861 [2024-12-11 10:06:38.237278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.861 qpair failed and we were unable to recover it.
00:28:28.861 [2024-12-11 10:06:38.237466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.861 [2024-12-11 10:06:38.237497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.861 qpair failed and we were unable to recover it.
00:28:28.861 [2024-12-11 10:06:38.237775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.861 [2024-12-11 10:06:38.237806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.861 qpair failed and we were unable to recover it.
00:28:28.861 [2024-12-11 10:06:38.238007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.861 [2024-12-11 10:06:38.238038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.861 qpair failed and we were unable to recover it.
00:28:28.861 [2024-12-11 10:06:38.238241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.861 [2024-12-11 10:06:38.238277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.861 qpair failed and we were unable to recover it.
00:28:28.861 [2024-12-11 10:06:38.238460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.861 [2024-12-11 10:06:38.238494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.861 qpair failed and we were unable to recover it.
00:28:28.861 [2024-12-11 10:06:38.238668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.861 [2024-12-11 10:06:38.238703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.861 qpair failed and we were unable to recover it.
00:28:28.861 [2024-12-11 10:06:38.238936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.861 [2024-12-11 10:06:38.238970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.861 qpair failed and we were unable to recover it.
00:28:28.861 [2024-12-11 10:06:38.239156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.861 [2024-12-11 10:06:38.239189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.861 qpair failed and we were unable to recover it.
00:28:28.861 [2024-12-11 10:06:38.239401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.861 [2024-12-11 10:06:38.239433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.861 qpair failed and we were unable to recover it.
00:28:28.861 [2024-12-11 10:06:38.239635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.861 [2024-12-11 10:06:38.239667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.861 qpair failed and we were unable to recover it.
00:28:28.861 [2024-12-11 10:06:38.239945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.861 [2024-12-11 10:06:38.239976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.861 qpair failed and we were unable to recover it.
00:28:28.861 [2024-12-11 10:06:38.240237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.861 [2024-12-11 10:06:38.240270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.861 qpair failed and we were unable to recover it.
00:28:28.861 [2024-12-11 10:06:38.240454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.861 [2024-12-11 10:06:38.240487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.861 qpair failed and we were unable to recover it.
00:28:28.861 [2024-12-11 10:06:38.240699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.861 [2024-12-11 10:06:38.240732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.861 qpair failed and we were unable to recover it.
00:28:28.861 [2024-12-11 10:06:38.240940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.861 [2024-12-11 10:06:38.240972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.861 qpair failed and we were unable to recover it.
00:28:28.861 [2024-12-11 10:06:38.241187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.861 [2024-12-11 10:06:38.241229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.861 qpair failed and we were unable to recover it.
00:28:28.861 [2024-12-11 10:06:38.241359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.861 [2024-12-11 10:06:38.241391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.861 qpair failed and we were unable to recover it.
00:28:28.861 [2024-12-11 10:06:38.241606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.861 [2024-12-11 10:06:38.241637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.861 qpair failed and we were unable to recover it.
00:28:28.861 [2024-12-11 10:06:38.241783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.861 [2024-12-11 10:06:38.241821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.861 qpair failed and we were unable to recover it.
00:28:28.861 [2024-12-11 10:06:38.242027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.861 [2024-12-11 10:06:38.242059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.861 qpair failed and we were unable to recover it.
00:28:28.861 [2024-12-11 10:06:38.242265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.861 [2024-12-11 10:06:38.242297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.861 qpair failed and we were unable to recover it.
00:28:28.861 [2024-12-11 10:06:38.242443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.861 [2024-12-11 10:06:38.242476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.861 qpair failed and we were unable to recover it.
00:28:28.861 [2024-12-11 10:06:38.242698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.861 [2024-12-11 10:06:38.242730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:28.861 qpair failed and we were unable to recover it.
00:28:28.861 [2024-12-11 10:06:38.243010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.861 [2024-12-11 10:06:38.243041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.861 qpair failed and we were unable to recover it. 00:28:28.861 [2024-12-11 10:06:38.243246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.861 [2024-12-11 10:06:38.243279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.861 qpair failed and we were unable to recover it. 00:28:28.861 [2024-12-11 10:06:38.243415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.861 [2024-12-11 10:06:38.243446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.861 qpair failed and we were unable to recover it. 00:28:28.861 [2024-12-11 10:06:38.243682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.861 [2024-12-11 10:06:38.243713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.861 qpair failed and we were unable to recover it. 00:28:28.861 [2024-12-11 10:06:38.243982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.861 [2024-12-11 10:06:38.244013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.861 qpair failed and we were unable to recover it. 00:28:28.861 [2024-12-11 10:06:38.244261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.861 [2024-12-11 10:06:38.244294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.861 qpair failed and we were unable to recover it. 00:28:28.861 [2024-12-11 10:06:38.244553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.861 [2024-12-11 10:06:38.244585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.861 qpair failed and we were unable to recover it. 00:28:28.861 [2024-12-11 10:06:38.244782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.861 [2024-12-11 10:06:38.244814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.861 qpair failed and we were unable to recover it. 00:28:28.861 [2024-12-11 10:06:38.245018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.861 [2024-12-11 10:06:38.245050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.861 qpair failed and we were unable to recover it. 00:28:28.862 [2024-12-11 10:06:38.245288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.862 [2024-12-11 10:06:38.245321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.862 qpair failed and we were unable to recover it. 
00:28:28.862 [2024-12-11 10:06:38.245455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.862 [2024-12-11 10:06:38.245487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.862 qpair failed and we were unable to recover it. 00:28:28.862 [2024-12-11 10:06:38.245765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.862 [2024-12-11 10:06:38.245796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.862 qpair failed and we were unable to recover it. 00:28:28.862 [2024-12-11 10:06:38.245950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.862 [2024-12-11 10:06:38.245983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.862 qpair failed and we were unable to recover it. 00:28:28.862 [2024-12-11 10:06:38.246117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.862 [2024-12-11 10:06:38.246148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.862 qpair failed and we were unable to recover it. 00:28:28.862 [2024-12-11 10:06:38.246305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.862 [2024-12-11 10:06:38.246337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.862 qpair failed and we were unable to recover it. 00:28:28.862 [2024-12-11 10:06:38.246557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.862 [2024-12-11 10:06:38.246589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.862 qpair failed and we were unable to recover it. 00:28:28.862 [2024-12-11 10:06:38.246841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.862 [2024-12-11 10:06:38.246872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.862 qpair failed and we were unable to recover it. 00:28:28.862 [2024-12-11 10:06:38.247055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.862 [2024-12-11 10:06:38.247087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.862 qpair failed and we were unable to recover it. 00:28:28.862 [2024-12-11 10:06:38.247282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.862 [2024-12-11 10:06:38.247315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.862 qpair failed and we were unable to recover it. 00:28:28.862 [2024-12-11 10:06:38.247513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.862 [2024-12-11 10:06:38.247545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.862 qpair failed and we were unable to recover it. 
00:28:28.862 [2024-12-11 10:06:38.247661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.862 [2024-12-11 10:06:38.247692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.862 qpair failed and we were unable to recover it. 00:28:28.862 [2024-12-11 10:06:38.247952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.862 [2024-12-11 10:06:38.247985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.862 qpair failed and we were unable to recover it. 00:28:28.862 [2024-12-11 10:06:38.248118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.862 [2024-12-11 10:06:38.248151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.862 qpair failed and we were unable to recover it. 00:28:28.862 [2024-12-11 10:06:38.248285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.862 [2024-12-11 10:06:38.248319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.862 qpair failed and we were unable to recover it. 00:28:28.862 [2024-12-11 10:06:38.248457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.862 [2024-12-11 10:06:38.248486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.862 qpair failed and we were unable to recover it. 00:28:28.862 [2024-12-11 10:06:38.248673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.862 [2024-12-11 10:06:38.248704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.862 qpair failed and we were unable to recover it. 00:28:28.862 [2024-12-11 10:06:38.248995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.862 [2024-12-11 10:06:38.249028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.862 qpair failed and we were unable to recover it. 00:28:28.862 [2024-12-11 10:06:38.249188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.862 [2024-12-11 10:06:38.249227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.862 qpair failed and we were unable to recover it. 00:28:28.862 [2024-12-11 10:06:38.249429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.862 [2024-12-11 10:06:38.249461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.862 qpair failed and we were unable to recover it. 00:28:28.862 [2024-12-11 10:06:38.249733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.862 [2024-12-11 10:06:38.249763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.862 qpair failed and we were unable to recover it. 
00:28:28.862 [2024-12-11 10:06:38.249904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.862 [2024-12-11 10:06:38.249936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.862 qpair failed and we were unable to recover it. 00:28:28.862 [2024-12-11 10:06:38.250131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.862 [2024-12-11 10:06:38.250164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.862 qpair failed and we were unable to recover it. 00:28:28.862 [2024-12-11 10:06:38.250369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.862 [2024-12-11 10:06:38.250402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.862 qpair failed and we were unable to recover it. 00:28:28.862 [2024-12-11 10:06:38.250624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.862 [2024-12-11 10:06:38.250656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.862 qpair failed and we were unable to recover it. 00:28:28.862 [2024-12-11 10:06:38.250790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.862 [2024-12-11 10:06:38.250822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.862 qpair failed and we were unable to recover it. 00:28:28.862 [2024-12-11 10:06:38.250949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.862 [2024-12-11 10:06:38.250988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.862 qpair failed and we were unable to recover it. 00:28:28.862 [2024-12-11 10:06:38.251138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.862 [2024-12-11 10:06:38.251170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.862 qpair failed and we were unable to recover it. 00:28:28.862 [2024-12-11 10:06:38.251375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.862 [2024-12-11 10:06:38.251407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.862 qpair failed and we were unable to recover it. 00:28:28.862 [2024-12-11 10:06:38.251539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.862 [2024-12-11 10:06:38.251569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.862 qpair failed and we were unable to recover it. 00:28:28.862 [2024-12-11 10:06:38.251776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.862 [2024-12-11 10:06:38.251808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.862 qpair failed and we were unable to recover it. 
00:28:28.862 [2024-12-11 10:06:38.251929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.862 [2024-12-11 10:06:38.251960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.862 qpair failed and we were unable to recover it. 00:28:28.862 [2024-12-11 10:06:38.252141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.862 [2024-12-11 10:06:38.252172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.862 qpair failed and we were unable to recover it. 00:28:28.862 [2024-12-11 10:06:38.252428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.862 [2024-12-11 10:06:38.252463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.862 qpair failed and we were unable to recover it. 00:28:28.862 [2024-12-11 10:06:38.252657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.863 [2024-12-11 10:06:38.252688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.863 qpair failed and we were unable to recover it. 00:28:28.863 [2024-12-11 10:06:38.252884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.863 [2024-12-11 10:06:38.252915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.863 qpair failed and we were unable to recover it. 00:28:28.863 [2024-12-11 10:06:38.253074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.863 [2024-12-11 10:06:38.253107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.863 qpair failed and we were unable to recover it. 00:28:28.863 [2024-12-11 10:06:38.253329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.863 [2024-12-11 10:06:38.253362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.863 qpair failed and we were unable to recover it. 00:28:28.863 [2024-12-11 10:06:38.253549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.863 [2024-12-11 10:06:38.253581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.863 qpair failed and we were unable to recover it. 00:28:28.863 [2024-12-11 10:06:38.253779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.863 [2024-12-11 10:06:38.253811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.863 qpair failed and we were unable to recover it. 00:28:28.863 [2024-12-11 10:06:38.254079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.863 [2024-12-11 10:06:38.254112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.863 qpair failed and we were unable to recover it. 
00:28:28.863 [2024-12-11 10:06:38.254242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.863 [2024-12-11 10:06:38.254273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.863 qpair failed and we were unable to recover it. 00:28:28.863 [2024-12-11 10:06:38.254541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.863 [2024-12-11 10:06:38.254574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.863 qpair failed and we were unable to recover it. 00:28:28.863 [2024-12-11 10:06:38.254730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.863 [2024-12-11 10:06:38.254761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.863 qpair failed and we were unable to recover it. 00:28:28.863 [2024-12-11 10:06:38.254891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.863 [2024-12-11 10:06:38.254922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.863 qpair failed and we were unable to recover it. 00:28:28.863 [2024-12-11 10:06:38.255056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.863 [2024-12-11 10:06:38.255090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.863 qpair failed and we were unable to recover it. 00:28:28.863 [2024-12-11 10:06:38.255313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.863 [2024-12-11 10:06:38.255348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.863 qpair failed and we were unable to recover it. 00:28:28.863 [2024-12-11 10:06:38.255567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.863 [2024-12-11 10:06:38.255604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.863 qpair failed and we were unable to recover it. 00:28:28.863 [2024-12-11 10:06:38.255836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.863 [2024-12-11 10:06:38.255869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.863 qpair failed and we were unable to recover it. 00:28:28.863 [2024-12-11 10:06:38.256080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.863 [2024-12-11 10:06:38.256113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.863 qpair failed and we were unable to recover it. 00:28:28.863 [2024-12-11 10:06:38.256417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.863 [2024-12-11 10:06:38.256450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.863 qpair failed and we were unable to recover it. 
00:28:28.863 [2024-12-11 10:06:38.256660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.863 [2024-12-11 10:06:38.256692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.863 qpair failed and we were unable to recover it. 00:28:28.863 [2024-12-11 10:06:38.256824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.863 [2024-12-11 10:06:38.256856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.863 qpair failed and we were unable to recover it. 00:28:28.863 [2024-12-11 10:06:38.257064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.863 [2024-12-11 10:06:38.257097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.863 qpair failed and we were unable to recover it. 00:28:28.863 [2024-12-11 10:06:38.257285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.863 [2024-12-11 10:06:38.257318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.863 qpair failed and we were unable to recover it. 00:28:28.863 [2024-12-11 10:06:38.257592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.863 [2024-12-11 10:06:38.257623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.863 qpair failed and we were unable to recover it. 00:28:28.863 [2024-12-11 10:06:38.257759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.863 [2024-12-11 10:06:38.257792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.863 qpair failed and we were unable to recover it. 00:28:28.863 [2024-12-11 10:06:38.257931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.863 [2024-12-11 10:06:38.257965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.863 qpair failed and we were unable to recover it. 00:28:28.863 [2024-12-11 10:06:38.258154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.863 [2024-12-11 10:06:38.258186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.863 qpair failed and we were unable to recover it. 00:28:28.863 [2024-12-11 10:06:38.258338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.863 [2024-12-11 10:06:38.258369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.863 qpair failed and we were unable to recover it. 00:28:28.863 [2024-12-11 10:06:38.258561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.863 [2024-12-11 10:06:38.258593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.863 qpair failed and we were unable to recover it. 
00:28:28.863 [2024-12-11 10:06:38.258778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.863 [2024-12-11 10:06:38.258810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.863 qpair failed and we were unable to recover it. 00:28:28.863 [2024-12-11 10:06:38.259005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.863 [2024-12-11 10:06:38.259037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.863 qpair failed and we were unable to recover it. 00:28:28.863 [2024-12-11 10:06:38.259246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.863 [2024-12-11 10:06:38.259279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.863 qpair failed and we were unable to recover it. 00:28:28.863 [2024-12-11 10:06:38.259558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.863 [2024-12-11 10:06:38.259590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.863 qpair failed and we were unable to recover it. 00:28:28.863 [2024-12-11 10:06:38.259705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.863 [2024-12-11 10:06:38.259736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.863 qpair failed and we were unable to recover it. 00:28:28.863 [2024-12-11 10:06:38.259863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.863 [2024-12-11 10:06:38.259906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.863 qpair failed and we were unable to recover it. 00:28:28.863 [2024-12-11 10:06:38.260035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.863 [2024-12-11 10:06:38.260066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.863 qpair failed and we were unable to recover it. 00:28:28.863 [2024-12-11 10:06:38.260290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.863 [2024-12-11 10:06:38.260324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.864 qpair failed and we were unable to recover it. 00:28:28.864 [2024-12-11 10:06:38.260601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.864 [2024-12-11 10:06:38.260632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.864 qpair failed and we were unable to recover it. 00:28:28.864 [2024-12-11 10:06:38.260762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.864 [2024-12-11 10:06:38.260793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.864 qpair failed and we were unable to recover it. 
00:28:28.864 [2024-12-11 10:06:38.260981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.864 [2024-12-11 10:06:38.261013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.864 qpair failed and we were unable to recover it. 00:28:28.864 [2024-12-11 10:06:38.261147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.864 [2024-12-11 10:06:38.261179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.864 qpair failed and we were unable to recover it. 00:28:28.864 [2024-12-11 10:06:38.261333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.864 [2024-12-11 10:06:38.261369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.864 qpair failed and we were unable to recover it. 00:28:28.864 [2024-12-11 10:06:38.261565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.864 [2024-12-11 10:06:38.261599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.864 qpair failed and we were unable to recover it. 00:28:28.864 [2024-12-11 10:06:38.261734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.864 [2024-12-11 10:06:38.261766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.864 qpair failed and we were unable to recover it. 00:28:28.864 [2024-12-11 10:06:38.265243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.864 [2024-12-11 10:06:38.265305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.864 qpair failed and we were unable to recover it. 00:28:28.864 [2024-12-11 10:06:38.265638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.864 [2024-12-11 10:06:38.265672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.864 qpair failed and we were unable to recover it. 00:28:28.864 [2024-12-11 10:06:38.265807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.864 [2024-12-11 10:06:38.265838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.864 qpair failed and we were unable to recover it. 00:28:28.864 [2024-12-11 10:06:38.266017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.864 [2024-12-11 10:06:38.266047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.864 qpair failed and we were unable to recover it. 00:28:28.864 [2024-12-11 10:06:38.266274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.864 [2024-12-11 10:06:38.266307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.864 qpair failed and we were unable to recover it. 
00:28:28.864 [2024-12-11 10:06:38.266517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.864 [2024-12-11 10:06:38.266549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.864 qpair failed and we were unable to recover it. 00:28:28.864 [2024-12-11 10:06:38.266804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.864 [2024-12-11 10:06:38.266835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.864 qpair failed and we were unable to recover it. 00:28:28.864 [2024-12-11 10:06:38.267023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.864 [2024-12-11 10:06:38.267054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.864 qpair failed and we were unable to recover it. 00:28:28.864 [2024-12-11 10:06:38.267332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.864 [2024-12-11 10:06:38.267364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.864 qpair failed and we were unable to recover it. 00:28:28.864 [2024-12-11 10:06:38.267560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.864 [2024-12-11 10:06:38.267591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.864 qpair failed and we were unable to recover it. 00:28:28.864 [2024-12-11 10:06:38.267790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.864 [2024-12-11 10:06:38.267821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.864 qpair failed and we were unable to recover it. 00:28:28.864 [2024-12-11 10:06:38.268022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.864 [2024-12-11 10:06:38.268053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.864 qpair failed and we were unable to recover it. 00:28:28.864 [2024-12-11 10:06:38.268173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.864 [2024-12-11 10:06:38.268203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.864 qpair failed and we were unable to recover it. 00:28:28.864 [2024-12-11 10:06:38.268396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.864 [2024-12-11 10:06:38.268427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.864 qpair failed and we were unable to recover it. 00:28:28.864 [2024-12-11 10:06:38.268618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.864 [2024-12-11 10:06:38.268649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.864 qpair failed and we were unable to recover it. 
00:28:28.864 [2024-12-11 10:06:38.268854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.864 [2024-12-11 10:06:38.268885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.864 qpair failed and we were unable to recover it. 00:28:28.864 [2024-12-11 10:06:38.269143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.864 [2024-12-11 10:06:38.269174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.864 qpair failed and we were unable to recover it. 00:28:28.864 [2024-12-11 10:06:38.269402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.864 [2024-12-11 10:06:38.269434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.864 qpair failed and we were unable to recover it. 00:28:28.864 [2024-12-11 10:06:38.269556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.864 [2024-12-11 10:06:38.269586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.864 qpair failed and we were unable to recover it. 00:28:28.864 [2024-12-11 10:06:38.269862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.864 [2024-12-11 10:06:38.269893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.864 qpair failed and we were unable to recover it. 00:28:28.864 [2024-12-11 10:06:38.270098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.864 [2024-12-11 10:06:38.270121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.864 qpair failed and we were unable to recover it. 00:28:28.864 [2024-12-11 10:06:38.270354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.864 [2024-12-11 10:06:38.270378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.864 qpair failed and we were unable to recover it. 00:28:28.865 [2024-12-11 10:06:38.270551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.865 [2024-12-11 10:06:38.270574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.865 qpair failed and we were unable to recover it. 00:28:28.865 [2024-12-11 10:06:38.270686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.865 [2024-12-11 10:06:38.270708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.865 qpair failed and we were unable to recover it. 00:28:28.865 [2024-12-11 10:06:38.270879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.865 [2024-12-11 10:06:38.270902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.865 qpair failed and we were unable to recover it. 
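For reference while reading the failures above: on Linux, errno = 111 is ECONNREFUSED, meaning the TCP connection attempt to 10.0.0.2:4420 was actively refused because nothing was listening on that port yet (the target process is still coming up, as the SPDK initialization line below shows). A minimal standalone sketch, not SPDK code, that reproduces the condition posix_sock_create reports; the loopback address and the assumption that no listener is present on port 4420 are illustrative:

/* sketch: reproduce "connect() failed, errno = 111" (ECONNREFUSED)
 * by connecting to a TCP port with no listener. Not SPDK code;
 * 127.0.0.1 and port 4420 (the NVMe/TCP default) are placeholders. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                      /* NVMe/TCP default port */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);  /* assumed: no listener here */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* on Linux this prints: connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}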
00:28:28.865 [2024-12-11 10:06:38.271014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.865 [2024-12-11 10:06:38.271036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.865 qpair failed and we were unable to recover it. (one further identical failure at 10:06:38.271210/.271240) 00:28:28.865 [2024-12-11 10:06:38.271313] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:28:28.865 [2024-12-11 10:06:38.271388] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] (identical connect() failures continue, timestamps 10:06:38.271435 through 10:06:38.272563)
(the posix_sock_create "connect() failed, errno = 111" / nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." triplet repeats for every remaining reconnect attempt, timestamps 10:06:38.272768 through 10:06:38.285586)
00:28:28.867 [2024-12-11 10:06:38.285743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.867 [2024-12-11 10:06:38.285761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.867 qpair failed and we were unable to recover it. 00:28:28.867 [2024-12-11 10:06:38.285921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.867 [2024-12-11 10:06:38.285937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.867 qpair failed and we were unable to recover it. 00:28:28.867 [2024-12-11 10:06:38.286105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.867 [2024-12-11 10:06:38.286122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.867 qpair failed and we were unable to recover it. 00:28:28.867 [2024-12-11 10:06:38.286291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.867 [2024-12-11 10:06:38.286308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.867 qpair failed and we were unable to recover it. 00:28:28.867 [2024-12-11 10:06:38.286455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.867 [2024-12-11 10:06:38.286473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.867 qpair failed and we were unable to recover it. 00:28:28.867 [2024-12-11 10:06:38.286710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.867 [2024-12-11 10:06:38.286730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.867 qpair failed and we were unable to recover it. 00:28:28.867 [2024-12-11 10:06:38.286827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.867 [2024-12-11 10:06:38.286845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.867 qpair failed and we were unable to recover it. 00:28:28.867 [2024-12-11 10:06:38.286944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.867 [2024-12-11 10:06:38.286962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.867 qpair failed and we were unable to recover it. 00:28:28.867 [2024-12-11 10:06:38.287129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.867 [2024-12-11 10:06:38.287146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.867 qpair failed and we were unable to recover it. 00:28:28.867 [2024-12-11 10:06:38.287328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.867 [2024-12-11 10:06:38.287346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.867 qpair failed and we were unable to recover it. 
00:28:28.867 [2024-12-11 10:06:38.287505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.867 [2024-12-11 10:06:38.287523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.867 qpair failed and we were unable to recover it. 00:28:28.867 [2024-12-11 10:06:38.287683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.867 [2024-12-11 10:06:38.287700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.867 qpair failed and we were unable to recover it. 00:28:28.867 [2024-12-11 10:06:38.287883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.867 [2024-12-11 10:06:38.287900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.867 qpair failed and we were unable to recover it. 00:28:28.867 [2024-12-11 10:06:38.287997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.867 [2024-12-11 10:06:38.288014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.867 qpair failed and we were unable to recover it. 00:28:28.867 [2024-12-11 10:06:38.288244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.867 [2024-12-11 10:06:38.288262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.868 qpair failed and we were unable to recover it. 00:28:28.868 [2024-12-11 10:06:38.288360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.868 [2024-12-11 10:06:38.288378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.868 qpair failed and we were unable to recover it. 00:28:28.868 [2024-12-11 10:06:38.288525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.868 [2024-12-11 10:06:38.288542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.868 qpair failed and we were unable to recover it. 00:28:28.868 [2024-12-11 10:06:38.288632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.868 [2024-12-11 10:06:38.288649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.868 qpair failed and we were unable to recover it. 00:28:28.868 [2024-12-11 10:06:38.288731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.868 [2024-12-11 10:06:38.288749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.868 qpair failed and we were unable to recover it. 00:28:28.868 [2024-12-11 10:06:38.288918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.868 [2024-12-11 10:06:38.288935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.868 qpair failed and we were unable to recover it. 
00:28:28.868 [2024-12-11 10:06:38.289019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.868 [2024-12-11 10:06:38.289036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.868 qpair failed and we were unable to recover it. 00:28:28.868 [2024-12-11 10:06:38.289182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.868 [2024-12-11 10:06:38.289200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.868 qpair failed and we were unable to recover it. 00:28:28.868 [2024-12-11 10:06:38.289399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.868 [2024-12-11 10:06:38.289417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.868 qpair failed and we were unable to recover it. 00:28:28.868 [2024-12-11 10:06:38.289580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.868 [2024-12-11 10:06:38.289597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.868 qpair failed and we were unable to recover it. 00:28:28.868 [2024-12-11 10:06:38.289696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.868 [2024-12-11 10:06:38.289713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.868 qpair failed and we were unable to recover it. 00:28:28.868 [2024-12-11 10:06:38.289884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.868 [2024-12-11 10:06:38.289902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.868 qpair failed and we were unable to recover it. 00:28:28.868 [2024-12-11 10:06:38.290060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.868 [2024-12-11 10:06:38.290077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.868 qpair failed and we were unable to recover it. 00:28:28.868 [2024-12-11 10:06:38.290181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.868 [2024-12-11 10:06:38.290207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.868 qpair failed and we were unable to recover it. 00:28:28.868 [2024-12-11 10:06:38.290394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.868 [2024-12-11 10:06:38.290424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.868 qpair failed and we were unable to recover it. 00:28:28.868 [2024-12-11 10:06:38.290540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.868 [2024-12-11 10:06:38.290567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.868 qpair failed and we were unable to recover it. 
00:28:28.868 [2024-12-11 10:06:38.290678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.868 [2024-12-11 10:06:38.290715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.868 qpair failed and we were unable to recover it. 00:28:28.868 [2024-12-11 10:06:38.290889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.868 [2024-12-11 10:06:38.290916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.868 qpair failed and we were unable to recover it. 00:28:28.868 [2024-12-11 10:06:38.291138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.868 [2024-12-11 10:06:38.291165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.868 qpair failed and we were unable to recover it. 00:28:28.868 [2024-12-11 10:06:38.291286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.868 [2024-12-11 10:06:38.291315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.868 qpair failed and we were unable to recover it. 00:28:28.868 [2024-12-11 10:06:38.291513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.868 [2024-12-11 10:06:38.291545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.868 qpair failed and we were unable to recover it. 00:28:28.868 [2024-12-11 10:06:38.291683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.868 [2024-12-11 10:06:38.291710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.868 qpair failed and we were unable to recover it. 00:28:28.868 [2024-12-11 10:06:38.291969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.868 [2024-12-11 10:06:38.291999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.868 qpair failed and we were unable to recover it. 00:28:28.868 [2024-12-11 10:06:38.292268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.868 [2024-12-11 10:06:38.292308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.868 qpair failed and we were unable to recover it. 00:28:28.868 [2024-12-11 10:06:38.292543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.868 [2024-12-11 10:06:38.292573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.868 qpair failed and we were unable to recover it. 00:28:28.868 [2024-12-11 10:06:38.292767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.868 [2024-12-11 10:06:38.292796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.868 qpair failed and we were unable to recover it. 
00:28:28.868 [2024-12-11 10:06:38.292985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.868 [2024-12-11 10:06:38.293016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.868 qpair failed and we were unable to recover it. 00:28:28.868 [2024-12-11 10:06:38.293225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.868 [2024-12-11 10:06:38.293259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.868 qpair failed and we were unable to recover it. 00:28:28.868 [2024-12-11 10:06:38.293379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.868 [2024-12-11 10:06:38.293406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.868 qpair failed and we were unable to recover it. 00:28:28.868 [2024-12-11 10:06:38.293589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.868 [2024-12-11 10:06:38.293617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.868 qpair failed and we were unable to recover it. 00:28:28.868 [2024-12-11 10:06:38.293872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.868 [2024-12-11 10:06:38.293909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.868 qpair failed and we were unable to recover it. 00:28:28.868 [2024-12-11 10:06:38.294025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.868 [2024-12-11 10:06:38.294054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.868 qpair failed and we were unable to recover it. 00:28:28.868 [2024-12-11 10:06:38.294250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.868 [2024-12-11 10:06:38.294272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.868 qpair failed and we were unable to recover it. 00:28:28.868 [2024-12-11 10:06:38.294461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.868 [2024-12-11 10:06:38.294482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.868 qpair failed and we were unable to recover it. 00:28:28.868 [2024-12-11 10:06:38.294663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.868 [2024-12-11 10:06:38.294683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.869 qpair failed and we were unable to recover it. 00:28:28.869 [2024-12-11 10:06:38.294905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.869 [2024-12-11 10:06:38.294926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.869 qpair failed and we were unable to recover it. 
00:28:28.869 [2024-12-11 10:06:38.295080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.869 [2024-12-11 10:06:38.295100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.869 qpair failed and we were unable to recover it. 00:28:28.869 [2024-12-11 10:06:38.295188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.869 [2024-12-11 10:06:38.295208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.869 qpair failed and we were unable to recover it. 00:28:28.869 [2024-12-11 10:06:38.295405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.869 [2024-12-11 10:06:38.295426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.869 qpair failed and we were unable to recover it. 00:28:28.869 [2024-12-11 10:06:38.295695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.869 [2024-12-11 10:06:38.295716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.869 qpair failed and we were unable to recover it. 00:28:28.869 [2024-12-11 10:06:38.295884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.869 [2024-12-11 10:06:38.295904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.869 qpair failed and we were unable to recover it. 00:28:28.869 [2024-12-11 10:06:38.296150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.869 [2024-12-11 10:06:38.296170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.869 qpair failed and we were unable to recover it. 00:28:28.869 [2024-12-11 10:06:38.296281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.869 [2024-12-11 10:06:38.296303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.869 qpair failed and we were unable to recover it. 00:28:28.869 [2024-12-11 10:06:38.296527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.869 [2024-12-11 10:06:38.296548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.869 qpair failed and we were unable to recover it. 00:28:28.869 [2024-12-11 10:06:38.296713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.869 [2024-12-11 10:06:38.296732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.869 qpair failed and we were unable to recover it. 00:28:28.869 [2024-12-11 10:06:38.296915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.869 [2024-12-11 10:06:38.296936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.869 qpair failed and we were unable to recover it. 
00:28:28.869 [2024-12-11 10:06:38.297116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.869 [2024-12-11 10:06:38.297135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.869 qpair failed and we were unable to recover it. 00:28:28.869 [2024-12-11 10:06:38.297293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.869 [2024-12-11 10:06:38.297316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.869 qpair failed and we were unable to recover it. 00:28:28.869 [2024-12-11 10:06:38.297476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.869 [2024-12-11 10:06:38.297496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.869 qpair failed and we were unable to recover it. 00:28:28.869 [2024-12-11 10:06:38.297682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.869 [2024-12-11 10:06:38.297702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.869 qpair failed and we were unable to recover it. 00:28:28.869 [2024-12-11 10:06:38.297869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.869 [2024-12-11 10:06:38.297889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.869 qpair failed and we were unable to recover it. 00:28:28.869 [2024-12-11 10:06:38.298046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.869 [2024-12-11 10:06:38.298067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.869 qpair failed and we were unable to recover it. 00:28:28.869 [2024-12-11 10:06:38.298174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.869 [2024-12-11 10:06:38.298194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.869 qpair failed and we were unable to recover it. 00:28:28.869 [2024-12-11 10:06:38.298373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.869 [2024-12-11 10:06:38.298394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.869 qpair failed and we were unable to recover it. 00:28:28.869 [2024-12-11 10:06:38.298510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.869 [2024-12-11 10:06:38.298530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.869 qpair failed and we were unable to recover it. 00:28:28.869 [2024-12-11 10:06:38.298623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.869 [2024-12-11 10:06:38.298643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.869 qpair failed and we were unable to recover it. 
00:28:28.869 [2024-12-11 10:06:38.298832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.869 [2024-12-11 10:06:38.298852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.869 qpair failed and we were unable to recover it. 00:28:28.869 [2024-12-11 10:06:38.298957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.869 [2024-12-11 10:06:38.298978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.869 qpair failed and we were unable to recover it. 00:28:28.869 [2024-12-11 10:06:38.299095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.869 [2024-12-11 10:06:38.299116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.869 qpair failed and we were unable to recover it. 00:28:28.869 [2024-12-11 10:06:38.299281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.869 [2024-12-11 10:06:38.299302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.869 qpair failed and we were unable to recover it. 00:28:28.869 [2024-12-11 10:06:38.299458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.869 [2024-12-11 10:06:38.299479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.869 qpair failed and we were unable to recover it. 00:28:28.869 [2024-12-11 10:06:38.299579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.869 [2024-12-11 10:06:38.299600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.869 qpair failed and we were unable to recover it. 00:28:28.869 [2024-12-11 10:06:38.299698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.869 [2024-12-11 10:06:38.299719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.869 qpair failed and we were unable to recover it. 00:28:28.869 [2024-12-11 10:06:38.299833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.869 [2024-12-11 10:06:38.299854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.869 qpair failed and we were unable to recover it. 00:28:28.869 [2024-12-11 10:06:38.300025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.870 [2024-12-11 10:06:38.300046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.870 qpair failed and we were unable to recover it. 00:28:28.870 [2024-12-11 10:06:38.300234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.870 [2024-12-11 10:06:38.300256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.870 qpair failed and we were unable to recover it. 
00:28:28.870 [2024-12-11 10:06:38.300433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.870 [2024-12-11 10:06:38.300453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.870 qpair failed and we were unable to recover it. 00:28:28.870 [2024-12-11 10:06:38.300548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.870 [2024-12-11 10:06:38.300568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.870 qpair failed and we were unable to recover it. 00:28:28.870 [2024-12-11 10:06:38.300727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.870 [2024-12-11 10:06:38.300747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.870 qpair failed and we were unable to recover it. 00:28:28.870 [2024-12-11 10:06:38.300929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.870 [2024-12-11 10:06:38.300954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.870 qpair failed and we were unable to recover it. 00:28:28.870 [2024-12-11 10:06:38.301133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.870 [2024-12-11 10:06:38.301158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.870 qpair failed and we were unable to recover it. 00:28:28.870 [2024-12-11 10:06:38.301272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.870 [2024-12-11 10:06:38.301303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.870 qpair failed and we were unable to recover it. 00:28:28.870 [2024-12-11 10:06:38.301490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.870 [2024-12-11 10:06:38.301516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.870 qpair failed and we were unable to recover it. 00:28:28.870 [2024-12-11 10:06:38.301784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.870 [2024-12-11 10:06:38.301808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.870 qpair failed and we were unable to recover it. 00:28:28.870 [2024-12-11 10:06:38.301909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.870 [2024-12-11 10:06:38.301935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.870 qpair failed and we were unable to recover it. 00:28:28.870 [2024-12-11 10:06:38.302052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.870 [2024-12-11 10:06:38.302077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.870 qpair failed and we were unable to recover it. 
00:28:28.870 [2024-12-11 10:06:38.302194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.870 [2024-12-11 10:06:38.302230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.870 qpair failed and we were unable to recover it. 00:28:28.870 [2024-12-11 10:06:38.302345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.870 [2024-12-11 10:06:38.302371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.870 qpair failed and we were unable to recover it. 00:28:28.870 [2024-12-11 10:06:38.302630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.870 [2024-12-11 10:06:38.302656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.870 qpair failed and we were unable to recover it. 00:28:28.870 [2024-12-11 10:06:38.302913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.870 [2024-12-11 10:06:38.302938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.870 qpair failed and we were unable to recover it. 00:28:28.870 [2024-12-11 10:06:38.303119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.870 [2024-12-11 10:06:38.303145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.870 qpair failed and we were unable to recover it. 00:28:28.870 [2024-12-11 10:06:38.303359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.870 [2024-12-11 10:06:38.303389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.870 qpair failed and we were unable to recover it. 00:28:28.870 [2024-12-11 10:06:38.303516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.870 [2024-12-11 10:06:38.303541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.870 qpair failed and we were unable to recover it. 00:28:28.870 [2024-12-11 10:06:38.303718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.870 [2024-12-11 10:06:38.303744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.870 qpair failed and we were unable to recover it. 00:28:28.870 [2024-12-11 10:06:38.303857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.870 [2024-12-11 10:06:38.303882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.870 qpair failed and we were unable to recover it. 00:28:28.870 [2024-12-11 10:06:38.304128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.870 [2024-12-11 10:06:38.304154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.870 qpair failed and we were unable to recover it. 
00:28:28.870 [2024-12-11 10:06:38.304277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.870 [2024-12-11 10:06:38.304304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.870 qpair failed and we were unable to recover it. 00:28:28.870 [2024-12-11 10:06:38.304509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.870 [2024-12-11 10:06:38.304535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.870 qpair failed and we were unable to recover it. 00:28:28.870 [2024-12-11 10:06:38.304643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.870 [2024-12-11 10:06:38.304668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.870 qpair failed and we were unable to recover it. 00:28:28.870 [2024-12-11 10:06:38.304776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.870 [2024-12-11 10:06:38.304801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.870 qpair failed and we were unable to recover it. 00:28:28.870 [2024-12-11 10:06:38.305035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.870 [2024-12-11 10:06:38.305061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.870 qpair failed and we were unable to recover it. 00:28:28.870 [2024-12-11 10:06:38.305243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.870 [2024-12-11 10:06:38.305270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.870 qpair failed and we were unable to recover it. 00:28:28.870 [2024-12-11 10:06:38.305448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.870 [2024-12-11 10:06:38.305474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.870 qpair failed and we were unable to recover it. 00:28:28.870 [2024-12-11 10:06:38.305580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.870 [2024-12-11 10:06:38.305607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.870 qpair failed and we were unable to recover it. 00:28:28.870 [2024-12-11 10:06:38.305767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.870 [2024-12-11 10:06:38.305793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.870 qpair failed and we were unable to recover it. 00:28:28.870 [2024-12-11 10:06:38.305999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.870 [2024-12-11 10:06:38.306025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.870 qpair failed and we were unable to recover it. 
00:28:28.870 [2024-12-11 10:06:38.306265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.870 [2024-12-11 10:06:38.306292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.870 qpair failed and we were unable to recover it. 00:28:28.870 [2024-12-11 10:06:38.306584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.870 [2024-12-11 10:06:38.306610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.870 qpair failed and we were unable to recover it. 00:28:28.870 [2024-12-11 10:06:38.306861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.870 [2024-12-11 10:06:38.306886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.870 qpair failed and we were unable to recover it. 00:28:28.870 [2024-12-11 10:06:38.307017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.871 [2024-12-11 10:06:38.307042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.871 qpair failed and we were unable to recover it. 00:28:28.871 [2024-12-11 10:06:38.307297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.871 [2024-12-11 10:06:38.307324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.871 qpair failed and we were unable to recover it. 00:28:28.871 [2024-12-11 10:06:38.307500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.871 [2024-12-11 10:06:38.307525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.871 qpair failed and we were unable to recover it. 00:28:28.871 [2024-12-11 10:06:38.307646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.871 [2024-12-11 10:06:38.307671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.871 qpair failed and we were unable to recover it. 00:28:28.871 [2024-12-11 10:06:38.307842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.871 [2024-12-11 10:06:38.307867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.871 qpair failed and we were unable to recover it. 00:28:28.871 [2024-12-11 10:06:38.308026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.871 [2024-12-11 10:06:38.308051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.871 qpair failed and we were unable to recover it. 00:28:28.871 [2024-12-11 10:06:38.308166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.871 [2024-12-11 10:06:38.308191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.871 qpair failed and we were unable to recover it. 
00:28:28.871 [2024-12-11 10:06:38.308305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.871 [2024-12-11 10:06:38.308331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.871 qpair failed and we were unable to recover it. 00:28:28.871 [2024-12-11 10:06:38.308436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.871 [2024-12-11 10:06:38.308461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.871 qpair failed and we were unable to recover it. 00:28:28.871 [2024-12-11 10:06:38.308560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.871 [2024-12-11 10:06:38.308585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.871 qpair failed and we were unable to recover it. 00:28:28.871 [2024-12-11 10:06:38.308812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.871 [2024-12-11 10:06:38.308838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.871 qpair failed and we were unable to recover it. 00:28:28.871 [2024-12-11 10:06:38.308953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.871 [2024-12-11 10:06:38.308977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.871 qpair failed and we were unable to recover it. 00:28:28.871 [2024-12-11 10:06:38.309091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.871 [2024-12-11 10:06:38.309121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.871 qpair failed and we were unable to recover it. 00:28:28.871 [2024-12-11 10:06:38.309303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.871 [2024-12-11 10:06:38.309330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.871 qpair failed and we were unable to recover it. 00:28:28.871 [2024-12-11 10:06:38.309501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.871 [2024-12-11 10:06:38.309525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.871 qpair failed and we were unable to recover it. 00:28:28.871 [2024-12-11 10:06:38.309650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.871 [2024-12-11 10:06:38.309675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.871 qpair failed and we were unable to recover it. 00:28:28.871 [2024-12-11 10:06:38.309833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.871 [2024-12-11 10:06:38.309858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.871 qpair failed and we were unable to recover it. 
00:28:28.871 [2024-12-11 10:06:38.309958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.871 [2024-12-11 10:06:38.309983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.871 qpair failed and we were unable to recover it. 00:28:28.871 [2024-12-11 10:06:38.310090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.871 [2024-12-11 10:06:38.310114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.871 qpair failed and we were unable to recover it. 00:28:28.871 [2024-12-11 10:06:38.310273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.871 [2024-12-11 10:06:38.310299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.871 qpair failed and we were unable to recover it. 00:28:28.871 [2024-12-11 10:06:38.310461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.871 [2024-12-11 10:06:38.310487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.871 qpair failed and we were unable to recover it. 00:28:28.871 [2024-12-11 10:06:38.310603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.871 [2024-12-11 10:06:38.310633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.871 qpair failed and we were unable to recover it. 00:28:28.871 [2024-12-11 10:06:38.310761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.871 [2024-12-11 10:06:38.310790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.871 qpair failed and we were unable to recover it. 00:28:28.871 [2024-12-11 10:06:38.310962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.871 [2024-12-11 10:06:38.310997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.871 qpair failed and we were unable to recover it. 00:28:28.871 [2024-12-11 10:06:38.311238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.871 [2024-12-11 10:06:38.311269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.871 qpair failed and we were unable to recover it. 00:28:28.871 [2024-12-11 10:06:38.311398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.871 [2024-12-11 10:06:38.311427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.871 qpair failed and we were unable to recover it. 00:28:28.871 [2024-12-11 10:06:38.311614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.871 [2024-12-11 10:06:38.311643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.871 qpair failed and we were unable to recover it. 
00:28:28.871 [2024-12-11 10:06:38.311863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.871 [2024-12-11 10:06:38.311892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.871 qpair failed and we were unable to recover it. 00:28:28.871 [2024-12-11 10:06:38.312062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.871 [2024-12-11 10:06:38.312091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.871 qpair failed and we were unable to recover it. 00:28:28.871 [2024-12-11 10:06:38.312206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.871 [2024-12-11 10:06:38.312267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.871 qpair failed and we were unable to recover it. 00:28:28.871 [2024-12-11 10:06:38.312459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.871 [2024-12-11 10:06:38.312489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.871 qpair failed and we were unable to recover it. 00:28:28.871 [2024-12-11 10:06:38.312687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.871 [2024-12-11 10:06:38.312716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.871 qpair failed and we were unable to recover it. 00:28:28.871 [2024-12-11 10:06:38.312845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.871 [2024-12-11 10:06:38.312874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.871 qpair failed and we were unable to recover it. 00:28:28.871 [2024-12-11 10:06:38.313139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.871 [2024-12-11 10:06:38.313169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.871 qpair failed and we were unable to recover it. 00:28:28.871 [2024-12-11 10:06:38.313369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.871 [2024-12-11 10:06:38.313399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.871 qpair failed and we were unable to recover it. 00:28:28.871 [2024-12-11 10:06:38.313587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.871 [2024-12-11 10:06:38.313617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.871 qpair failed and we were unable to recover it. 00:28:28.871 [2024-12-11 10:06:38.313816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.872 [2024-12-11 10:06:38.313846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.872 qpair failed and we were unable to recover it. 
00:28:28.872 [2024-12-11 10:06:38.314039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.872 [2024-12-11 10:06:38.314067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.872 qpair failed and we were unable to recover it. 00:28:28.872 [2024-12-11 10:06:38.314170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.872 [2024-12-11 10:06:38.314199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.872 qpair failed and we were unable to recover it. 00:28:28.872 [2024-12-11 10:06:38.314428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.872 [2024-12-11 10:06:38.314495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.872 qpair failed and we were unable to recover it. 00:28:28.872 [2024-12-11 10:06:38.314697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.872 [2024-12-11 10:06:38.314733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.872 qpair failed and we were unable to recover it. 00:28:28.872 [2024-12-11 10:06:38.314942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.872 [2024-12-11 10:06:38.314975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.872 qpair failed and we were unable to recover it. 00:28:28.872 [2024-12-11 10:06:38.315161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.872 [2024-12-11 10:06:38.315194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.872 qpair failed and we were unable to recover it. 00:28:28.872 [2024-12-11 10:06:38.315381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.872 [2024-12-11 10:06:38.315420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.872 qpair failed and we were unable to recover it. 00:28:28.872 [2024-12-11 10:06:38.315640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.872 [2024-12-11 10:06:38.315671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.872 qpair failed and we were unable to recover it. 00:28:28.872 [2024-12-11 10:06:38.315800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.872 [2024-12-11 10:06:38.315831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.872 qpair failed and we were unable to recover it. 00:28:28.872 [2024-12-11 10:06:38.315976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.872 [2024-12-11 10:06:38.316006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.872 qpair failed and we were unable to recover it. 
00:28:28.872 [2024-12-11 10:06:38.316130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.872 [2024-12-11 10:06:38.316162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.872 qpair failed and we were unable to recover it. 00:28:28.872 [2024-12-11 10:06:38.316278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.872 [2024-12-11 10:06:38.316310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.872 qpair failed and we were unable to recover it. 00:28:28.872 [2024-12-11 10:06:38.316432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.872 [2024-12-11 10:06:38.316464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.872 qpair failed and we were unable to recover it. 00:28:28.872 [2024-12-11 10:06:38.316602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.872 [2024-12-11 10:06:38.316632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.872 qpair failed and we were unable to recover it. 00:28:28.872 [2024-12-11 10:06:38.316912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.872 [2024-12-11 10:06:38.316943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.872 qpair failed and we were unable to recover it. 00:28:28.872 [2024-12-11 10:06:38.317136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.872 [2024-12-11 10:06:38.317177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.872 qpair failed and we were unable to recover it. 00:28:28.872 [2024-12-11 10:06:38.317334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.872 [2024-12-11 10:06:38.317367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.872 qpair failed and we were unable to recover it. 00:28:28.872 [2024-12-11 10:06:38.317612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.872 [2024-12-11 10:06:38.317643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.872 qpair failed and we were unable to recover it. 00:28:28.872 [2024-12-11 10:06:38.317764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.872 [2024-12-11 10:06:38.317795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.872 qpair failed and we were unable to recover it. 00:28:28.872 [2024-12-11 10:06:38.317901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.872 [2024-12-11 10:06:38.317933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.872 qpair failed and we were unable to recover it. 
00:28:28.872 [2024-12-11 10:06:38.318133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.872 [2024-12-11 10:06:38.318164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.872 qpair failed and we were unable to recover it. 00:28:28.872 [2024-12-11 10:06:38.318320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.872 [2024-12-11 10:06:38.318353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.872 qpair failed and we were unable to recover it. 00:28:28.872 [2024-12-11 10:06:38.318483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.872 [2024-12-11 10:06:38.318515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.872 qpair failed and we were unable to recover it. 00:28:28.872 [2024-12-11 10:06:38.318642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.872 [2024-12-11 10:06:38.318672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.872 qpair failed and we were unable to recover it. 00:28:28.872 [2024-12-11 10:06:38.318864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.872 [2024-12-11 10:06:38.318896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.872 qpair failed and we were unable to recover it. 00:28:28.872 [2024-12-11 10:06:38.319082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.872 [2024-12-11 10:06:38.319114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.872 qpair failed and we were unable to recover it. 00:28:28.872 [2024-12-11 10:06:38.319239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.872 [2024-12-11 10:06:38.319272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.872 qpair failed and we were unable to recover it. 00:28:28.872 [2024-12-11 10:06:38.319406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.872 [2024-12-11 10:06:38.319438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.872 qpair failed and we were unable to recover it. 00:28:28.872 [2024-12-11 10:06:38.319618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.872 [2024-12-11 10:06:38.319649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.872 qpair failed and we were unable to recover it. 00:28:28.872 [2024-12-11 10:06:38.319787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.872 [2024-12-11 10:06:38.319819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.872 qpair failed and we were unable to recover it. 
00:28:28.872 [2024-12-11 10:06:38.319953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.872 [2024-12-11 10:06:38.319984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.872 qpair failed and we were unable to recover it. 00:28:28.872 [2024-12-11 10:06:38.320238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.872 [2024-12-11 10:06:38.320272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.872 qpair failed and we were unable to recover it. 00:28:28.872 [2024-12-11 10:06:38.320457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.872 [2024-12-11 10:06:38.320488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.872 qpair failed and we were unable to recover it. 00:28:28.873 [2024-12-11 10:06:38.320697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.873 [2024-12-11 10:06:38.320729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.873 qpair failed and we were unable to recover it. 00:28:28.873 [2024-12-11 10:06:38.320918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.873 [2024-12-11 10:06:38.320951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.873 qpair failed and we were unable to recover it. 00:28:28.873 [2024-12-11 10:06:38.321085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.873 [2024-12-11 10:06:38.321116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.873 qpair failed and we were unable to recover it. 00:28:28.873 [2024-12-11 10:06:38.321310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.873 [2024-12-11 10:06:38.321344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.873 qpair failed and we were unable to recover it. 00:28:28.873 [2024-12-11 10:06:38.321613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.873 [2024-12-11 10:06:38.321644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.873 qpair failed and we were unable to recover it. 00:28:28.873 [2024-12-11 10:06:38.321856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.873 [2024-12-11 10:06:38.321888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.873 qpair failed and we were unable to recover it. 00:28:28.873 [2024-12-11 10:06:38.322074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.873 [2024-12-11 10:06:38.322106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.873 qpair failed and we were unable to recover it. 
00:28:28.873 [2024-12-11 10:06:38.322301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.873 [2024-12-11 10:06:38.322334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.873 qpair failed and we were unable to recover it. 00:28:28.873 [2024-12-11 10:06:38.322464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.873 [2024-12-11 10:06:38.322496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.873 qpair failed and we were unable to recover it. 00:28:28.873 [2024-12-11 10:06:38.322734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.873 [2024-12-11 10:06:38.322806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:28.873 qpair failed and we were unable to recover it. 00:28:28.873 [2024-12-11 10:06:38.323048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.873 [2024-12-11 10:06:38.323082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:28.873 qpair failed and we were unable to recover it. 00:28:28.873 [2024-12-11 10:06:38.323287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.873 [2024-12-11 10:06:38.323323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:28.873 qpair failed and we were unable to recover it. 00:28:28.873 [2024-12-11 10:06:38.323513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.873 [2024-12-11 10:06:38.323546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:28.873 qpair failed and we were unable to recover it. 00:28:28.873 [2024-12-11 10:06:38.323730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.873 [2024-12-11 10:06:38.323761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:28.873 qpair failed and we were unable to recover it. 00:28:28.873 [2024-12-11 10:06:38.324009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.873 [2024-12-11 10:06:38.324041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:28.873 qpair failed and we were unable to recover it. 00:28:28.873 [2024-12-11 10:06:38.324248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.873 [2024-12-11 10:06:38.324282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:28.873 qpair failed and we were unable to recover it. 00:28:28.873 [2024-12-11 10:06:38.324423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.873 [2024-12-11 10:06:38.324456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:28.873 qpair failed and we were unable to recover it. 
00:28:28.873 [2024-12-11 10:06:38.324679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.873 [2024-12-11 10:06:38.324710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:28.873 qpair failed and we were unable to recover it. 00:28:28.873 [2024-12-11 10:06:38.324838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.873 [2024-12-11 10:06:38.324869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:28.873 qpair failed and we were unable to recover it. 00:28:28.873 [2024-12-11 10:06:38.325113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.873 [2024-12-11 10:06:38.325144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:28.873 qpair failed and we were unable to recover it. 00:28:28.873 [2024-12-11 10:06:38.325337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.873 [2024-12-11 10:06:38.325369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:28.873 qpair failed and we were unable to recover it. 00:28:28.873 [2024-12-11 10:06:38.325574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.873 [2024-12-11 10:06:38.325606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:28.873 qpair failed and we were unable to recover it. 00:28:28.873 [2024-12-11 10:06:38.325788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.873 [2024-12-11 10:06:38.325833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:28.873 qpair failed and we were unable to recover it. 00:28:28.873 [2024-12-11 10:06:38.325941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.873 [2024-12-11 10:06:38.325972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:28.873 qpair failed and we were unable to recover it. 00:28:28.873 [2024-12-11 10:06:38.326084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.873 [2024-12-11 10:06:38.326114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:28.873 qpair failed and we were unable to recover it. 00:28:28.873 [2024-12-11 10:06:38.326309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.873 [2024-12-11 10:06:38.326343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:28.873 qpair failed and we were unable to recover it. 00:28:28.873 [2024-12-11 10:06:38.326544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.873 [2024-12-11 10:06:38.326575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:28.873 qpair failed and we were unable to recover it. 
00:28:28.873 [2024-12-11 10:06:38.326816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.873 [2024-12-11 10:06:38.326848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:28.873 qpair failed and we were unable to recover it. 00:28:28.873 [2024-12-11 10:06:38.327056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.873 [2024-12-11 10:06:38.327087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:28.873 qpair failed and we were unable to recover it. 00:28:28.873 [2024-12-11 10:06:38.327306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.873 [2024-12-11 10:06:38.327340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:28.873 qpair failed and we were unable to recover it. 00:28:28.873 [2024-12-11 10:06:38.327465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.873 [2024-12-11 10:06:38.327514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:28.873 qpair failed and we were unable to recover it. 00:28:28.873 [2024-12-11 10:06:38.327783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.873 [2024-12-11 10:06:38.327815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:28.873 qpair failed and we were unable to recover it. 00:28:28.873 [2024-12-11 10:06:38.328041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.873 [2024-12-11 10:06:38.328072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:28.873 qpair failed and we were unable to recover it. 00:28:28.873 [2024-12-11 10:06:38.328206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.874 [2024-12-11 10:06:38.328254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:28.874 qpair failed and we were unable to recover it. 00:28:28.874 [2024-12-11 10:06:38.328465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.874 [2024-12-11 10:06:38.328498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:28.874 qpair failed and we were unable to recover it. 00:28:28.874 [2024-12-11 10:06:38.328704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.874 [2024-12-11 10:06:38.328735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:28.874 qpair failed and we were unable to recover it. 00:28:28.874 [2024-12-11 10:06:38.329021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.874 [2024-12-11 10:06:38.329052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:28.874 qpair failed and we were unable to recover it. 
00:28:28.874 [2024-12-11 10:06:38.329242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.874 [2024-12-11 10:06:38.329275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:28.874 qpair failed and we were unable to recover it. 00:28:28.874 [2024-12-11 10:06:38.329412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.874 [2024-12-11 10:06:38.329444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:28.874 qpair failed and we were unable to recover it. 00:28:28.874 [2024-12-11 10:06:38.329617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.874 [2024-12-11 10:06:38.329648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:28.874 qpair failed and we were unable to recover it. 00:28:28.874 [2024-12-11 10:06:38.329772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.874 [2024-12-11 10:06:38.329802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:28.874 qpair failed and we were unable to recover it. 00:28:28.874 [2024-12-11 10:06:38.329998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.874 [2024-12-11 10:06:38.330030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:28.874 qpair failed and we were unable to recover it. 00:28:28.874 [2024-12-11 10:06:38.330148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.874 [2024-12-11 10:06:38.330179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:28.874 qpair failed and we were unable to recover it. 00:28:28.874 [2024-12-11 10:06:38.330362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.874 [2024-12-11 10:06:38.330394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:28.874 qpair failed and we were unable to recover it. 00:28:28.874 [2024-12-11 10:06:38.330516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.874 [2024-12-11 10:06:38.330548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:28.874 qpair failed and we were unable to recover it. 00:28:28.874 [2024-12-11 10:06:38.330810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.874 [2024-12-11 10:06:38.330840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:28.874 qpair failed and we were unable to recover it. 00:28:28.874 [2024-12-11 10:06:38.331109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.874 [2024-12-11 10:06:38.331140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:28.874 qpair failed and we were unable to recover it. 
00:28:28.874 [2024-12-11 10:06:38.331394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.874 [2024-12-11 10:06:38.331428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:28.874 qpair failed and we were unable to recover it. 00:28:28.874 [2024-12-11 10:06:38.331609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.874 [2024-12-11 10:06:38.331640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:28.874 qpair failed and we were unable to recover it. 00:28:28.874 [2024-12-11 10:06:38.331951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.874 [2024-12-11 10:06:38.332021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.874 qpair failed and we were unable to recover it. 00:28:28.874 [2024-12-11 10:06:38.332270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.874 [2024-12-11 10:06:38.332320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.874 qpair failed and we were unable to recover it. 00:28:28.874 [2024-12-11 10:06:38.332583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.874 [2024-12-11 10:06:38.332616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.874 qpair failed and we were unable to recover it. 00:28:28.874 [2024-12-11 10:06:38.332801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.874 [2024-12-11 10:06:38.332832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.874 qpair failed and we were unable to recover it. 00:28:28.874 [2024-12-11 10:06:38.333006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.874 [2024-12-11 10:06:38.333039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.874 qpair failed and we were unable to recover it. 00:28:28.874 [2024-12-11 10:06:38.333242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.874 [2024-12-11 10:06:38.333275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.874 qpair failed and we were unable to recover it. 00:28:28.874 [2024-12-11 10:06:38.333501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.874 [2024-12-11 10:06:38.333532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.874 qpair failed and we were unable to recover it. 00:28:28.874 [2024-12-11 10:06:38.333655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.874 [2024-12-11 10:06:38.333691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.874 qpair failed and we were unable to recover it. 
00:28:28.874 [2024-12-11 10:06:38.333883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.874 [2024-12-11 10:06:38.333916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.874 qpair failed and we were unable to recover it. 00:28:28.874 [2024-12-11 10:06:38.334105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.874 [2024-12-11 10:06:38.334135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.874 qpair failed and we were unable to recover it. 00:28:28.874 [2024-12-11 10:06:38.334321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.874 [2024-12-11 10:06:38.334354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.874 qpair failed and we were unable to recover it. 00:28:28.874 [2024-12-11 10:06:38.334489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.874 [2024-12-11 10:06:38.334521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.874 qpair failed and we were unable to recover it. 00:28:28.874 [2024-12-11 10:06:38.334728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.874 [2024-12-11 10:06:38.334759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.874 qpair failed and we were unable to recover it. 00:28:28.874 [2024-12-11 10:06:38.334923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.874 [2024-12-11 10:06:38.334962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.874 qpair failed and we were unable to recover it. 00:28:28.874 [2024-12-11 10:06:38.335094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.875 [2024-12-11 10:06:38.335125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.875 qpair failed and we were unable to recover it. 00:28:28.875 [2024-12-11 10:06:38.335267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.875 [2024-12-11 10:06:38.335299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.875 qpair failed and we were unable to recover it. 00:28:28.875 [2024-12-11 10:06:38.335429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.875 [2024-12-11 10:06:38.335461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.875 qpair failed and we were unable to recover it. 00:28:28.875 [2024-12-11 10:06:38.335569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.875 [2024-12-11 10:06:38.335601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.875 qpair failed and we were unable to recover it. 
00:28:28.875 [2024-12-11 10:06:38.335783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.875 [2024-12-11 10:06:38.335817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.875 qpair failed and we were unable to recover it. 00:28:28.875 [2024-12-11 10:06:38.336064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.875 [2024-12-11 10:06:38.336096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.875 qpair failed and we were unable to recover it. 00:28:28.875 [2024-12-11 10:06:38.336364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.875 [2024-12-11 10:06:38.336398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.875 qpair failed and we were unable to recover it. 00:28:28.875 [2024-12-11 10:06:38.336610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.875 [2024-12-11 10:06:38.336642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.875 qpair failed and we were unable to recover it. 00:28:28.875 [2024-12-11 10:06:38.336884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.875 [2024-12-11 10:06:38.336918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.875 qpair failed and we were unable to recover it. 00:28:28.875 [2024-12-11 10:06:38.337103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.875 [2024-12-11 10:06:38.337135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.875 qpair failed and we were unable to recover it. 00:28:28.875 [2024-12-11 10:06:38.337324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.875 [2024-12-11 10:06:38.337358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.875 qpair failed and we were unable to recover it. 00:28:28.875 [2024-12-11 10:06:38.337550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.875 [2024-12-11 10:06:38.337582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.875 qpair failed and we were unable to recover it. 00:28:28.875 [2024-12-11 10:06:38.337711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.875 [2024-12-11 10:06:38.337742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.875 qpair failed and we were unable to recover it. 00:28:28.875 [2024-12-11 10:06:38.338004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.875 [2024-12-11 10:06:38.338037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.875 qpair failed and we were unable to recover it. 
00:28:28.875 [2024-12-11 10:06:38.338279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.875 [2024-12-11 10:06:38.338313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.875 qpair failed and we were unable to recover it. 00:28:28.875 [2024-12-11 10:06:38.338442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.875 [2024-12-11 10:06:38.338474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.875 qpair failed and we were unable to recover it. 00:28:28.875 [2024-12-11 10:06:38.338666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.875 [2024-12-11 10:06:38.338699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.875 qpair failed and we were unable to recover it. 00:28:28.875 [2024-12-11 10:06:38.338874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.875 [2024-12-11 10:06:38.338906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.875 qpair failed and we were unable to recover it. 00:28:28.875 [2024-12-11 10:06:38.339090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.875 [2024-12-11 10:06:38.339121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.875 qpair failed and we were unable to recover it. 00:28:28.875 [2024-12-11 10:06:38.339335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.875 [2024-12-11 10:06:38.339369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.875 qpair failed and we were unable to recover it. 00:28:28.875 [2024-12-11 10:06:38.339506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.875 [2024-12-11 10:06:38.339538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.875 qpair failed and we were unable to recover it. 00:28:28.875 [2024-12-11 10:06:38.339778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.875 [2024-12-11 10:06:38.339809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.875 qpair failed and we were unable to recover it. 00:28:28.875 [2024-12-11 10:06:38.340082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.875 [2024-12-11 10:06:38.340114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.875 qpair failed and we were unable to recover it. 00:28:28.875 [2024-12-11 10:06:38.340391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.875 [2024-12-11 10:06:38.340424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.875 qpair failed and we were unable to recover it. 
00:28:28.875 [2024-12-11 10:06:38.340610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.875 [2024-12-11 10:06:38.340641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.875 qpair failed and we were unable to recover it. 00:28:28.875 [2024-12-11 10:06:38.340819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.875 [2024-12-11 10:06:38.340851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:28.875 qpair failed and we were unable to recover it. 00:28:28.875 [2024-12-11 10:06:38.341045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.875 [2024-12-11 10:06:38.341094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.875 qpair failed and we were unable to recover it. 00:28:28.875 [2024-12-11 10:06:38.341355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.875 [2024-12-11 10:06:38.341392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.875 qpair failed and we were unable to recover it. 00:28:28.875 [2024-12-11 10:06:38.341594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.875 [2024-12-11 10:06:38.341626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.875 qpair failed and we were unable to recover it. 00:28:28.875 [2024-12-11 10:06:38.341884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.875 [2024-12-11 10:06:38.341917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.875 qpair failed and we were unable to recover it. 00:28:28.875 [2024-12-11 10:06:38.342115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.875 [2024-12-11 10:06:38.342147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.875 qpair failed and we were unable to recover it. 00:28:28.875 [2024-12-11 10:06:38.342353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.875 [2024-12-11 10:06:38.342387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.875 qpair failed and we were unable to recover it. 00:28:28.875 [2024-12-11 10:06:38.342608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.875 [2024-12-11 10:06:38.342640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.875 qpair failed and we were unable to recover it. 00:28:28.875 [2024-12-11 10:06:38.342822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.876 [2024-12-11 10:06:38.342853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.876 qpair failed and we were unable to recover it. 
00:28:28.876 [2024-12-11 10:06:38.342982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.876 [2024-12-11 10:06:38.343014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.876 qpair failed and we were unable to recover it. 00:28:28.876 [2024-12-11 10:06:38.343141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.876 [2024-12-11 10:06:38.343171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.876 qpair failed and we were unable to recover it. 00:28:28.876 [2024-12-11 10:06:38.343287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.876 [2024-12-11 10:06:38.343317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.876 qpair failed and we were unable to recover it. 00:28:28.876 [2024-12-11 10:06:38.343592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.876 [2024-12-11 10:06:38.343625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.876 qpair failed and we were unable to recover it. 00:28:28.876 [2024-12-11 10:06:38.343827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.876 [2024-12-11 10:06:38.343857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.876 qpair failed and we were unable to recover it. 00:28:28.876 [2024-12-11 10:06:38.344040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.876 [2024-12-11 10:06:38.344071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.876 qpair failed and we were unable to recover it. 00:28:28.876 [2024-12-11 10:06:38.344363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.876 [2024-12-11 10:06:38.344397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.876 qpair failed and we were unable to recover it. 00:28:28.876 [2024-12-11 10:06:38.344663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.876 [2024-12-11 10:06:38.344694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.876 qpair failed and we were unable to recover it. 00:28:28.876 [2024-12-11 10:06:38.344812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.876 [2024-12-11 10:06:38.344842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.876 qpair failed and we were unable to recover it. 00:28:28.876 [2024-12-11 10:06:38.344981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.876 [2024-12-11 10:06:38.345012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.876 qpair failed and we were unable to recover it. 
00:28:28.876 [2024-12-11 10:06:38.345188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.876 [2024-12-11 10:06:38.345229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.876 qpair failed and we were unable to recover it. 00:28:28.876 [2024-12-11 10:06:38.345414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.876 [2024-12-11 10:06:38.345445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.876 qpair failed and we were unable to recover it. 00:28:28.876 [2024-12-11 10:06:38.345581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.876 [2024-12-11 10:06:38.345613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.876 qpair failed and we were unable to recover it. 00:28:28.876 [2024-12-11 10:06:38.345729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.876 [2024-12-11 10:06:38.345760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.876 qpair failed and we were unable to recover it. 00:28:28.876 [2024-12-11 10:06:38.345884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.876 [2024-12-11 10:06:38.345915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.876 qpair failed and we were unable to recover it. 00:28:28.876 [2024-12-11 10:06:38.346107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.876 [2024-12-11 10:06:38.346138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.876 qpair failed and we were unable to recover it. 00:28:28.876 [2024-12-11 10:06:38.346315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.876 [2024-12-11 10:06:38.346349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.876 qpair failed and we were unable to recover it. 00:28:28.876 [2024-12-11 10:06:38.346553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.876 [2024-12-11 10:06:38.346585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.876 qpair failed and we were unable to recover it. 00:28:28.876 [2024-12-11 10:06:38.346728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.876 [2024-12-11 10:06:38.346760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.876 qpair failed and we were unable to recover it. 00:28:28.876 [2024-12-11 10:06:38.346949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.876 [2024-12-11 10:06:38.346987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.876 qpair failed and we were unable to recover it. 
00:28:28.876 [2024-12-11 10:06:38.347170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.876 [2024-12-11 10:06:38.347201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:28.876 qpair failed and we were unable to recover it.
[... the connect() failed / sock connection error / qpair failed triplet above repeats continuously from 10:06:38.347 through 10:06:38.366 for tqpair=0xfe4500 ...]
00:28:28.879 [2024-12-11 10:06:38.366936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
[... the same triplet then continues from 10:06:38.366 through 10:06:38.394, with the failing tqpair alternating between 0x7fb638000b90, 0x7fb630000b90, and 0xfe4500; every attempt ends in "qpair failed and we were unable to recover it." ...]
00:28:28.882 [2024-12-11 10:06:38.394672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.882 [2024-12-11 10:06:38.394702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.882 qpair failed and we were unable to recover it. 00:28:28.882 [2024-12-11 10:06:38.394893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.882 [2024-12-11 10:06:38.394924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.882 qpair failed and we were unable to recover it. 00:28:28.882 [2024-12-11 10:06:38.395098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.882 [2024-12-11 10:06:38.395129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.882 qpair failed and we were unable to recover it. 00:28:28.882 [2024-12-11 10:06:38.395377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.882 [2024-12-11 10:06:38.395409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.882 qpair failed and we were unable to recover it. 00:28:28.882 [2024-12-11 10:06:38.395677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.882 [2024-12-11 10:06:38.395709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.882 qpair failed and we were unable to recover it. 00:28:28.882 [2024-12-11 10:06:38.395881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.882 [2024-12-11 10:06:38.395912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.882 qpair failed and we were unable to recover it. 00:28:28.882 [2024-12-11 10:06:38.396096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.882 [2024-12-11 10:06:38.396127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.882 qpair failed and we were unable to recover it. 00:28:28.882 [2024-12-11 10:06:38.396410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.882 [2024-12-11 10:06:38.396443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.882 qpair failed and we were unable to recover it. 00:28:28.882 [2024-12-11 10:06:38.396576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.882 [2024-12-11 10:06:38.396607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.882 qpair failed and we were unable to recover it. 00:28:28.882 [2024-12-11 10:06:38.396785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.882 [2024-12-11 10:06:38.396816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.882 qpair failed and we were unable to recover it. 
00:28:28.882 [2024-12-11 10:06:38.396990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.882 [2024-12-11 10:06:38.397022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.882 qpair failed and we were unable to recover it. 00:28:28.883 [2024-12-11 10:06:38.397238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.883 [2024-12-11 10:06:38.397271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.883 qpair failed and we were unable to recover it. 00:28:28.883 [2024-12-11 10:06:38.397546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.883 [2024-12-11 10:06:38.397577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.883 qpair failed and we were unable to recover it. 00:28:28.883 [2024-12-11 10:06:38.397764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.883 [2024-12-11 10:06:38.397796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.883 qpair failed and we were unable to recover it. 00:28:28.883 [2024-12-11 10:06:38.397969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.883 [2024-12-11 10:06:38.398000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.883 qpair failed and we were unable to recover it. 00:28:28.883 [2024-12-11 10:06:38.398197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.883 [2024-12-11 10:06:38.398238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.883 qpair failed and we were unable to recover it. 00:28:28.883 [2024-12-11 10:06:38.398384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.883 [2024-12-11 10:06:38.398416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.883 qpair failed and we were unable to recover it. 00:28:28.883 [2024-12-11 10:06:38.398593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.883 [2024-12-11 10:06:38.398625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.883 qpair failed and we were unable to recover it. 00:28:28.883 [2024-12-11 10:06:38.398759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.883 [2024-12-11 10:06:38.398791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.883 qpair failed and we were unable to recover it. 00:28:28.883 [2024-12-11 10:06:38.398916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.883 [2024-12-11 10:06:38.398949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:28.883 qpair failed and we were unable to recover it. 
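The target address and service (10.0.0.2, port 4420, the standard NVMe/TCP port) stay constant across all of these attempts; only the qpair being retried changes. The same refusal can be reproduced independently of SPDK's userspace initiator with nvme-cli, assuming it is installed on the host:

  # attempt discovery against the same traddr/trsvcid as the failing qpairs
  nvme discover -t tcp -a 10.0.0.2 -s 4420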
00:28:29.158 [2024-12-11 10:06:38.399139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.158 [2024-12-11 10:06:38.399189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.158 qpair failed and we were unable to recover it. 00:28:29.158 [2024-12-11 10:06:38.399431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.158 [2024-12-11 10:06:38.399485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.158 qpair failed and we were unable to recover it. 00:28:29.158 [2024-12-11 10:06:38.399684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.158 [2024-12-11 10:06:38.399718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.158 qpair failed and we were unable to recover it. 00:28:29.158 [2024-12-11 10:06:38.399850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.158 [2024-12-11 10:06:38.399882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.158 qpair failed and we were unable to recover it. 00:28:29.158 [2024-12-11 10:06:38.400066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.158 [2024-12-11 10:06:38.400098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.158 qpair failed and we were unable to recover it. 00:28:29.158 [2024-12-11 10:06:38.400363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.158 [2024-12-11 10:06:38.400399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.158 qpair failed and we were unable to recover it. 00:28:29.158 [2024-12-11 10:06:38.400657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.158 [2024-12-11 10:06:38.400690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.158 qpair failed and we were unable to recover it. 00:28:29.158 [2024-12-11 10:06:38.400817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.158 [2024-12-11 10:06:38.400849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.158 qpair failed and we were unable to recover it. 00:28:29.158 [2024-12-11 10:06:38.401037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.158 [2024-12-11 10:06:38.401071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.158 qpair failed and we were unable to recover it. 00:28:29.158 [2024-12-11 10:06:38.401347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.158 [2024-12-11 10:06:38.401380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.158 qpair failed and we were unable to recover it. 
00:28:29.158 [2024-12-11 10:06:38.401626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.158 [2024-12-11 10:06:38.401658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.158 qpair failed and we were unable to recover it. 00:28:29.158 [2024-12-11 10:06:38.401847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.158 [2024-12-11 10:06:38.401879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.158 qpair failed and we were unable to recover it. 00:28:29.158 [2024-12-11 10:06:38.402162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.158 [2024-12-11 10:06:38.402194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.158 qpair failed and we were unable to recover it. 00:28:29.158 [2024-12-11 10:06:38.402398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.158 [2024-12-11 10:06:38.402439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.158 qpair failed and we were unable to recover it. 00:28:29.158 [2024-12-11 10:06:38.402705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.158 [2024-12-11 10:06:38.402743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.158 qpair failed and we were unable to recover it. 00:28:29.158 [2024-12-11 10:06:38.403024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.158 [2024-12-11 10:06:38.403056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.158 qpair failed and we were unable to recover it. 00:28:29.158 [2024-12-11 10:06:38.403320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.158 [2024-12-11 10:06:38.403353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.158 qpair failed and we were unable to recover it. 00:28:29.158 [2024-12-11 10:06:38.403606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.158 [2024-12-11 10:06:38.403638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.158 qpair failed and we were unable to recover it. 00:28:29.158 [2024-12-11 10:06:38.403924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.158 [2024-12-11 10:06:38.403956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.158 qpair failed and we were unable to recover it. 00:28:29.158 [2024-12-11 10:06:38.404146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.158 [2024-12-11 10:06:38.404177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.158 qpair failed and we were unable to recover it. 
00:28:29.158 [2024-12-11 10:06:38.404453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.158 [2024-12-11 10:06:38.404487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.158 qpair failed and we were unable to recover it. 00:28:29.158 [2024-12-11 10:06:38.404755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.158 [2024-12-11 10:06:38.404794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.158 qpair failed and we were unable to recover it. 00:28:29.158 [2024-12-11 10:06:38.404921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.158 [2024-12-11 10:06:38.404953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.158 qpair failed and we were unable to recover it. 00:28:29.158 [2024-12-11 10:06:38.405139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.158 [2024-12-11 10:06:38.405171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.158 qpair failed and we were unable to recover it. 00:28:29.158 [2024-12-11 10:06:38.405374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.158 [2024-12-11 10:06:38.405407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.158 qpair failed and we were unable to recover it. 00:28:29.158 [2024-12-11 10:06:38.405603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.158 [2024-12-11 10:06:38.405641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.158 qpair failed and we were unable to recover it. 00:28:29.158 [2024-12-11 10:06:38.405898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.158 [2024-12-11 10:06:38.405930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.158 qpair failed and we were unable to recover it. 00:28:29.158 [2024-12-11 10:06:38.406073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.158 [2024-12-11 10:06:38.406105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.158 qpair failed and we were unable to recover it. 00:28:29.158 [2024-12-11 10:06:38.406211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.159 [2024-12-11 10:06:38.406256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.159 qpair failed and we were unable to recover it. 00:28:29.159 [2024-12-11 10:06:38.406382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.159 [2024-12-11 10:06:38.406413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.159 qpair failed and we were unable to recover it. 
00:28:29.159 [2024-12-11 10:06:38.406596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.159 [2024-12-11 10:06:38.406628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.159 qpair failed and we were unable to recover it. 00:28:29.159 [2024-12-11 10:06:38.406745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.159 [2024-12-11 10:06:38.406777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.159 qpair failed and we were unable to recover it. 00:28:29.159 [2024-12-11 10:06:38.406905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.159 [2024-12-11 10:06:38.406937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.159 qpair failed and we were unable to recover it. 00:28:29.159 [2024-12-11 10:06:38.407079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.159 [2024-12-11 10:06:38.407111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.159 qpair failed and we were unable to recover it. 00:28:29.159 [2024-12-11 10:06:38.407353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.159 [2024-12-11 10:06:38.407387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.159 qpair failed and we were unable to recover it. 00:28:29.159 [2024-12-11 10:06:38.407523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.159 [2024-12-11 10:06:38.407555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.159 qpair failed and we were unable to recover it. 00:28:29.159 [2024-12-11 10:06:38.407743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.159 [2024-12-11 10:06:38.407776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.159 qpair failed and we were unable to recover it. 00:28:29.159 [2024-12-11 10:06:38.407949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.159 [2024-12-11 10:06:38.407980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.159 qpair failed and we were unable to recover it. 00:28:29.159 [2024-12-11 10:06:38.408177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.159 [2024-12-11 10:06:38.408209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.159 qpair failed and we were unable to recover it. 00:28:29.159 [2024-12-11 10:06:38.408360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.159 [2024-12-11 10:06:38.408392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.159 qpair failed and we were unable to recover it. 
00:28:29.159 [2024-12-11 10:06:38.408658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.159 [2024-12-11 10:06:38.408712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:29.159 qpair failed and we were unable to recover it.
00:28:29.159 [2024-12-11 10:06:38.408905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.159 [2024-12-11 10:06:38.408939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:29.159 qpair failed and we were unable to recover it.
00:28:29.159 [2024-12-11 10:06:38.409134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.159 [2024-12-11 10:06:38.409165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:29.159 qpair failed and we were unable to recover it.
00:28:29.159 [2024-12-11 10:06:38.409459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.159 [2024-12-11 10:06:38.409493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:29.159 qpair failed and we were unable to recover it.
00:28:29.159 [2024-12-11 10:06:38.409676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.159 [2024-12-11 10:06:38.409708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:29.159 qpair failed and we were unable to recover it.
00:28:29.159 [2024-12-11 10:06:38.409885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.159 [2024-12-11 10:06:38.409917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:29.159 qpair failed and we were unable to recover it.
00:28:29.159 [2024-12-11 10:06:38.410065] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:29.159 [2024-12-11 10:06:38.410091] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:29.159 [2024-12-11 10:06:38.410098] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:29.159 [2024-12-11 10:06:38.410105] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:29.159 [2024-12-11 10:06:38.410100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.159 [2024-12-11 10:06:38.410112] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:29.159 [2024-12-11 10:06:38.410131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:29.159 qpair failed and we were unable to recover it.
00:28:29.159 [2024-12-11 10:06:38.410248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.159 [2024-12-11 10:06:38.410287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:29.159 qpair failed and we were unable to recover it.
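The app_setup_trace notices embedded above record the two supported ways to pull the tracepoint data for this run; as a sketch, with the shm name ('nvmf') and instance id (0) taken directly from the notice text:

  # capture a snapshot of trace events from the running application
  spdk_trace -s nvmf -i 0
  # or keep the raw shared-memory trace file for offline analysis/debug
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0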
00:28:29.159 [2024-12-11 10:06:38.410477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.159 [2024-12-11 10:06:38.410512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:29.159 qpair failed and we were unable to recover it.
00:28:29.159 [2024-12-11 10:06:38.410707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.159 [2024-12-11 10:06:38.410738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:29.159 qpair failed and we were unable to recover it.
00:28:29.159 [2024-12-11 10:06:38.410947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.159 [2024-12-11 10:06:38.410979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:29.159 qpair failed and we were unable to recover it.
00:28:29.159 [2024-12-11 10:06:38.411159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.159 [2024-12-11 10:06:38.411190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:29.159 qpair failed and we were unable to recover it.
00:28:29.159 [2024-12-11 10:06:38.411463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.159 [2024-12-11 10:06:38.411496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:29.159 qpair failed and we were unable to recover it.
00:28:29.159 [2024-12-11 10:06:38.411614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.159 [2024-12-11 10:06:38.411654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:29.159 qpair failed and we were unable to recover it.
00:28:29.159 [2024-12-11 10:06:38.411576] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5
00:28:29.159 [2024-12-11 10:06:38.411682] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4
00:28:29.159 [2024-12-11 10:06:38.411595] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6
00:28:29.159 [2024-12-11 10:06:38.411682] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7
00:28:29.159 [2024-12-11 10:06:38.411893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.159 [2024-12-11 10:06:38.411926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:29.159 qpair failed and we were unable to recover it.
00:28:29.159 [2024-12-11 10:06:38.412065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.159 [2024-12-11 10:06:38.412096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:29.159 qpair failed and we were unable to recover it.
00:28:29.159 [2024-12-11 10:06:38.412377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.159 [2024-12-11 10:06:38.412409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:29.159 qpair failed and we were unable to recover it.
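The reactor notices interleaved with the connect errors show the SPDK event framework starting one reactor (per-core poll loop) on each of cores 4-7, which corresponds to a CPU mask of 0xf0. A minimal sketch of launching the target with that placement, assuming the standard SPDK application options and the default build output path:

  # pin reactors to cores 4-7 (cpumask 0xf0)
  ./build/bin/nvmf_tgt -m 0xf0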
00:28:29.159 [2024-12-11 10:06:38.412553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.159 [2024-12-11 10:06:38.412585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.159 qpair failed and we were unable to recover it. 00:28:29.159 [2024-12-11 10:06:38.412703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.159 [2024-12-11 10:06:38.412733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.159 qpair failed and we were unable to recover it. 00:28:29.159 [2024-12-11 10:06:38.413007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.159 [2024-12-11 10:06:38.413038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.159 qpair failed and we were unable to recover it. 00:28:29.159 [2024-12-11 10:06:38.413306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.159 [2024-12-11 10:06:38.413345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.159 qpair failed and we were unable to recover it. 00:28:29.159 [2024-12-11 10:06:38.413551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.159 [2024-12-11 10:06:38.413584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.159 qpair failed and we were unable to recover it. 00:28:29.159 [2024-12-11 10:06:38.413703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.159 [2024-12-11 10:06:38.413735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.159 qpair failed and we were unable to recover it. 00:28:29.159 [2024-12-11 10:06:38.413996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.159 [2024-12-11 10:06:38.414027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.159 qpair failed and we were unable to recover it. 00:28:29.159 [2024-12-11 10:06:38.414227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.160 [2024-12-11 10:06:38.414261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.160 qpair failed and we were unable to recover it. 00:28:29.160 [2024-12-11 10:06:38.414499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.160 [2024-12-11 10:06:38.414531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.160 qpair failed and we were unable to recover it. 00:28:29.160 [2024-12-11 10:06:38.414652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.160 [2024-12-11 10:06:38.414683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.160 qpair failed and we were unable to recover it. 
00:28:29.160 [2024-12-11 10:06:38.414946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.160 [2024-12-11 10:06:38.414978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.160 qpair failed and we were unable to recover it. 00:28:29.160 [2024-12-11 10:06:38.415244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.160 [2024-12-11 10:06:38.415280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.160 qpair failed and we were unable to recover it. 00:28:29.160 [2024-12-11 10:06:38.415474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.160 [2024-12-11 10:06:38.415505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.160 qpair failed and we were unable to recover it. 00:28:29.160 [2024-12-11 10:06:38.415686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.160 [2024-12-11 10:06:38.415717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.160 qpair failed and we were unable to recover it. 00:28:29.160 [2024-12-11 10:06:38.415897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.160 [2024-12-11 10:06:38.415928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.160 qpair failed and we were unable to recover it. 00:28:29.160 [2024-12-11 10:06:38.416193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.160 [2024-12-11 10:06:38.416233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.160 qpair failed and we were unable to recover it. 00:28:29.160 [2024-12-11 10:06:38.416379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.160 [2024-12-11 10:06:38.416411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.160 qpair failed and we were unable to recover it. 00:28:29.160 [2024-12-11 10:06:38.416664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.160 [2024-12-11 10:06:38.416696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.160 qpair failed and we were unable to recover it. 00:28:29.160 [2024-12-11 10:06:38.416968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.160 [2024-12-11 10:06:38.417000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.160 qpair failed and we were unable to recover it. 00:28:29.160 [2024-12-11 10:06:38.417122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.160 [2024-12-11 10:06:38.417155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.160 qpair failed and we were unable to recover it. 
00:28:29.160 [2024-12-11 10:06:38.417347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.160 [2024-12-11 10:06:38.417380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.160 qpair failed and we were unable to recover it. 00:28:29.160 [2024-12-11 10:06:38.417583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.160 [2024-12-11 10:06:38.417616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.160 qpair failed and we were unable to recover it. 00:28:29.160 [2024-12-11 10:06:38.417800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.160 [2024-12-11 10:06:38.417833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.160 qpair failed and we were unable to recover it. 00:28:29.160 [2024-12-11 10:06:38.418012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.160 [2024-12-11 10:06:38.418044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.160 qpair failed and we were unable to recover it. 00:28:29.160 [2024-12-11 10:06:38.418226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.160 [2024-12-11 10:06:38.418260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.160 qpair failed and we were unable to recover it. 00:28:29.160 [2024-12-11 10:06:38.418444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.160 [2024-12-11 10:06:38.418476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.160 qpair failed and we were unable to recover it. 00:28:29.160 [2024-12-11 10:06:38.418695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.160 [2024-12-11 10:06:38.418727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.160 qpair failed and we were unable to recover it. 00:28:29.160 [2024-12-11 10:06:38.418856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.160 [2024-12-11 10:06:38.418888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.160 qpair failed and we were unable to recover it. 00:28:29.160 [2024-12-11 10:06:38.419151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.160 [2024-12-11 10:06:38.419183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.160 qpair failed and we were unable to recover it. 00:28:29.160 [2024-12-11 10:06:38.419437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.160 [2024-12-11 10:06:38.419492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb638000b90 with addr=10.0.0.2, port=4420 00:28:29.160 qpair failed and we were unable to recover it. 
00:28:29.160 [2024-12-11 10:06:38.419690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.160 [2024-12-11 10:06:38.419753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.160 qpair failed and we were unable to recover it. 00:28:29.160 [2024-12-11 10:06:38.420021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.160 [2024-12-11 10:06:38.420093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.160 qpair failed and we were unable to recover it. 00:28:29.160 [2024-12-11 10:06:38.420312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.160 [2024-12-11 10:06:38.420349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.160 qpair failed and we were unable to recover it. 00:28:29.160 [2024-12-11 10:06:38.420472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.160 [2024-12-11 10:06:38.420504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.160 qpair failed and we were unable to recover it. 00:28:29.160 [2024-12-11 10:06:38.420653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.160 [2024-12-11 10:06:38.420697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.160 qpair failed and we were unable to recover it. 00:28:29.160 [2024-12-11 10:06:38.420924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.160 [2024-12-11 10:06:38.420956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.160 qpair failed and we were unable to recover it. 00:28:29.160 [2024-12-11 10:06:38.421235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.160 [2024-12-11 10:06:38.421268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.160 qpair failed and we were unable to recover it. 00:28:29.160 [2024-12-11 10:06:38.421398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.160 [2024-12-11 10:06:38.421431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.160 qpair failed and we were unable to recover it. 00:28:29.160 [2024-12-11 10:06:38.421725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.160 [2024-12-11 10:06:38.421750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.160 qpair failed and we were unable to recover it. 00:28:29.160 [2024-12-11 10:06:38.421924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.160 [2024-12-11 10:06:38.421946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.160 qpair failed and we were unable to recover it. 
00:28:29.160 [2024-12-11 10:06:38.422167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.160 [2024-12-11 10:06:38.422190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.160 qpair failed and we were unable to recover it. 00:28:29.160 [2024-12-11 10:06:38.422390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.160 [2024-12-11 10:06:38.422413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.160 qpair failed and we were unable to recover it. 00:28:29.160 [2024-12-11 10:06:38.422626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.160 [2024-12-11 10:06:38.422650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.160 qpair failed and we were unable to recover it. 00:28:29.160 [2024-12-11 10:06:38.422820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.160 [2024-12-11 10:06:38.422844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.160 qpair failed and we were unable to recover it. 00:28:29.160 [2024-12-11 10:06:38.423122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.160 [2024-12-11 10:06:38.423146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.160 qpair failed and we were unable to recover it. 00:28:29.160 [2024-12-11 10:06:38.423415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.160 [2024-12-11 10:06:38.423439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.160 qpair failed and we were unable to recover it. 00:28:29.160 [2024-12-11 10:06:38.423725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.161 [2024-12-11 10:06:38.423747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.161 qpair failed and we were unable to recover it. 00:28:29.161 [2024-12-11 10:06:38.423978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.161 [2024-12-11 10:06:38.424009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.161 qpair failed and we were unable to recover it. 00:28:29.161 [2024-12-11 10:06:38.424179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.161 [2024-12-11 10:06:38.424200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.161 qpair failed and we were unable to recover it. 00:28:29.161 [2024-12-11 10:06:38.424360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.161 [2024-12-11 10:06:38.424384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.161 qpair failed and we were unable to recover it. 
00:28:29.161 [2024-12-11 10:06:38.424488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.161 [2024-12-11 10:06:38.424511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.161 qpair failed and we were unable to recover it. 00:28:29.161 [2024-12-11 10:06:38.424618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.161 [2024-12-11 10:06:38.424640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.161 qpair failed and we were unable to recover it. 00:28:29.161 [2024-12-11 10:06:38.424822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.161 [2024-12-11 10:06:38.424845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.161 qpair failed and we were unable to recover it. 00:28:29.161 [2024-12-11 10:06:38.425109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.161 [2024-12-11 10:06:38.425132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.161 qpair failed and we were unable to recover it. 00:28:29.161 [2024-12-11 10:06:38.425334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.161 [2024-12-11 10:06:38.425358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.161 qpair failed and we were unable to recover it. 00:28:29.161 [2024-12-11 10:06:38.425602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.161 [2024-12-11 10:06:38.425625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.161 qpair failed and we were unable to recover it. 00:28:29.161 [2024-12-11 10:06:38.425833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.161 [2024-12-11 10:06:38.425856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.161 qpair failed and we were unable to recover it. 00:28:29.161 [2024-12-11 10:06:38.426075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.161 [2024-12-11 10:06:38.426098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.161 qpair failed and we were unable to recover it. 00:28:29.161 [2024-12-11 10:06:38.426332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.161 [2024-12-11 10:06:38.426356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.161 qpair failed and we were unable to recover it. 00:28:29.161 [2024-12-11 10:06:38.426604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.161 [2024-12-11 10:06:38.426627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.161 qpair failed and we were unable to recover it. 
00:28:29.166 [2024-12-11 10:06:38.469235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.166 [2024-12-11 10:06:38.469267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.166 qpair failed and we were unable to recover it. 00:28:29.166 [2024-12-11 10:06:38.469457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.166 [2024-12-11 10:06:38.469488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.166 qpair failed and we were unable to recover it. 00:28:29.166 [2024-12-11 10:06:38.469676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.166 [2024-12-11 10:06:38.469707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.166 qpair failed and we were unable to recover it. 00:28:29.166 [2024-12-11 10:06:38.469906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.166 [2024-12-11 10:06:38.469937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.166 qpair failed and we were unable to recover it. 00:28:29.166 [2024-12-11 10:06:38.470103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.166 [2024-12-11 10:06:38.470134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.166 qpair failed and we were unable to recover it. 00:28:29.166 [2024-12-11 10:06:38.470309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.166 [2024-12-11 10:06:38.470341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.166 qpair failed and we were unable to recover it. 00:28:29.166 [2024-12-11 10:06:38.470602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.166 [2024-12-11 10:06:38.470634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.166 qpair failed and we were unable to recover it. 00:28:29.166 [2024-12-11 10:06:38.470870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.166 [2024-12-11 10:06:38.470902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.166 qpair failed and we were unable to recover it. 00:28:29.166 [2024-12-11 10:06:38.471175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.166 [2024-12-11 10:06:38.471206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.166 qpair failed and we were unable to recover it. 00:28:29.166 [2024-12-11 10:06:38.471407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.166 [2024-12-11 10:06:38.471440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.166 qpair failed and we were unable to recover it. 
00:28:29.166 [2024-12-11 10:06:38.471566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.166 [2024-12-11 10:06:38.471597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.166 qpair failed and we were unable to recover it. 00:28:29.166 [2024-12-11 10:06:38.471801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.166 [2024-12-11 10:06:38.471833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.166 qpair failed and we were unable to recover it. 00:28:29.166 [2024-12-11 10:06:38.472071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.166 [2024-12-11 10:06:38.472102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.166 qpair failed and we were unable to recover it. 00:28:29.166 [2024-12-11 10:06:38.472363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.166 [2024-12-11 10:06:38.472395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.166 qpair failed and we were unable to recover it. 00:28:29.166 [2024-12-11 10:06:38.472591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.166 [2024-12-11 10:06:38.472623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.166 qpair failed and we were unable to recover it. 00:28:29.166 [2024-12-11 10:06:38.472757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.166 [2024-12-11 10:06:38.472787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.166 qpair failed and we were unable to recover it. 00:28:29.166 [2024-12-11 10:06:38.473043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.166 [2024-12-11 10:06:38.473074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.166 qpair failed and we were unable to recover it. 00:28:29.166 [2024-12-11 10:06:38.473237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.166 [2024-12-11 10:06:38.473269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.166 qpair failed and we were unable to recover it. 00:28:29.166 [2024-12-11 10:06:38.473441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.166 [2024-12-11 10:06:38.473472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.166 qpair failed and we were unable to recover it. 00:28:29.166 [2024-12-11 10:06:38.473646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.166 [2024-12-11 10:06:38.473677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.166 qpair failed and we were unable to recover it. 
00:28:29.166 [2024-12-11 10:06:38.473865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.166 [2024-12-11 10:06:38.473896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.166 qpair failed and we were unable to recover it. 00:28:29.166 [2024-12-11 10:06:38.474108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.166 [2024-12-11 10:06:38.474139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.166 qpair failed and we were unable to recover it. 00:28:29.166 [2024-12-11 10:06:38.474376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.166 [2024-12-11 10:06:38.474408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.166 qpair failed and we were unable to recover it. 00:28:29.166 [2024-12-11 10:06:38.474625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.166 [2024-12-11 10:06:38.474656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.166 qpair failed and we were unable to recover it. 00:28:29.166 [2024-12-11 10:06:38.474775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.166 [2024-12-11 10:06:38.474811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.166 qpair failed and we were unable to recover it. 00:28:29.166 [2024-12-11 10:06:38.474996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.167 [2024-12-11 10:06:38.475026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.167 qpair failed and we were unable to recover it. 00:28:29.167 [2024-12-11 10:06:38.475270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.167 [2024-12-11 10:06:38.475304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.167 qpair failed and we were unable to recover it. 00:28:29.167 [2024-12-11 10:06:38.475495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.167 [2024-12-11 10:06:38.475526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.167 qpair failed and we were unable to recover it. 00:28:29.167 [2024-12-11 10:06:38.475698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.167 [2024-12-11 10:06:38.475729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.167 qpair failed and we were unable to recover it. 00:28:29.167 [2024-12-11 10:06:38.475962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.167 [2024-12-11 10:06:38.475994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.167 qpair failed and we were unable to recover it. 
00:28:29.167 [2024-12-11 10:06:38.476240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.167 [2024-12-11 10:06:38.476273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.167 qpair failed and we were unable to recover it. 00:28:29.167 [2024-12-11 10:06:38.476397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.167 [2024-12-11 10:06:38.476428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.167 qpair failed and we were unable to recover it. 00:28:29.167 [2024-12-11 10:06:38.476674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.167 [2024-12-11 10:06:38.476706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.167 qpair failed and we were unable to recover it. 00:28:29.167 [2024-12-11 10:06:38.476942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.167 [2024-12-11 10:06:38.476973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.167 qpair failed and we were unable to recover it. 00:28:29.167 [2024-12-11 10:06:38.477096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.167 [2024-12-11 10:06:38.477128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.167 qpair failed and we were unable to recover it. 00:28:29.167 [2024-12-11 10:06:38.477304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.167 [2024-12-11 10:06:38.477337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.167 qpair failed and we were unable to recover it. 00:28:29.167 [2024-12-11 10:06:38.477546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.167 [2024-12-11 10:06:38.477576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.167 qpair failed and we were unable to recover it. 00:28:29.167 [2024-12-11 10:06:38.477834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.167 [2024-12-11 10:06:38.477866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.167 qpair failed and we were unable to recover it. 00:28:29.167 [2024-12-11 10:06:38.478092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.167 [2024-12-11 10:06:38.478123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.167 qpair failed and we were unable to recover it. 00:28:29.167 [2024-12-11 10:06:38.478247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.167 [2024-12-11 10:06:38.478278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.167 qpair failed and we were unable to recover it. 
00:28:29.167 [2024-12-11 10:06:38.478442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.167 [2024-12-11 10:06:38.478474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.167 qpair failed and we were unable to recover it. 00:28:29.167 [2024-12-11 10:06:38.478593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.167 [2024-12-11 10:06:38.478624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.167 qpair failed and we were unable to recover it. 00:28:29.167 [2024-12-11 10:06:38.478886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.167 [2024-12-11 10:06:38.478917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.167 qpair failed and we were unable to recover it. 00:28:29.167 [2024-12-11 10:06:38.479159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.167 [2024-12-11 10:06:38.479190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.167 qpair failed and we were unable to recover it. 00:28:29.167 [2024-12-11 10:06:38.479485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.167 [2024-12-11 10:06:38.479517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.167 qpair failed and we were unable to recover it. 00:28:29.167 [2024-12-11 10:06:38.479638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.167 [2024-12-11 10:06:38.479669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.167 qpair failed and we were unable to recover it. 00:28:29.167 [2024-12-11 10:06:38.479865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.167 [2024-12-11 10:06:38.479897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.167 qpair failed and we were unable to recover it. 00:28:29.167 [2024-12-11 10:06:38.480185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.167 [2024-12-11 10:06:38.480227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.167 qpair failed and we were unable to recover it. 00:28:29.167 [2024-12-11 10:06:38.480467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.167 [2024-12-11 10:06:38.480498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.167 qpair failed and we were unable to recover it. 00:28:29.167 [2024-12-11 10:06:38.480734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.167 [2024-12-11 10:06:38.480765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.167 qpair failed and we were unable to recover it. 
00:28:29.167 [2024-12-11 10:06:38.480903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.167 [2024-12-11 10:06:38.480933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.167 qpair failed and we were unable to recover it. 00:28:29.167 [2024-12-11 10:06:38.481103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.167 [2024-12-11 10:06:38.481135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.167 qpair failed and we were unable to recover it. 00:28:29.167 [2024-12-11 10:06:38.481395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.167 [2024-12-11 10:06:38.481428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.167 qpair failed and we were unable to recover it. 00:28:29.167 [2024-12-11 10:06:38.481671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.167 [2024-12-11 10:06:38.481702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.167 qpair failed and we were unable to recover it. 00:28:29.167 [2024-12-11 10:06:38.481940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.167 [2024-12-11 10:06:38.481971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.167 qpair failed and we were unable to recover it. 00:28:29.167 [2024-12-11 10:06:38.482153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.167 [2024-12-11 10:06:38.482184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.167 qpair failed and we were unable to recover it. 00:28:29.167 [2024-12-11 10:06:38.482436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.167 [2024-12-11 10:06:38.482468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.167 qpair failed and we were unable to recover it. 00:28:29.167 [2024-12-11 10:06:38.482669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.167 [2024-12-11 10:06:38.482700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.167 qpair failed and we were unable to recover it. 00:28:29.167 [2024-12-11 10:06:38.482880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.167 [2024-12-11 10:06:38.482911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.167 qpair failed and we were unable to recover it. 00:28:29.167 [2024-12-11 10:06:38.483179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.167 [2024-12-11 10:06:38.483210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.167 qpair failed and we were unable to recover it. 
00:28:29.168 [2024-12-11 10:06:38.483332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.168 [2024-12-11 10:06:38.483362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.168 qpair failed and we were unable to recover it. 00:28:29.168 [2024-12-11 10:06:38.483543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.168 [2024-12-11 10:06:38.483574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.168 qpair failed and we were unable to recover it. 00:28:29.168 [2024-12-11 10:06:38.483706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.168 [2024-12-11 10:06:38.483737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.168 qpair failed and we were unable to recover it. 00:28:29.168 [2024-12-11 10:06:38.483932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.168 [2024-12-11 10:06:38.483963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.168 qpair failed and we were unable to recover it. 00:28:29.168 [2024-12-11 10:06:38.484100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.168 [2024-12-11 10:06:38.484136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.168 qpair failed and we were unable to recover it. 00:28:29.168 [2024-12-11 10:06:38.484310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.168 [2024-12-11 10:06:38.484342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.168 qpair failed and we were unable to recover it. 00:28:29.168 [2024-12-11 10:06:38.484524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.168 [2024-12-11 10:06:38.484555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.168 qpair failed and we were unable to recover it. 00:28:29.168 [2024-12-11 10:06:38.484753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.168 [2024-12-11 10:06:38.484784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.168 qpair failed and we were unable to recover it. 00:28:29.168 [2024-12-11 10:06:38.485025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.168 [2024-12-11 10:06:38.485055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.168 qpair failed and we were unable to recover it. 00:28:29.168 [2024-12-11 10:06:38.485179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.168 [2024-12-11 10:06:38.485210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.168 qpair failed and we were unable to recover it. 
00:28:29.168 [2024-12-11 10:06:38.485339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.168 [2024-12-11 10:06:38.485370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.168 qpair failed and we were unable to recover it. 00:28:29.168 [2024-12-11 10:06:38.485580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.168 [2024-12-11 10:06:38.485611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.168 qpair failed and we were unable to recover it. 00:28:29.168 [2024-12-11 10:06:38.485787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.168 [2024-12-11 10:06:38.485818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.168 qpair failed and we were unable to recover it. 00:28:29.168 [2024-12-11 10:06:38.486076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.168 [2024-12-11 10:06:38.486107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.168 qpair failed and we were unable to recover it. 00:28:29.168 [2024-12-11 10:06:38.486230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.168 [2024-12-11 10:06:38.486264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.168 qpair failed and we were unable to recover it. 00:28:29.168 [2024-12-11 10:06:38.486441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.168 [2024-12-11 10:06:38.486472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.168 qpair failed and we were unable to recover it. 00:28:29.168 [2024-12-11 10:06:38.486643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.168 [2024-12-11 10:06:38.486675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.168 qpair failed and we were unable to recover it. 00:28:29.168 [2024-12-11 10:06:38.486949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.168 [2024-12-11 10:06:38.486981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.168 qpair failed and we were unable to recover it. 00:28:29.168 [2024-12-11 10:06:38.487121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.168 [2024-12-11 10:06:38.487152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.168 qpair failed and we were unable to recover it. 00:28:29.168 [2024-12-11 10:06:38.487414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.168 [2024-12-11 10:06:38.487447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.168 qpair failed and we were unable to recover it. 
00:28:29.168 [2024-12-11 10:06:38.487641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.168 [2024-12-11 10:06:38.487672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.168 qpair failed and we were unable to recover it. 00:28:29.168 [2024-12-11 10:06:38.487851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.168 [2024-12-11 10:06:38.487881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.168 qpair failed and we were unable to recover it. 00:28:29.168 [2024-12-11 10:06:38.488134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.168 [2024-12-11 10:06:38.488166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.168 qpair failed and we were unable to recover it. 00:28:29.168 [2024-12-11 10:06:38.488311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.168 [2024-12-11 10:06:38.488343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.168 qpair failed and we were unable to recover it. 00:28:29.168 [2024-12-11 10:06:38.488604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.168 [2024-12-11 10:06:38.488635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.168 qpair failed and we were unable to recover it. 00:28:29.168 [2024-12-11 10:06:38.488765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.168 [2024-12-11 10:06:38.488795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.168 qpair failed and we were unable to recover it. 00:28:29.168 [2024-12-11 10:06:38.488910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.168 [2024-12-11 10:06:38.488942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.168 qpair failed and we were unable to recover it. 00:28:29.168 [2024-12-11 10:06:38.489046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.168 [2024-12-11 10:06:38.489076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.168 qpair failed and we were unable to recover it. 00:28:29.168 [2024-12-11 10:06:38.489343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.168 [2024-12-11 10:06:38.489376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.168 qpair failed and we were unable to recover it. 00:28:29.168 [2024-12-11 10:06:38.489586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.168 [2024-12-11 10:06:38.489617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.168 qpair failed and we were unable to recover it. 
00:28:29.168 [2024-12-11 10:06:38.489747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.168 [2024-12-11 10:06:38.489777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.168 qpair failed and we were unable to recover it. 00:28:29.168 [2024-12-11 10:06:38.489974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.168 [2024-12-11 10:06:38.490006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.168 qpair failed and we were unable to recover it. 00:28:29.168 [2024-12-11 10:06:38.490135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.168 [2024-12-11 10:06:38.490165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.168 qpair failed and we were unable to recover it. 00:28:29.168 [2024-12-11 10:06:38.490389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.168 [2024-12-11 10:06:38.490421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.168 qpair failed and we were unable to recover it. 00:28:29.168 [2024-12-11 10:06:38.490599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.168 [2024-12-11 10:06:38.490630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.168 qpair failed and we were unable to recover it. 00:28:29.168 [2024-12-11 10:06:38.490754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.168 [2024-12-11 10:06:38.490784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.168 qpair failed and we were unable to recover it. 00:28:29.168 [2024-12-11 10:06:38.491007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.168 [2024-12-11 10:06:38.491038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.168 qpair failed and we were unable to recover it. 00:28:29.168 [2024-12-11 10:06:38.491235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.168 [2024-12-11 10:06:38.491268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.168 qpair failed and we were unable to recover it. 00:28:29.168 [2024-12-11 10:06:38.491377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.168 [2024-12-11 10:06:38.491408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.168 qpair failed and we were unable to recover it. 00:28:29.169 [2024-12-11 10:06:38.491604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.169 [2024-12-11 10:06:38.491635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.169 qpair failed and we were unable to recover it. 
00:28:29.169 [2024-12-11 10:06:38.491824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.169 [2024-12-11 10:06:38.491855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.169 qpair failed and we were unable to recover it. 00:28:29.169 [2024-12-11 10:06:38.492110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.169 [2024-12-11 10:06:38.492140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.169 qpair failed and we were unable to recover it. 00:28:29.169 [2024-12-11 10:06:38.492326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.169 [2024-12-11 10:06:38.492359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.169 qpair failed and we were unable to recover it. 00:28:29.169 [2024-12-11 10:06:38.492614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.169 [2024-12-11 10:06:38.492645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.169 qpair failed and we were unable to recover it. 00:28:29.169 [2024-12-11 10:06:38.492768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.169 [2024-12-11 10:06:38.492804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.169 qpair failed and we were unable to recover it. 00:28:29.169 [2024-12-11 10:06:38.492914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.169 [2024-12-11 10:06:38.492945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.169 qpair failed and we were unable to recover it. 00:28:29.169 [2024-12-11 10:06:38.493182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.169 [2024-12-11 10:06:38.493213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.169 qpair failed and we were unable to recover it. 00:28:29.169 [2024-12-11 10:06:38.493350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.169 [2024-12-11 10:06:38.493381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.169 qpair failed and we were unable to recover it. 00:28:29.169 [2024-12-11 10:06:38.493491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.169 [2024-12-11 10:06:38.493522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.169 qpair failed and we were unable to recover it. 00:28:29.169 [2024-12-11 10:06:38.493704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.169 [2024-12-11 10:06:38.493736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.169 qpair failed and we were unable to recover it. 
00:28:29.169 [2024-12-11 10:06:38.493991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.169 [2024-12-11 10:06:38.494021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.169 qpair failed and we were unable to recover it. 00:28:29.169 [2024-12-11 10:06:38.494261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.169 [2024-12-11 10:06:38.494293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.169 qpair failed and we were unable to recover it. 00:28:29.169 [2024-12-11 10:06:38.494468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.169 [2024-12-11 10:06:38.494499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.169 qpair failed and we were unable to recover it. 00:28:29.169 [2024-12-11 10:06:38.494671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.169 [2024-12-11 10:06:38.494702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.169 qpair failed and we were unable to recover it. 00:28:29.169 [2024-12-11 10:06:38.494894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.169 [2024-12-11 10:06:38.494925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.169 qpair failed and we were unable to recover it. 00:28:29.169 [2024-12-11 10:06:38.495054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.169 [2024-12-11 10:06:38.495085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.169 qpair failed and we were unable to recover it. 00:28:29.169 [2024-12-11 10:06:38.495331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.169 [2024-12-11 10:06:38.495363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.169 qpair failed and we were unable to recover it. 00:28:29.169 [2024-12-11 10:06:38.495468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.169 [2024-12-11 10:06:38.495498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.169 qpair failed and we were unable to recover it. 00:28:29.169 [2024-12-11 10:06:38.495687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.169 [2024-12-11 10:06:38.495718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.169 qpair failed and we were unable to recover it. 00:28:29.169 [2024-12-11 10:06:38.495893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.169 [2024-12-11 10:06:38.495924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.169 qpair failed and we were unable to recover it. 
00:28:29.169 [2024-12-11 10:06:38.496055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.169 [2024-12-11 10:06:38.496085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.169 qpair failed and we were unable to recover it. 00:28:29.169 [2024-12-11 10:06:38.496188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.169 [2024-12-11 10:06:38.496229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.169 qpair failed and we were unable to recover it. 00:28:29.169 [2024-12-11 10:06:38.496421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.169 [2024-12-11 10:06:38.496452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.169 qpair failed and we were unable to recover it. 00:28:29.169 [2024-12-11 10:06:38.496715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.169 [2024-12-11 10:06:38.496745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.169 qpair failed and we were unable to recover it. 00:28:29.169 [2024-12-11 10:06:38.496886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.169 [2024-12-11 10:06:38.496917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.169 qpair failed and we were unable to recover it. 00:28:29.169 [2024-12-11 10:06:38.497183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.169 [2024-12-11 10:06:38.497213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.169 qpair failed and we were unable to recover it. 00:28:29.169 [2024-12-11 10:06:38.497429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.169 [2024-12-11 10:06:38.497460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.169 qpair failed and we were unable to recover it. 00:28:29.169 [2024-12-11 10:06:38.497635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.169 [2024-12-11 10:06:38.497667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.169 qpair failed and we were unable to recover it. 00:28:29.169 [2024-12-11 10:06:38.497878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.169 [2024-12-11 10:06:38.497909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.169 qpair failed and we were unable to recover it. 00:28:29.169 [2024-12-11 10:06:38.498189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.169 [2024-12-11 10:06:38.498230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.169 qpair failed and we were unable to recover it. 
00:28:29.169 [2024-12-11 10:06:38.498430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.169 [2024-12-11 10:06:38.498461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.169 qpair failed and we were unable to recover it. 00:28:29.169 [2024-12-11 10:06:38.498577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.169 [2024-12-11 10:06:38.498608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.169 qpair failed and we were unable to recover it. 00:28:29.169 [2024-12-11 10:06:38.498787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.169 [2024-12-11 10:06:38.498817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.169 qpair failed and we were unable to recover it. 00:28:29.169 [2024-12-11 10:06:38.499084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.169 [2024-12-11 10:06:38.499115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.169 qpair failed and we were unable to recover it. 00:28:29.169 [2024-12-11 10:06:38.499312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.169 [2024-12-11 10:06:38.499344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.169 qpair failed and we were unable to recover it. 00:28:29.169 [2024-12-11 10:06:38.499524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.169 [2024-12-11 10:06:38.499555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.169 qpair failed and we were unable to recover it. 00:28:29.169 [2024-12-11 10:06:38.499686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.169 [2024-12-11 10:06:38.499717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.169 qpair failed and we were unable to recover it. 00:28:29.169 [2024-12-11 10:06:38.499895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.169 [2024-12-11 10:06:38.499926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.169 qpair failed and we were unable to recover it. 00:28:29.170 [2024-12-11 10:06:38.500115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.170 [2024-12-11 10:06:38.500146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.170 qpair failed and we were unable to recover it. 00:28:29.170 [2024-12-11 10:06:38.500328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.170 [2024-12-11 10:06:38.500360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.170 qpair failed and we were unable to recover it. 
00:28:29.170 [2024-12-11 10:06:38.500673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.170 [2024-12-11 10:06:38.500706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.170 qpair failed and we were unable to recover it. 00:28:29.170 [2024-12-11 10:06:38.500846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.170 [2024-12-11 10:06:38.500876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.170 qpair failed and we were unable to recover it. 00:28:29.170 [2024-12-11 10:06:38.501001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.170 [2024-12-11 10:06:38.501032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.170 qpair failed and we were unable to recover it. 00:28:29.170 [2024-12-11 10:06:38.501232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.170 [2024-12-11 10:06:38.501265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.170 qpair failed and we were unable to recover it. 00:28:29.170 [2024-12-11 10:06:38.501421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.170 [2024-12-11 10:06:38.501457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.170 qpair failed and we were unable to recover it. 00:28:29.170 [2024-12-11 10:06:38.501650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.170 [2024-12-11 10:06:38.501681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.170 qpair failed and we were unable to recover it. 00:28:29.170 [2024-12-11 10:06:38.501933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.170 [2024-12-11 10:06:38.501964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.170 qpair failed and we were unable to recover it. 00:28:29.170 [2024-12-11 10:06:38.502153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.170 [2024-12-11 10:06:38.502184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.170 qpair failed and we were unable to recover it. 00:28:29.170 [2024-12-11 10:06:38.502379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.170 [2024-12-11 10:06:38.502411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.170 qpair failed and we were unable to recover it. 00:28:29.170 [2024-12-11 10:06:38.502603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.170 [2024-12-11 10:06:38.502636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.170 qpair failed and we were unable to recover it. 
00:28:29.170 [2024-12-11 10:06:38.502818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.170 [2024-12-11 10:06:38.502849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.170 qpair failed and we were unable to recover it. 00:28:29.170 [2024-12-11 10:06:38.503038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.170 [2024-12-11 10:06:38.503068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.170 qpair failed and we were unable to recover it. 00:28:29.170 [2024-12-11 10:06:38.503212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.170 [2024-12-11 10:06:38.503258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.170 qpair failed and we were unable to recover it. 00:28:29.170 [2024-12-11 10:06:38.503449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.170 [2024-12-11 10:06:38.503481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.170 qpair failed and we were unable to recover it. 00:28:29.170 [2024-12-11 10:06:38.503744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.170 [2024-12-11 10:06:38.503777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.170 qpair failed and we were unable to recover it. 00:28:29.170 [2024-12-11 10:06:38.503904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.170 [2024-12-11 10:06:38.503935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.170 qpair failed and we were unable to recover it. 00:28:29.170 [2024-12-11 10:06:38.504076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.170 [2024-12-11 10:06:38.504109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.170 qpair failed and we were unable to recover it. 00:28:29.170 [2024-12-11 10:06:38.504250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.170 [2024-12-11 10:06:38.504284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.170 qpair failed and we were unable to recover it. 00:28:29.170 [2024-12-11 10:06:38.504558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.170 [2024-12-11 10:06:38.504590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.170 qpair failed and we were unable to recover it. 00:28:29.170 [2024-12-11 10:06:38.504851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.170 [2024-12-11 10:06:38.504882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.170 qpair failed and we were unable to recover it. 
00:28:29.170 [2024-12-11 10:06:38.505147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.170 [2024-12-11 10:06:38.505179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.170 qpair failed and we were unable to recover it. 00:28:29.170 [2024-12-11 10:06:38.505369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.170 [2024-12-11 10:06:38.505403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.170 qpair failed and we were unable to recover it. 00:28:29.170 [2024-12-11 10:06:38.505597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.170 [2024-12-11 10:06:38.505629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.170 qpair failed and we were unable to recover it. 00:28:29.170 [2024-12-11 10:06:38.505903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.170 [2024-12-11 10:06:38.505936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.170 qpair failed and we were unable to recover it. 00:28:29.170 [2024-12-11 10:06:38.506152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.170 [2024-12-11 10:06:38.506185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.170 qpair failed and we were unable to recover it. 00:28:29.170 [2024-12-11 10:06:38.506310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.170 [2024-12-11 10:06:38.506341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.170 qpair failed and we were unable to recover it. 00:28:29.170 [2024-12-11 10:06:38.506574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.170 [2024-12-11 10:06:38.506605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.170 qpair failed and we were unable to recover it. 00:28:29.170 [2024-12-11 10:06:38.506778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.170 [2024-12-11 10:06:38.506811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.170 qpair failed and we were unable to recover it. 00:28:29.170 [2024-12-11 10:06:38.507019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.170 [2024-12-11 10:06:38.507050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.170 qpair failed and we were unable to recover it. 00:28:29.170 [2024-12-11 10:06:38.507246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.170 [2024-12-11 10:06:38.507278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.170 qpair failed and we were unable to recover it. 
00:28:29.170 [2024-12-11 10:06:38.507474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.170 [2024-12-11 10:06:38.507506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420 00:28:29.170 qpair failed and we were unable to recover it. 00:28:29.170 [2024-12-11 10:06:38.507759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.170 [2024-12-11 10:06:38.507819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.170 qpair failed and we were unable to recover it. 00:28:29.170 [2024-12-11 10:06:38.508023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.170 [2024-12-11 10:06:38.508057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.170 qpair failed and we were unable to recover it. 00:28:29.170 [2024-12-11 10:06:38.508279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.170 [2024-12-11 10:06:38.508313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.170 qpair failed and we were unable to recover it. 00:28:29.170 [2024-12-11 10:06:38.508428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.170 [2024-12-11 10:06:38.508461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.170 qpair failed and we were unable to recover it. 00:28:29.170 [2024-12-11 10:06:38.508721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.170 [2024-12-11 10:06:38.508753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.170 qpair failed and we were unable to recover it. 00:28:29.170 [2024-12-11 10:06:38.508899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.170 [2024-12-11 10:06:38.508931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.170 qpair failed and we were unable to recover it. 00:28:29.170 [2024-12-11 10:06:38.509103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.171 [2024-12-11 10:06:38.509135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.171 qpair failed and we were unable to recover it. 00:28:29.171 [2024-12-11 10:06:38.509320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.171 [2024-12-11 10:06:38.509354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.171 qpair failed and we were unable to recover it. 00:28:29.171 [2024-12-11 10:06:38.509534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.171 [2024-12-11 10:06:38.509567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.171 qpair failed and we were unable to recover it. 
00:28:29.171 [2024-12-11 10:06:38.509751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.171 [2024-12-11 10:06:38.509783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.171 qpair failed and we were unable to recover it. 00:28:29.171 [2024-12-11 10:06:38.510048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.171 [2024-12-11 10:06:38.510081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.171 qpair failed and we were unable to recover it. 00:28:29.171 [2024-12-11 10:06:38.510229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.171 [2024-12-11 10:06:38.510263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.171 qpair failed and we were unable to recover it. 00:28:29.171 [2024-12-11 10:06:38.510391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.171 [2024-12-11 10:06:38.510424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.171 qpair failed and we were unable to recover it. 00:28:29.171 [2024-12-11 10:06:38.510608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.171 [2024-12-11 10:06:38.510650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.171 qpair failed and we were unable to recover it. 00:28:29.171 [2024-12-11 10:06:38.510826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.171 [2024-12-11 10:06:38.510859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.171 qpair failed and we were unable to recover it. 00:28:29.171 [2024-12-11 10:06:38.510975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.171 [2024-12-11 10:06:38.511008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.171 qpair failed and we were unable to recover it. 00:28:29.171 [2024-12-11 10:06:38.511214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.171 [2024-12-11 10:06:38.511258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.171 qpair failed and we were unable to recover it. 00:28:29.171 [2024-12-11 10:06:38.511442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.171 [2024-12-11 10:06:38.511473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.171 qpair failed and we were unable to recover it. 00:28:29.171 [2024-12-11 10:06:38.511648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.171 [2024-12-11 10:06:38.511680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.171 qpair failed and we were unable to recover it. 
00:28:29.171 [2024-12-11 10:06:38.511783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.171 [2024-12-11 10:06:38.511816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.171 qpair failed and we were unable to recover it. 00:28:29.171 [2024-12-11 10:06:38.511926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.171 [2024-12-11 10:06:38.511959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.171 qpair failed and we were unable to recover it. 00:28:29.171 [2024-12-11 10:06:38.512079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.171 [2024-12-11 10:06:38.512112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.171 qpair failed and we were unable to recover it. 00:28:29.171 [2024-12-11 10:06:38.512245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.171 [2024-12-11 10:06:38.512280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.171 qpair failed and we were unable to recover it. 00:28:29.171 [2024-12-11 10:06:38.512476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.171 [2024-12-11 10:06:38.512509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.171 qpair failed and we were unable to recover it. 00:28:29.171 [2024-12-11 10:06:38.512791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.171 [2024-12-11 10:06:38.512823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.171 qpair failed and we were unable to recover it. 00:28:29.171 [2024-12-11 10:06:38.513062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.171 [2024-12-11 10:06:38.513095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.171 qpair failed and we were unable to recover it. 00:28:29.171 [2024-12-11 10:06:38.513340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.171 [2024-12-11 10:06:38.513374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.171 qpair failed and we were unable to recover it. 00:28:29.171 [2024-12-11 10:06:38.513607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.171 [2024-12-11 10:06:38.513640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.171 qpair failed and we were unable to recover it. 00:28:29.171 [2024-12-11 10:06:38.513824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.171 [2024-12-11 10:06:38.513857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.171 qpair failed and we were unable to recover it. 
00:28:29.171 [2024-12-11 10:06:38.514049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.171 [2024-12-11 10:06:38.514082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.171 qpair failed and we were unable to recover it. 00:28:29.171 [2024-12-11 10:06:38.514295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.171 [2024-12-11 10:06:38.514328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.171 qpair failed and we were unable to recover it. 00:28:29.171 [2024-12-11 10:06:38.514445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.171 [2024-12-11 10:06:38.514477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.171 qpair failed and we were unable to recover it. 00:28:29.171 [2024-12-11 10:06:38.514603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.171 [2024-12-11 10:06:38.514636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.171 qpair failed and we were unable to recover it. 00:28:29.171 [2024-12-11 10:06:38.514747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.171 [2024-12-11 10:06:38.514780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.171 qpair failed and we were unable to recover it. 00:28:29.171 [2024-12-11 10:06:38.514954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.171 [2024-12-11 10:06:38.514985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.171 qpair failed and we were unable to recover it. 00:28:29.171 [2024-12-11 10:06:38.515103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.171 [2024-12-11 10:06:38.515136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.171 qpair failed and we were unable to recover it. 00:28:29.171 [2024-12-11 10:06:38.515256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.171 [2024-12-11 10:06:38.515289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.171 qpair failed and we were unable to recover it. 00:28:29.171 10:06:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:29.171 [2024-12-11 10:06:38.515556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.171 [2024-12-11 10:06:38.515591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.171 qpair failed and we were unable to recover it. 
00:28:29.171 [2024-12-11 10:06:38.515729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.171 [2024-12-11 10:06:38.515763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.171 qpair failed and we were unable to recover it. 00:28:29.171 10:06:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:29.171 [2024-12-11 10:06:38.515974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.171 [2024-12-11 10:06:38.516014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.171 qpair failed and we were unable to recover it. 00:28:29.172 [2024-12-11 10:06:38.516130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.172 [2024-12-11 10:06:38.516162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.172 qpair failed and we were unable to recover it. 00:28:29.172 10:06:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:29.172 [2024-12-11 10:06:38.516308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.172 [2024-12-11 10:06:38.516342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.172 qpair failed and we were unable to recover it. 00:28:29.172 [2024-12-11 10:06:38.516518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.172 [2024-12-11 10:06:38.516554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.172 qpair failed and we were unable to recover it. 00:28:29.172 10:06:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:29.172 [2024-12-11 10:06:38.516746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.172 [2024-12-11 10:06:38.516779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.172 qpair failed and we were unable to recover it. 00:28:29.172 10:06:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:29.172 [2024-12-11 10:06:38.516967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.172 [2024-12-11 10:06:38.517001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.172 qpair failed and we were unable to recover it. 00:28:29.172 [2024-12-11 10:06:38.517108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.172 [2024-12-11 10:06:38.517141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.172 qpair failed and we were unable to recover it. 
00:28:29.172 [2024-12-11 10:06:38.517335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.172 [2024-12-11 10:06:38.517369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.172 qpair failed and we were unable to recover it. 00:28:29.172 [2024-12-11 10:06:38.517504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.172 [2024-12-11 10:06:38.517537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.172 qpair failed and we were unable to recover it. 00:28:29.172 [2024-12-11 10:06:38.517693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.172 [2024-12-11 10:06:38.517726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.172 qpair failed and we were unable to recover it. 00:28:29.172 [2024-12-11 10:06:38.517850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.172 [2024-12-11 10:06:38.517882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.172 qpair failed and we were unable to recover it. 00:28:29.172 [2024-12-11 10:06:38.518010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.172 [2024-12-11 10:06:38.518043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.172 qpair failed and we were unable to recover it. 00:28:29.172 [2024-12-11 10:06:38.518244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.172 [2024-12-11 10:06:38.518284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.172 qpair failed and we were unable to recover it. 00:28:29.172 [2024-12-11 10:06:38.518468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.172 [2024-12-11 10:06:38.518501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.172 qpair failed and we were unable to recover it. 00:28:29.172 [2024-12-11 10:06:38.518699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.172 [2024-12-11 10:06:38.518731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.172 qpair failed and we were unable to recover it. 00:28:29.172 [2024-12-11 10:06:38.518912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.172 [2024-12-11 10:06:38.518945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.172 qpair failed and we were unable to recover it. 00:28:29.172 [2024-12-11 10:06:38.519057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.172 [2024-12-11 10:06:38.519093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.172 qpair failed and we were unable to recover it. 
00:28:29.172 [2024-12-11 10:06:38.519283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.172 [2024-12-11 10:06:38.519317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.172 qpair failed and we were unable to recover it. 00:28:29.172 [2024-12-11 10:06:38.519442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.172 [2024-12-11 10:06:38.519475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.172 qpair failed and we were unable to recover it. 00:28:29.172 [2024-12-11 10:06:38.519658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.172 [2024-12-11 10:06:38.519692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.172 qpair failed and we were unable to recover it. 00:28:29.172 [2024-12-11 10:06:38.519803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.172 [2024-12-11 10:06:38.519836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.172 qpair failed and we were unable to recover it. 00:28:29.172 [2024-12-11 10:06:38.520029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.172 [2024-12-11 10:06:38.520062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.172 qpair failed and we were unable to recover it. 00:28:29.172 [2024-12-11 10:06:38.520177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.172 [2024-12-11 10:06:38.520209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.172 qpair failed and we were unable to recover it. 00:28:29.172 [2024-12-11 10:06:38.520337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.172 [2024-12-11 10:06:38.520369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.172 qpair failed and we were unable to recover it. 00:28:29.172 [2024-12-11 10:06:38.520499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.172 [2024-12-11 10:06:38.520532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.172 qpair failed and we were unable to recover it. 00:28:29.172 [2024-12-11 10:06:38.520650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.172 [2024-12-11 10:06:38.520682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.172 qpair failed and we were unable to recover it. 00:28:29.172 [2024-12-11 10:06:38.520815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.172 [2024-12-11 10:06:38.520848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.172 qpair failed and we were unable to recover it. 
00:28:29.172 [2024-12-11 10:06:38.520967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.172 [2024-12-11 10:06:38.520999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.172 qpair failed and we were unable to recover it. 00:28:29.172 [2024-12-11 10:06:38.521136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.172 [2024-12-11 10:06:38.521168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.172 qpair failed and we were unable to recover it. 00:28:29.172 [2024-12-11 10:06:38.521309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.172 [2024-12-11 10:06:38.521343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.172 qpair failed and we were unable to recover it. 00:28:29.172 [2024-12-11 10:06:38.521532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.172 [2024-12-11 10:06:38.521564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.172 qpair failed and we were unable to recover it. 00:28:29.172 [2024-12-11 10:06:38.521684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.172 [2024-12-11 10:06:38.521717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.172 qpair failed and we were unable to recover it. 00:28:29.172 [2024-12-11 10:06:38.521841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.172 [2024-12-11 10:06:38.521875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.172 qpair failed and we were unable to recover it. 00:28:29.172 [2024-12-11 10:06:38.522054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.172 [2024-12-11 10:06:38.522086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.172 qpair failed and we were unable to recover it. 00:28:29.172 [2024-12-11 10:06:38.522276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.172 [2024-12-11 10:06:38.522309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.172 qpair failed and we were unable to recover it. 00:28:29.172 [2024-12-11 10:06:38.522446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.172 [2024-12-11 10:06:38.522479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.172 qpair failed and we were unable to recover it. 00:28:29.172 [2024-12-11 10:06:38.522652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.172 [2024-12-11 10:06:38.522684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.172 qpair failed and we were unable to recover it. 
00:28:29.172 [2024-12-11 10:06:38.522885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.172 [2024-12-11 10:06:38.522919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.172 qpair failed and we were unable to recover it. 00:28:29.172 [2024-12-11 10:06:38.523114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.173 [2024-12-11 10:06:38.523147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.173 qpair failed and we were unable to recover it. 00:28:29.173 [2024-12-11 10:06:38.523333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.173 [2024-12-11 10:06:38.523368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.173 qpair failed and we were unable to recover it. 00:28:29.173 [2024-12-11 10:06:38.523551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.173 [2024-12-11 10:06:38.523583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.173 qpair failed and we were unable to recover it. 00:28:29.173 [2024-12-11 10:06:38.523857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.173 [2024-12-11 10:06:38.523891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.173 qpair failed and we were unable to recover it. 00:28:29.173 [2024-12-11 10:06:38.523995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.173 [2024-12-11 10:06:38.524028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.173 qpair failed and we were unable to recover it. 00:28:29.173 [2024-12-11 10:06:38.524228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.173 [2024-12-11 10:06:38.524261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.173 qpair failed and we were unable to recover it. 00:28:29.173 [2024-12-11 10:06:38.524385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.173 [2024-12-11 10:06:38.524427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.173 qpair failed and we were unable to recover it. 00:28:29.173 [2024-12-11 10:06:38.524615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.173 [2024-12-11 10:06:38.524649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.173 qpair failed and we were unable to recover it. 00:28:29.173 [2024-12-11 10:06:38.524869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.173 [2024-12-11 10:06:38.524902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.173 qpair failed and we were unable to recover it. 
00:28:29.173 [2024-12-11 10:06:38.525014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.173 [2024-12-11 10:06:38.525046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.173 qpair failed and we were unable to recover it. 00:28:29.173 [2024-12-11 10:06:38.525248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.173 [2024-12-11 10:06:38.525281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.173 qpair failed and we were unable to recover it. 00:28:29.173 [2024-12-11 10:06:38.525492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.173 [2024-12-11 10:06:38.525524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.173 qpair failed and we were unable to recover it. 00:28:29.173 [2024-12-11 10:06:38.525655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.173 [2024-12-11 10:06:38.525687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.173 qpair failed and we were unable to recover it. 00:28:29.173 [2024-12-11 10:06:38.525817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.173 [2024-12-11 10:06:38.525854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.173 qpair failed and we were unable to recover it. 00:28:29.173 [2024-12-11 10:06:38.526095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.173 [2024-12-11 10:06:38.526133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.173 qpair failed and we were unable to recover it. 00:28:29.173 [2024-12-11 10:06:38.526275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.173 [2024-12-11 10:06:38.526311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.173 qpair failed and we were unable to recover it. 00:28:29.173 [2024-12-11 10:06:38.526488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.173 [2024-12-11 10:06:38.526521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.173 qpair failed and we were unable to recover it. 00:28:29.173 [2024-12-11 10:06:38.526652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.173 [2024-12-11 10:06:38.526685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.173 qpair failed and we were unable to recover it. 00:28:29.173 [2024-12-11 10:06:38.526811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.173 [2024-12-11 10:06:38.526846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.173 qpair failed and we were unable to recover it. 
00:28:29.173 [2024-12-11 10:06:38.527038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.173 [2024-12-11 10:06:38.527070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.173 qpair failed and we were unable to recover it. 00:28:29.173 [2024-12-11 10:06:38.527186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.173 [2024-12-11 10:06:38.527229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.173 qpair failed and we were unable to recover it. 00:28:29.173 [2024-12-11 10:06:38.527366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.173 [2024-12-11 10:06:38.527399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.173 qpair failed and we were unable to recover it. 00:28:29.173 [2024-12-11 10:06:38.527506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.173 [2024-12-11 10:06:38.527537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.173 qpair failed and we were unable to recover it. 00:28:29.173 [2024-12-11 10:06:38.527730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.173 [2024-12-11 10:06:38.527763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.173 qpair failed and we were unable to recover it. 00:28:29.173 [2024-12-11 10:06:38.527969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.173 [2024-12-11 10:06:38.528002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.173 qpair failed and we were unable to recover it. 00:28:29.173 [2024-12-11 10:06:38.528125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.173 [2024-12-11 10:06:38.528157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.173 qpair failed and we were unable to recover it. 00:28:29.173 [2024-12-11 10:06:38.528359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.173 [2024-12-11 10:06:38.528392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.173 qpair failed and we were unable to recover it. 00:28:29.173 [2024-12-11 10:06:38.528504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.173 [2024-12-11 10:06:38.528537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.173 qpair failed and we were unable to recover it. 00:28:29.173 [2024-12-11 10:06:38.528674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.173 [2024-12-11 10:06:38.528706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.173 qpair failed and we were unable to recover it. 
00:28:29.173 [2024-12-11 10:06:38.528818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.173 [2024-12-11 10:06:38.528850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.173 qpair failed and we were unable to recover it. 00:28:29.173 [2024-12-11 10:06:38.528971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.173 [2024-12-11 10:06:38.529004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.173 qpair failed and we were unable to recover it. 00:28:29.173 [2024-12-11 10:06:38.529117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.173 [2024-12-11 10:06:38.529149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.173 qpair failed and we were unable to recover it. 00:28:29.173 [2024-12-11 10:06:38.529289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.173 [2024-12-11 10:06:38.529322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.173 qpair failed and we were unable to recover it. 00:28:29.173 [2024-12-11 10:06:38.529496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.173 [2024-12-11 10:06:38.529529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.173 qpair failed and we were unable to recover it. 00:28:29.173 [2024-12-11 10:06:38.529697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.173 [2024-12-11 10:06:38.529729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.173 qpair failed and we were unable to recover it. 00:28:29.173 [2024-12-11 10:06:38.529906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.173 [2024-12-11 10:06:38.529939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.173 qpair failed and we were unable to recover it. 00:28:29.173 [2024-12-11 10:06:38.530078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.173 [2024-12-11 10:06:38.530112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.173 qpair failed and we were unable to recover it. 00:28:29.173 [2024-12-11 10:06:38.530291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.173 [2024-12-11 10:06:38.530325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.173 qpair failed and we were unable to recover it. 00:28:29.173 [2024-12-11 10:06:38.530532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.173 [2024-12-11 10:06:38.530565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.173 qpair failed and we were unable to recover it. 
00:28:29.173 [2024-12-11 10:06:38.530756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.173 [2024-12-11 10:06:38.530789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.174 qpair failed and we were unable to recover it. 00:28:29.174 [2024-12-11 10:06:38.530895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.174 [2024-12-11 10:06:38.530926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.174 qpair failed and we were unable to recover it. 00:28:29.174 [2024-12-11 10:06:38.531064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.174 [2024-12-11 10:06:38.531124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.174 qpair failed and we were unable to recover it. 00:28:29.174 [2024-12-11 10:06:38.531324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.174 [2024-12-11 10:06:38.531362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.174 qpair failed and we were unable to recover it. 00:28:29.174 [2024-12-11 10:06:38.531493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.174 [2024-12-11 10:06:38.531525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.174 qpair failed and we were unable to recover it. 00:28:29.174 [2024-12-11 10:06:38.531664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.174 [2024-12-11 10:06:38.531696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.174 qpair failed and we were unable to recover it. 00:28:29.174 [2024-12-11 10:06:38.531886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.174 [2024-12-11 10:06:38.531919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.174 qpair failed and we were unable to recover it. 00:28:29.174 [2024-12-11 10:06:38.532061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.174 [2024-12-11 10:06:38.532092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.174 qpair failed and we were unable to recover it. 00:28:29.174 [2024-12-11 10:06:38.532211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.174 [2024-12-11 10:06:38.532257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.174 qpair failed and we were unable to recover it. 00:28:29.174 [2024-12-11 10:06:38.532442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.174 [2024-12-11 10:06:38.532476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.174 qpair failed and we were unable to recover it. 
00:28:29.174 [2024-12-11 10:06:38.532654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.174 [2024-12-11 10:06:38.532687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.174 qpair failed and we were unable to recover it. 00:28:29.174 [2024-12-11 10:06:38.532807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.174 [2024-12-11 10:06:38.532838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.174 qpair failed and we were unable to recover it. 00:28:29.174 [2024-12-11 10:06:38.532952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.174 [2024-12-11 10:06:38.532984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.174 qpair failed and we were unable to recover it. 00:28:29.174 [2024-12-11 10:06:38.533175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.174 [2024-12-11 10:06:38.533209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.174 qpair failed and we were unable to recover it. 00:28:29.174 [2024-12-11 10:06:38.533330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.174 [2024-12-11 10:06:38.533362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.174 qpair failed and we were unable to recover it. 00:28:29.174 [2024-12-11 10:06:38.533495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.174 [2024-12-11 10:06:38.533529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.174 qpair failed and we were unable to recover it. 00:28:29.174 [2024-12-11 10:06:38.533654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.174 [2024-12-11 10:06:38.533687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.174 qpair failed and we were unable to recover it. 00:28:29.174 [2024-12-11 10:06:38.533846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.174 [2024-12-11 10:06:38.533877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.174 qpair failed and we were unable to recover it. 00:28:29.174 [2024-12-11 10:06:38.533988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.174 [2024-12-11 10:06:38.534022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.174 qpair failed and we were unable to recover it. 00:28:29.174 [2024-12-11 10:06:38.534209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.174 [2024-12-11 10:06:38.534252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.174 qpair failed and we were unable to recover it. 
00:28:29.174 [2024-12-11 10:06:38.534452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.174 [2024-12-11 10:06:38.534485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.174 qpair failed and we were unable to recover it. 00:28:29.174 [2024-12-11 10:06:38.534670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.174 [2024-12-11 10:06:38.534702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.174 qpair failed and we were unable to recover it. 00:28:29.174 [2024-12-11 10:06:38.534810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.174 [2024-12-11 10:06:38.534843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.174 qpair failed and we were unable to recover it. 00:28:29.174 [2024-12-11 10:06:38.535026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.174 [2024-12-11 10:06:38.535057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.174 qpair failed and we were unable to recover it. 00:28:29.174 [2024-12-11 10:06:38.535175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.174 [2024-12-11 10:06:38.535210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.174 qpair failed and we were unable to recover it. 00:28:29.174 [2024-12-11 10:06:38.535339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.174 [2024-12-11 10:06:38.535371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.174 qpair failed and we were unable to recover it. 00:28:29.174 [2024-12-11 10:06:38.535490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.174 [2024-12-11 10:06:38.535522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.174 qpair failed and we were unable to recover it. 00:28:29.174 [2024-12-11 10:06:38.535700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.174 [2024-12-11 10:06:38.535731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.174 qpair failed and we were unable to recover it. 00:28:29.174 [2024-12-11 10:06:38.535843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.174 [2024-12-11 10:06:38.535873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.174 qpair failed and we were unable to recover it. 00:28:29.174 [2024-12-11 10:06:38.536009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.174 [2024-12-11 10:06:38.536046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.174 qpair failed and we were unable to recover it. 
00:28:29.176 10:06:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:29.176 [2024-12-11 10:06:38.547647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.176 [2024-12-11 10:06:38.547679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:29.176 qpair failed and we were unable to recover it.
00:28:29.176 [2024-12-11 10:06:38.547804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.176 [2024-12-11 10:06:38.547835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:29.176 qpair failed and we were unable to recover it.
00:28:29.176 [2024-12-11 10:06:38.547945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.176 [2024-12-11 10:06:38.547979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:29.176 10:06:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:28:29.176 qpair failed and we were unable to recover it.
00:28:29.176 [2024-12-11 10:06:38.548090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.176 [2024-12-11 10:06:38.548122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:29.176 qpair failed and we were unable to recover it.
00:28:29.176 [2024-12-11 10:06:38.548237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.176 [2024-12-11 10:06:38.548270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:29.176 qpair failed and we were unable to recover it.
00:28:29.176 10:06:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:29.176 [2024-12-11 10:06:38.548378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.176 [2024-12-11 10:06:38.548410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:29.176 qpair failed and we were unable to recover it.
00:28:29.176 [2024-12-11 10:06:38.548514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.176 [2024-12-11 10:06:38.548546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:29.176 qpair failed and we were unable to recover it.
00:28:29.176 10:06:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:29.176 [2024-12-11 10:06:38.548726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.176 [2024-12-11 10:06:38.548764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb62c000b90 with addr=10.0.0.2, port=4420
00:28:29.176 qpair failed and we were unable to recover it.
00:28:29.178 [2024-12-11 10:06:38.558582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.178 [2024-12-11 10:06:38.558616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:29.178 qpair failed and we were unable to recover it.
00:28:29.178 [2024-12-11 10:06:38.558735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.178 [2024-12-11 10:06:38.558768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:29.178 qpair failed and we were unable to recover it.
00:28:29.178 [2024-12-11 10:06:38.558888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.178 [2024-12-11 10:06:38.558922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:29.178 qpair failed and we were unable to recover it.
00:28:29.178 [2024-12-11 10:06:38.559100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.178 [2024-12-11 10:06:38.559132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:29.178 qpair failed and we were unable to recover it.
00:28:29.178 [2024-12-11 10:06:38.559305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.178 [2024-12-11 10:06:38.559339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:29.178 qpair failed and we were unable to recover it.
00:28:29.178 [2024-12-11 10:06:38.559448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.178 [2024-12-11 10:06:38.559481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:29.178 qpair failed and we were unable to recover it.
00:28:29.178 [2024-12-11 10:06:38.559613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.178 [2024-12-11 10:06:38.559646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:29.178 qpair failed and we were unable to recover it.
00:28:29.178 [2024-12-11 10:06:38.559756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.178 [2024-12-11 10:06:38.559788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:29.178 qpair failed and we were unable to recover it.
00:28:29.178 [2024-12-11 10:06:38.559894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.178 [2024-12-11 10:06:38.559927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:29.178 qpair failed and we were unable to recover it.
00:28:29.178 [2024-12-11 10:06:38.560038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.178 [2024-12-11 10:06:38.560070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420
00:28:29.178 qpair failed and we were unable to recover it.
00:28:29.178 [2024-12-11 10:06:38.560202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.178 [2024-12-11 10:06:38.560245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.178 qpair failed and we were unable to recover it. 00:28:29.178 [2024-12-11 10:06:38.560371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.178 [2024-12-11 10:06:38.560404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.178 qpair failed and we were unable to recover it. 00:28:29.178 [2024-12-11 10:06:38.560571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.178 [2024-12-11 10:06:38.560603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.178 qpair failed and we were unable to recover it. 00:28:29.178 [2024-12-11 10:06:38.560713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.178 [2024-12-11 10:06:38.560745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.178 qpair failed and we were unable to recover it. 00:28:29.178 [2024-12-11 10:06:38.560936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.178 [2024-12-11 10:06:38.560968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.178 qpair failed and we were unable to recover it. 00:28:29.178 [2024-12-11 10:06:38.561095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.178 [2024-12-11 10:06:38.561127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.178 qpair failed and we were unable to recover it. 00:28:29.178 [2024-12-11 10:06:38.561255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.178 [2024-12-11 10:06:38.561289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.178 qpair failed and we were unable to recover it. 00:28:29.178 [2024-12-11 10:06:38.561400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.178 [2024-12-11 10:06:38.561432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.178 qpair failed and we were unable to recover it. 00:28:29.178 [2024-12-11 10:06:38.561678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.178 [2024-12-11 10:06:38.561710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.178 qpair failed and we were unable to recover it. 00:28:29.178 [2024-12-11 10:06:38.561883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.178 [2024-12-11 10:06:38.561921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.178 qpair failed and we were unable to recover it. 
00:28:29.178 [2024-12-11 10:06:38.562097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.178 [2024-12-11 10:06:38.562129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.178 qpair failed and we were unable to recover it. 00:28:29.178 [2024-12-11 10:06:38.562308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.178 [2024-12-11 10:06:38.562342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.178 qpair failed and we were unable to recover it. 00:28:29.178 [2024-12-11 10:06:38.562452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.179 [2024-12-11 10:06:38.562485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.179 qpair failed and we were unable to recover it. 00:28:29.179 [2024-12-11 10:06:38.562617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.179 [2024-12-11 10:06:38.562648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.179 qpair failed and we were unable to recover it. 00:28:29.179 [2024-12-11 10:06:38.562767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.179 [2024-12-11 10:06:38.562799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.179 qpair failed and we were unable to recover it. 00:28:29.179 [2024-12-11 10:06:38.562988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.179 [2024-12-11 10:06:38.563021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.179 qpair failed and we were unable to recover it. 00:28:29.179 [2024-12-11 10:06:38.563236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.179 [2024-12-11 10:06:38.563269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.179 qpair failed and we were unable to recover it. 00:28:29.179 [2024-12-11 10:06:38.563379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.179 [2024-12-11 10:06:38.563411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.179 qpair failed and we were unable to recover it. 00:28:29.179 [2024-12-11 10:06:38.563605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.179 [2024-12-11 10:06:38.563638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.179 qpair failed and we were unable to recover it. 00:28:29.179 [2024-12-11 10:06:38.563758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.179 [2024-12-11 10:06:38.563790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.179 qpair failed and we were unable to recover it. 
00:28:29.179 [2024-12-11 10:06:38.563922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.179 [2024-12-11 10:06:38.563955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.179 qpair failed and we were unable to recover it. 00:28:29.179 [2024-12-11 10:06:38.564066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.179 [2024-12-11 10:06:38.564099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.179 qpair failed and we were unable to recover it. 00:28:29.179 [2024-12-11 10:06:38.564290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.179 [2024-12-11 10:06:38.564324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.179 qpair failed and we were unable to recover it. 00:28:29.179 [2024-12-11 10:06:38.564447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.179 [2024-12-11 10:06:38.564480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.179 qpair failed and we were unable to recover it. 00:28:29.179 [2024-12-11 10:06:38.564593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.179 [2024-12-11 10:06:38.564626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.179 qpair failed and we were unable to recover it. 00:28:29.179 [2024-12-11 10:06:38.564747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.179 [2024-12-11 10:06:38.564778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.179 qpair failed and we were unable to recover it. 00:28:29.179 [2024-12-11 10:06:38.564952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.179 [2024-12-11 10:06:38.564984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.179 qpair failed and we were unable to recover it. 00:28:29.179 [2024-12-11 10:06:38.565161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.179 [2024-12-11 10:06:38.565194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.179 qpair failed and we were unable to recover it. 00:28:29.179 [2024-12-11 10:06:38.565382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.179 [2024-12-11 10:06:38.565414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.179 qpair failed and we were unable to recover it. 00:28:29.179 [2024-12-11 10:06:38.565529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.179 [2024-12-11 10:06:38.565561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb630000b90 with addr=10.0.0.2, port=4420 00:28:29.179 qpair failed and we were unable to recover it. 
[... tqpair=0x7fb630000b90 failure triplets continue through 10:06:38.566 ...]
00:28:29.179 [2024-12-11 10:06:38.566723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.179 [2024-12-11 10:06:38.566765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:29.179 qpair failed and we were unable to recover it.
[... the triplet now repeats with tqpair=0xfe4500 through 10:06:38.574 ...]
00:28:29.180 Malloc0
[... two further tqpair=0xfe4500 failure triplets at 10:06:38.574 ...]
[... tqpair=0xfe4500 failure triplets continue from 10:06:38.574 through 10:06:38.575, interleaved with the shell trace below ...]
00:28:29.180 10:06:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:29.180 10:06:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:28:29.181 10:06:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:29.181 10:06:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
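The rpc_cmd helper traced at host/target_disconnect.sh@21 comes from autotest_common.sh and forwards its arguments to SPDK's scripts/rpc.py against the running target. A rough direct equivalent is sketched below; the repository-relative rpc.py path and the default RPC socket are assumptions, and -t tcp / -o are mirrored verbatim from the trace:

    # Create the NVMe-oF TCP transport on the target (default socket /var/tmp/spdk.sock):
    ./scripts/rpc.py nvmf_create_transport -t tcp -o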
00:28:29.181 [2024-12-11 10:06:38.576107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.181 [2024-12-11 10:06:38.576139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:29.181 qpair failed and we were unable to recover it.
[... the same tqpair=0xfe4500 failure triplet repeats from 10:06:38.576 through 10:06:38.581 ...]
00:28:29.181 [2024-12-11 10:06:38.581646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.181 [2024-12-11 10:06:38.581678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:29.181 qpair failed and we were unable to recover it.
00:28:29.181 [2024-12-11 10:06:38.581720] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[... tqpair=0xfe4500 failure triplets continue through 10:06:38.583 ...]
00:28:29.182 [2024-12-11 10:06:38.583608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.182 [2024-12-11 10:06:38.583640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:29.182 qpair failed and we were unable to recover it.
[... the same tqpair=0xfe4500 failure triplet repeats from 10:06:38.583 through 10:06:38.587 ...]
[... tqpair=0xfe4500 failure triplets continue from 10:06:38.587 through 10:06:38.588, interleaved with the shell trace below ...]
00:28:29.182 10:06:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:29.182 10:06:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:28:29.182 10:06:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:29.182 10:06:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
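The subsystem-creation step traced at host/target_disconnect.sh@22 maps onto a direct rpc.py call like the one below (same assumptions as the transport sketch above); -a allows any host NQN to connect and -s sets the subsystem serial number:

    # Create subsystem cnode1, open to any host, with a fixed serial number:
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001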
00:28:29.182 [2024-12-11 10:06:38.588711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.182 [2024-12-11 10:06:38.588743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420
00:28:29.182 qpair failed and we were unable to recover it.
[... the same tqpair=0xfe4500 failure triplet repeats from 10:06:38.588 through 10:06:38.594 ...]
[... tqpair=0xfe4500 failure triplets continue from 10:06:38.594 through 10:06:38.598, interleaved with the shell trace below ...]
00:28:29.183 10:06:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:29.183 10:06:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:28:29.183 10:06:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:29.183 10:06:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:29.184 [2024-12-11 10:06:38.598269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.184 [2024-12-11 10:06:38.598301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.184 qpair failed and we were unable to recover it. 00:28:29.184 [2024-12-11 10:06:38.598454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.184 [2024-12-11 10:06:38.598485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.184 qpair failed and we were unable to recover it. 00:28:29.184 [2024-12-11 10:06:38.598651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.184 [2024-12-11 10:06:38.598683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.184 qpair failed and we were unable to recover it. 00:28:29.184 [2024-12-11 10:06:38.598797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.184 [2024-12-11 10:06:38.598829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.184 qpair failed and we were unable to recover it. 00:28:29.184 [2024-12-11 10:06:38.598999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.184 [2024-12-11 10:06:38.599031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.184 qpair failed and we were unable to recover it. 00:28:29.184 [2024-12-11 10:06:38.599150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.184 [2024-12-11 10:06:38.599181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.184 qpair failed and we were unable to recover it. 00:28:29.184 [2024-12-11 10:06:38.599376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.184 [2024-12-11 10:06:38.599408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.184 qpair failed and we were unable to recover it. 00:28:29.184 [2024-12-11 10:06:38.599512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.184 [2024-12-11 10:06:38.599544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.184 qpair failed and we were unable to recover it. 00:28:29.184 [2024-12-11 10:06:38.599664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.184 [2024-12-11 10:06:38.599695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.184 qpair failed and we were unable to recover it. 00:28:29.184 [2024-12-11 10:06:38.599871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.184 [2024-12-11 10:06:38.599903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.184 qpair failed and we were unable to recover it. 
00:28:29.184 [2024-12-11 10:06:38.600027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.184 [2024-12-11 10:06:38.600058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.184 qpair failed and we were unable to recover it. 00:28:29.184 [2024-12-11 10:06:38.600257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.184 [2024-12-11 10:06:38.600290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.184 qpair failed and we were unable to recover it. 00:28:29.184 [2024-12-11 10:06:38.600400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.184 [2024-12-11 10:06:38.600432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.184 qpair failed and we were unable to recover it. 00:28:29.184 [2024-12-11 10:06:38.600600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.184 [2024-12-11 10:06:38.600630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.184 qpair failed and we were unable to recover it. 00:28:29.184 [2024-12-11 10:06:38.600825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.184 [2024-12-11 10:06:38.600857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.184 qpair failed and we were unable to recover it. 00:28:29.184 [2024-12-11 10:06:38.601053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.184 [2024-12-11 10:06:38.601084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.184 qpair failed and we were unable to recover it. 00:28:29.184 [2024-12-11 10:06:38.601203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.184 [2024-12-11 10:06:38.601259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.184 qpair failed and we were unable to recover it. 00:28:29.184 [2024-12-11 10:06:38.601399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.184 [2024-12-11 10:06:38.601430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.184 qpair failed and we were unable to recover it. 00:28:29.184 [2024-12-11 10:06:38.601603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.184 [2024-12-11 10:06:38.601634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.184 qpair failed and we were unable to recover it. 00:28:29.184 [2024-12-11 10:06:38.601754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.184 [2024-12-11 10:06:38.601786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.184 qpair failed and we were unable to recover it. 
00:28:29.184 [2024-12-11 10:06:38.601963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.184 [2024-12-11 10:06:38.601994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.184 qpair failed and we were unable to recover it. 00:28:29.184 [2024-12-11 10:06:38.602260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.184 [2024-12-11 10:06:38.602292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.184 qpair failed and we were unable to recover it. 00:28:29.184 [2024-12-11 10:06:38.602486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.184 [2024-12-11 10:06:38.602518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.184 qpair failed and we were unable to recover it. 00:28:29.184 [2024-12-11 10:06:38.602645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.184 [2024-12-11 10:06:38.602677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.184 qpair failed and we were unable to recover it. 00:28:29.184 [2024-12-11 10:06:38.602859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.184 [2024-12-11 10:06:38.602890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.184 qpair failed and we were unable to recover it. 00:28:29.184 [2024-12-11 10:06:38.602992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.184 [2024-12-11 10:06:38.603024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.184 qpair failed and we were unable to recover it. 00:28:29.184 [2024-12-11 10:06:38.603158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.184 [2024-12-11 10:06:38.603192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.184 qpair failed and we were unable to recover it. 00:28:29.184 10:06:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.184 [2024-12-11 10:06:38.603390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.184 [2024-12-11 10:06:38.603421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.184 qpair failed and we were unable to recover it. 00:28:29.184 [2024-12-11 10:06:38.603664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.184 10:06:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:29.184 [2024-12-11 10:06:38.603697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.184 qpair failed and we were unable to recover it. 
00:28:29.184 [2024-12-11 10:06:38.603806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.185 [2024-12-11 10:06:38.603837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.185 qpair failed and we were unable to recover it. 00:28:29.185 [2024-12-11 10:06:38.603948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.185 [2024-12-11 10:06:38.603980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.185 qpair failed and we were unable to recover it. 00:28:29.185 10:06:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.185 [2024-12-11 10:06:38.604149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.185 [2024-12-11 10:06:38.604181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.185 qpair failed and we were unable to recover it. 00:28:29.185 10:06:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:29.185 [2024-12-11 10:06:38.604337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.185 [2024-12-11 10:06:38.604371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.185 qpair failed and we were unable to recover it. 00:28:29.185 [2024-12-11 10:06:38.604473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.185 [2024-12-11 10:06:38.604504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.185 qpair failed and we were unable to recover it. 00:28:29.185 [2024-12-11 10:06:38.604639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.185 [2024-12-11 10:06:38.604671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.185 qpair failed and we were unable to recover it. 00:28:29.185 [2024-12-11 10:06:38.604787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.185 [2024-12-11 10:06:38.604818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.185 qpair failed and we were unable to recover it. 00:28:29.185 [2024-12-11 10:06:38.604921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.185 [2024-12-11 10:06:38.604953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.185 qpair failed and we were unable to recover it. 00:28:29.185 [2024-12-11 10:06:38.605061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.185 [2024-12-11 10:06:38.605092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.185 qpair failed and we were unable to recover it. 
00:28:29.185 [2024-12-11 10:06:38.605229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.185 [2024-12-11 10:06:38.605262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.185 qpair failed and we were unable to recover it. 00:28:29.185 [2024-12-11 10:06:38.605438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.185 [2024-12-11 10:06:38.605469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.185 qpair failed and we were unable to recover it. 00:28:29.185 [2024-12-11 10:06:38.605651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.185 [2024-12-11 10:06:38.605682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.185 qpair failed and we were unable to recover it. 00:28:29.185 [2024-12-11 10:06:38.605803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.185 [2024-12-11 10:06:38.605834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.185 qpair failed and we were unable to recover it. 00:28:29.185 [2024-12-11 10:06:38.606094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.185 [2024-12-11 10:06:38.606125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.185 qpair failed and we were unable to recover it. 00:28:29.185 [2024-12-11 10:06:38.606237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.185 [2024-12-11 10:06:38.606270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.185 qpair failed and we were unable to recover it. 00:28:29.185 [2024-12-11 10:06:38.606503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.185 [2024-12-11 10:06:38.606534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe4500 with addr=10.0.0.2, port=4420 00:28:29.185 qpair failed and we were unable to recover it. 
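errno 111 is ECONNREFUSED on Linux: at this point nothing is accepting on 10.0.0.2:4420 yet (the target's listener only comes up further down, at 10:06:38.606804), so every TCP connect() the host issues is refused, which is exactly the triplet posix_sock_create/nvme_tcp_qpair_connect_sock keeps printing above. A minimal shell sketch, separate from the test itself, using the address and port from the log:

  # errno 111 on Linux is ECONNREFUSED ("Connection refused")
  $ python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
  ECONNREFUSED - Connection refused
  # with nothing accepting on the port, a plain TCP connect fails the same way
  $ bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420'
  bash: connect: Connection refused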
00:28:29.185 [2024-12-11 10:06:38.606804] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:29.185 10:06:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:29.185 10:06:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:28:29.185 10:06:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:29.185 10:06:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:29.185 [2024-12-11 10:06:38.612390] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:29.185 [2024-12-11 10:06:38.612502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:29.185 [2024-12-11 10:06:38.612549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:29.185 [2024-12-11 10:06:38.612574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:29.185 [2024-12-11 10:06:38.612597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:29.185 [2024-12-11 10:06:38.612649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:29.185 qpair failed and we were unable to recover it.
00:28:29.185 10:06:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:29.185 10:06:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 250318
00:28:29.185 [2024-12-11 10:06:38.622298] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:29.185 [2024-12-11 10:06:38.622386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:29.185 [2024-12-11 10:06:38.622417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:29.185 [2024-12-11 10:06:38.622434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:29.185 [2024-12-11 10:06:38.622450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:29.185 [2024-12-11 10:06:38.622490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:29.185 qpair failed and we were unable to recover it.
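The trace shows the tc2 test wiring up the target over JSON-RPC: host/target_disconnect.sh@24 adds the Malloc0 namespace, @25 and @26 add the TCP listeners, and tcp.c then reports the target listening on 10.0.0.2 port 4420. For reference, a hedged sketch of the equivalent standalone sequence with SPDK's scripts/rpc.py; the transport- and bdev-creation steps (and the Malloc0 size/block-size arguments) do not appear in this trace and are assumptions:

  # assumed setup steps, not shown in the trace
  scripts/rpc.py nvmf_create_transport -t tcp
  scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
  # the steps the trace does show, issued there via the rpc_cmd wrapper
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420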
00:28:29.185 [2024-12-11 10:06:38.632267] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:29.185 [2024-12-11 10:06:38.632378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:29.185 [2024-12-11 10:06:38.632401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:29.185 [2024-12-11 10:06:38.632413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:29.185 [2024-12-11 10:06:38.632423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:29.185 [2024-12-11 10:06:38.632449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:29.185 qpair failed and we were unable to recover it.
[the same seven-line block, from "Unknown controller ID 0x1" through "qpair failed and we were unable to recover it.", repeats roughly every 10 ms with only the timestamps changing, 10:06:38.642252 through 10:06:39.043463 (Jenkins timeline 00:28:29.185 through 00:28:29.709)]
00:28:29.709 [2024-12-11 10:06:39.053392] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.709 [2024-12-11 10:06:39.053474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.709 [2024-12-11 10:06:39.053487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.709 [2024-12-11 10:06:39.053494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.709 [2024-12-11 10:06:39.053500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:29.709 [2024-12-11 10:06:39.053514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.709 qpair failed and we were unable to recover it. 00:28:29.709 [2024-12-11 10:06:39.063413] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.709 [2024-12-11 10:06:39.063508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.709 [2024-12-11 10:06:39.063524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.709 [2024-12-11 10:06:39.063531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.709 [2024-12-11 10:06:39.063537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:29.709 [2024-12-11 10:06:39.063551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.709 qpair failed and we were unable to recover it. 00:28:29.709 [2024-12-11 10:06:39.073446] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.709 [2024-12-11 10:06:39.073543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.709 [2024-12-11 10:06:39.073556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.709 [2024-12-11 10:06:39.073563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.709 [2024-12-11 10:06:39.073569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:29.709 [2024-12-11 10:06:39.073582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.709 qpair failed and we were unable to recover it. 
00:28:29.709 [2024-12-11 10:06:39.083480] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.709 [2024-12-11 10:06:39.083534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.709 [2024-12-11 10:06:39.083547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.709 [2024-12-11 10:06:39.083553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.709 [2024-12-11 10:06:39.083560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:29.709 [2024-12-11 10:06:39.083574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.709 qpair failed and we were unable to recover it. 00:28:29.709 [2024-12-11 10:06:39.093500] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.709 [2024-12-11 10:06:39.093556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.709 [2024-12-11 10:06:39.093569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.709 [2024-12-11 10:06:39.093576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.709 [2024-12-11 10:06:39.093582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:29.709 [2024-12-11 10:06:39.093595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.709 qpair failed and we were unable to recover it. 00:28:29.709 [2024-12-11 10:06:39.103460] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.709 [2024-12-11 10:06:39.103534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.709 [2024-12-11 10:06:39.103548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.709 [2024-12-11 10:06:39.103555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.709 [2024-12-11 10:06:39.103560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:29.709 [2024-12-11 10:06:39.103577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.709 qpair failed and we were unable to recover it. 
00:28:29.709 [2024-12-11 10:06:39.113551] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.709 [2024-12-11 10:06:39.113607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.709 [2024-12-11 10:06:39.113620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.709 [2024-12-11 10:06:39.113627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.709 [2024-12-11 10:06:39.113633] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:29.709 [2024-12-11 10:06:39.113647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.709 qpair failed and we were unable to recover it. 00:28:29.709 [2024-12-11 10:06:39.123590] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.709 [2024-12-11 10:06:39.123647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.709 [2024-12-11 10:06:39.123660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.709 [2024-12-11 10:06:39.123666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.709 [2024-12-11 10:06:39.123673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:29.709 [2024-12-11 10:06:39.123686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.709 qpair failed and we were unable to recover it. 00:28:29.709 [2024-12-11 10:06:39.133634] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.709 [2024-12-11 10:06:39.133696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.709 [2024-12-11 10:06:39.133711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.709 [2024-12-11 10:06:39.133718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.709 [2024-12-11 10:06:39.133724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:29.709 [2024-12-11 10:06:39.133738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.709 qpair failed and we were unable to recover it. 
00:28:29.709 [2024-12-11 10:06:39.143659] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.710 [2024-12-11 10:06:39.143723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.710 [2024-12-11 10:06:39.143736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.710 [2024-12-11 10:06:39.143743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.710 [2024-12-11 10:06:39.143749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:29.710 [2024-12-11 10:06:39.143763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.710 qpair failed and we were unable to recover it. 00:28:29.710 [2024-12-11 10:06:39.153675] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.710 [2024-12-11 10:06:39.153728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.710 [2024-12-11 10:06:39.153741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.710 [2024-12-11 10:06:39.153748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.710 [2024-12-11 10:06:39.153754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:29.710 [2024-12-11 10:06:39.153768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.710 qpair failed and we were unable to recover it. 00:28:29.710 [2024-12-11 10:06:39.163716] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.710 [2024-12-11 10:06:39.163769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.710 [2024-12-11 10:06:39.163782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.710 [2024-12-11 10:06:39.163789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.710 [2024-12-11 10:06:39.163794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:29.710 [2024-12-11 10:06:39.163808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.710 qpair failed and we were unable to recover it. 
00:28:29.710 [2024-12-11 10:06:39.173744] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.710 [2024-12-11 10:06:39.173845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.710 [2024-12-11 10:06:39.173858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.710 [2024-12-11 10:06:39.173865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.710 [2024-12-11 10:06:39.173871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:29.710 [2024-12-11 10:06:39.173885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.710 qpair failed and we were unable to recover it. 00:28:29.710 [2024-12-11 10:06:39.183725] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.710 [2024-12-11 10:06:39.183788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.710 [2024-12-11 10:06:39.183801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.710 [2024-12-11 10:06:39.183808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.710 [2024-12-11 10:06:39.183814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:29.710 [2024-12-11 10:06:39.183829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.710 qpair failed and we were unable to recover it. 00:28:29.710 [2024-12-11 10:06:39.193779] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.710 [2024-12-11 10:06:39.193832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.710 [2024-12-11 10:06:39.193849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.710 [2024-12-11 10:06:39.193857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.710 [2024-12-11 10:06:39.193862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:29.710 [2024-12-11 10:06:39.193877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.710 qpair failed and we were unable to recover it. 
00:28:29.710 [2024-12-11 10:06:39.203859] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.710 [2024-12-11 10:06:39.203913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.710 [2024-12-11 10:06:39.203927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.710 [2024-12-11 10:06:39.203934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.710 [2024-12-11 10:06:39.203939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:29.710 [2024-12-11 10:06:39.203954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.710 qpair failed and we were unable to recover it. 00:28:29.710 [2024-12-11 10:06:39.213876] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.710 [2024-12-11 10:06:39.213932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.710 [2024-12-11 10:06:39.213946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.710 [2024-12-11 10:06:39.213953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.710 [2024-12-11 10:06:39.213959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:29.710 [2024-12-11 10:06:39.213973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.710 qpair failed and we were unable to recover it. 00:28:29.710 [2024-12-11 10:06:39.223781] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.710 [2024-12-11 10:06:39.223834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.710 [2024-12-11 10:06:39.223848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.710 [2024-12-11 10:06:39.223855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.710 [2024-12-11 10:06:39.223862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:29.710 [2024-12-11 10:06:39.223875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.710 qpair failed and we were unable to recover it. 
00:28:29.710 [2024-12-11 10:06:39.233908] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.710 [2024-12-11 10:06:39.233987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.710 [2024-12-11 10:06:39.234000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.710 [2024-12-11 10:06:39.234007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.710 [2024-12-11 10:06:39.234013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:29.710 [2024-12-11 10:06:39.234030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.710 qpair failed and we were unable to recover it. 00:28:29.710 [2024-12-11 10:06:39.243951] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.710 [2024-12-11 10:06:39.244006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.710 [2024-12-11 10:06:39.244020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.710 [2024-12-11 10:06:39.244026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.710 [2024-12-11 10:06:39.244032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:29.710 [2024-12-11 10:06:39.244047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.710 qpair failed and we were unable to recover it. 00:28:29.710 [2024-12-11 10:06:39.253947] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.710 [2024-12-11 10:06:39.254025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.710 [2024-12-11 10:06:39.254039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.710 [2024-12-11 10:06:39.254046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.710 [2024-12-11 10:06:39.254051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:29.710 [2024-12-11 10:06:39.254065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.710 qpair failed and we were unable to recover it. 
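The same six-message sequence recurs at roughly 10 ms intervals because the host keeps re-attempting the I/O queue pair. A minimal host-side sketch of the calls involved, assuming a controller already attached (for example via spdk_nvme_connect()); this is an illustration of the public API, not the script that produced this log:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include "spdk/nvme.h"

/* Sketch: allocate an I/O qpair (which issues the fabrics CONNECT for
 * qpair IDs >= 1) and poll it once. On the failure above, polling the
 * failed qpair returns -ENXIO (-6), matching "CQ transport error -6". */
static int
try_io_qpair(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvme_qpair *qpair;
	int32_t rc;

	qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
	if (qpair == NULL) {
		/* With default options the CONNECT completes during
		 * allocation, so a rejected CONNECT can surface here. */
		return -1;
	}

	rc = spdk_nvme_qpair_process_completions(qpair, 0);
	if (rc < 0) {
		fprintf(stderr, "qpair poll failed: %d (%s)\n",
			(int)rc, strerror((int)-rc));
	}

	spdk_nvme_ctrlr_free_io_qpair(qpair);
	return rc < 0 ? -1 : 0;
}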
00:28:29.710 [2024-12-11 10:06:39.263975] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.710 [2024-12-11 10:06:39.264028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.710 [2024-12-11 10:06:39.264041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.711 [2024-12-11 10:06:39.264048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.711 [2024-12-11 10:06:39.264054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:29.711 [2024-12-11 10:06:39.264067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.711 qpair failed and we were unable to recover it. 00:28:29.711 [2024-12-11 10:06:39.273987] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.711 [2024-12-11 10:06:39.274039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.711 [2024-12-11 10:06:39.274052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.711 [2024-12-11 10:06:39.274059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.711 [2024-12-11 10:06:39.274065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:29.711 [2024-12-11 10:06:39.274078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.711 qpair failed and we were unable to recover it. 00:28:29.971 [2024-12-11 10:06:39.284020] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.971 [2024-12-11 10:06:39.284075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.971 [2024-12-11 10:06:39.284092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.971 [2024-12-11 10:06:39.284100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.971 [2024-12-11 10:06:39.284107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:29.971 [2024-12-11 10:06:39.284122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.971 qpair failed and we were unable to recover it. 
00:28:29.971 [2024-12-11 10:06:39.294079] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.971 [2024-12-11 10:06:39.294136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.971 [2024-12-11 10:06:39.294153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.971 [2024-12-11 10:06:39.294160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.971 [2024-12-11 10:06:39.294166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:29.971 [2024-12-11 10:06:39.294182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.971 qpair failed and we were unable to recover it. 00:28:29.971 [2024-12-11 10:06:39.304077] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.971 [2024-12-11 10:06:39.304129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.971 [2024-12-11 10:06:39.304143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.971 [2024-12-11 10:06:39.304150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.971 [2024-12-11 10:06:39.304156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:29.971 [2024-12-11 10:06:39.304170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.971 qpair failed and we were unable to recover it. 00:28:29.971 [2024-12-11 10:06:39.314097] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.971 [2024-12-11 10:06:39.314152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.971 [2024-12-11 10:06:39.314165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.971 [2024-12-11 10:06:39.314173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.971 [2024-12-11 10:06:39.314178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:29.971 [2024-12-11 10:06:39.314193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.971 qpair failed and we were unable to recover it. 
00:28:29.971 [2024-12-11 10:06:39.324142] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.971 [2024-12-11 10:06:39.324198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.971 [2024-12-11 10:06:39.324215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.971 [2024-12-11 10:06:39.324226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.971 [2024-12-11 10:06:39.324232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:29.971 [2024-12-11 10:06:39.324246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.971 qpair failed and we were unable to recover it. 00:28:29.971 [2024-12-11 10:06:39.334110] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.971 [2024-12-11 10:06:39.334168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.971 [2024-12-11 10:06:39.334191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.971 [2024-12-11 10:06:39.334197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.971 [2024-12-11 10:06:39.334204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:29.971 [2024-12-11 10:06:39.334223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.971 qpair failed and we were unable to recover it. 00:28:29.971 [2024-12-11 10:06:39.344189] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.971 [2024-12-11 10:06:39.344249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.971 [2024-12-11 10:06:39.344263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.971 [2024-12-11 10:06:39.344270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.971 [2024-12-11 10:06:39.344275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:29.971 [2024-12-11 10:06:39.344290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.971 qpair failed and we were unable to recover it. 
00:28:29.971 [2024-12-11 10:06:39.354220] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.971 [2024-12-11 10:06:39.354268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.971 [2024-12-11 10:06:39.354281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.972 [2024-12-11 10:06:39.354288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.972 [2024-12-11 10:06:39.354294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:29.972 [2024-12-11 10:06:39.354309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.972 qpair failed and we were unable to recover it. 00:28:29.972 [2024-12-11 10:06:39.364264] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.972 [2024-12-11 10:06:39.364320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.972 [2024-12-11 10:06:39.364333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.972 [2024-12-11 10:06:39.364340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.972 [2024-12-11 10:06:39.364345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:29.972 [2024-12-11 10:06:39.364362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.972 qpair failed and we were unable to recover it. 00:28:29.972 [2024-12-11 10:06:39.374283] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.972 [2024-12-11 10:06:39.374338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.972 [2024-12-11 10:06:39.374351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.972 [2024-12-11 10:06:39.374358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.972 [2024-12-11 10:06:39.374364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:29.972 [2024-12-11 10:06:39.374377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.972 qpair failed and we were unable to recover it. 
00:28:29.972 [2024-12-11 10:06:39.384304] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.972 [2024-12-11 10:06:39.384358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.972 [2024-12-11 10:06:39.384371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.972 [2024-12-11 10:06:39.384377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.972 [2024-12-11 10:06:39.384383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:29.972 [2024-12-11 10:06:39.384397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.972 qpair failed and we were unable to recover it. 00:28:29.972 [2024-12-11 10:06:39.394325] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.972 [2024-12-11 10:06:39.394397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.972 [2024-12-11 10:06:39.394411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.972 [2024-12-11 10:06:39.394418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.972 [2024-12-11 10:06:39.394424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:29.972 [2024-12-11 10:06:39.394438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.972 qpair failed and we were unable to recover it. 00:28:29.972 [2024-12-11 10:06:39.404423] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.972 [2024-12-11 10:06:39.404482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.972 [2024-12-11 10:06:39.404496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.972 [2024-12-11 10:06:39.404503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.972 [2024-12-11 10:06:39.404508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:29.972 [2024-12-11 10:06:39.404522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.972 qpair failed and we were unable to recover it. 
00:28:29.972 [2024-12-11 10:06:39.414388] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.972 [2024-12-11 10:06:39.414453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.972 [2024-12-11 10:06:39.414466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.972 [2024-12-11 10:06:39.414473] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.972 [2024-12-11 10:06:39.414479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:29.972 [2024-12-11 10:06:39.414494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.972 qpair failed and we were unable to recover it. 00:28:29.972 [2024-12-11 10:06:39.424518] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.972 [2024-12-11 10:06:39.424579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.972 [2024-12-11 10:06:39.424592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.972 [2024-12-11 10:06:39.424599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.972 [2024-12-11 10:06:39.424605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:29.972 [2024-12-11 10:06:39.424618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.972 qpair failed and we were unable to recover it. 00:28:29.972 [2024-12-11 10:06:39.434492] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.972 [2024-12-11 10:06:39.434542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.972 [2024-12-11 10:06:39.434556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.972 [2024-12-11 10:06:39.434563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.972 [2024-12-11 10:06:39.434569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:29.972 [2024-12-11 10:06:39.434582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.972 qpair failed and we were unable to recover it. 
00:28:29.972 [2024-12-11 10:06:39.444536] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.972 [2024-12-11 10:06:39.444638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.972 [2024-12-11 10:06:39.444651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.972 [2024-12-11 10:06:39.444658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.972 [2024-12-11 10:06:39.444663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:29.972 [2024-12-11 10:06:39.444677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.972 qpair failed and we were unable to recover it. 00:28:29.972 [2024-12-11 10:06:39.454579] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.972 [2024-12-11 10:06:39.454635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.972 [2024-12-11 10:06:39.454651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.972 [2024-12-11 10:06:39.454658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.972 [2024-12-11 10:06:39.454664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:29.972 [2024-12-11 10:06:39.454679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.972 qpair failed and we were unable to recover it. 00:28:29.972 [2024-12-11 10:06:39.464538] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.972 [2024-12-11 10:06:39.464596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.972 [2024-12-11 10:06:39.464609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.972 [2024-12-11 10:06:39.464616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.972 [2024-12-11 10:06:39.464622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:29.972 [2024-12-11 10:06:39.464635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.973 qpair failed and we were unable to recover it. 
00:28:29.973 [2024-12-11 10:06:39.474564] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.973 [2024-12-11 10:06:39.474615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.973 [2024-12-11 10:06:39.474629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.973 [2024-12-11 10:06:39.474635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.973 [2024-12-11 10:06:39.474641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:29.973 [2024-12-11 10:06:39.474655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.973 qpair failed and we were unable to recover it. 00:28:29.973 [2024-12-11 10:06:39.484589] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.973 [2024-12-11 10:06:39.484645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.973 [2024-12-11 10:06:39.484658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.973 [2024-12-11 10:06:39.484664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.973 [2024-12-11 10:06:39.484671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:29.973 [2024-12-11 10:06:39.484685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.973 qpair failed and we were unable to recover it. 00:28:29.973 [2024-12-11 10:06:39.494569] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.973 [2024-12-11 10:06:39.494629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.973 [2024-12-11 10:06:39.494642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.973 [2024-12-11 10:06:39.494649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.973 [2024-12-11 10:06:39.494655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:29.973 [2024-12-11 10:06:39.494671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.973 qpair failed and we were unable to recover it. 
00:28:29.973 [2024-12-11 10:06:39.504606] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.973 [2024-12-11 10:06:39.504658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.973 [2024-12-11 10:06:39.504672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.973 [2024-12-11 10:06:39.504678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.973 [2024-12-11 10:06:39.504684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:29.973 [2024-12-11 10:06:39.504698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.973 qpair failed and we were unable to recover it. 00:28:29.973 [2024-12-11 10:06:39.514661] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.973 [2024-12-11 10:06:39.514714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.973 [2024-12-11 10:06:39.514727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.973 [2024-12-11 10:06:39.514734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.973 [2024-12-11 10:06:39.514740] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:29.973 [2024-12-11 10:06:39.514753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.973 qpair failed and we were unable to recover it. 00:28:29.973 [2024-12-11 10:06:39.524702] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.973 [2024-12-11 10:06:39.524757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.973 [2024-12-11 10:06:39.524770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.973 [2024-12-11 10:06:39.524777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.973 [2024-12-11 10:06:39.524782] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:29.973 [2024-12-11 10:06:39.524796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.973 qpair failed and we were unable to recover it. 
00:28:29.973 [2024-12-11 10:06:39.534721] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.973 [2024-12-11 10:06:39.534773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.973 [2024-12-11 10:06:39.534786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.973 [2024-12-11 10:06:39.534793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.973 [2024-12-11 10:06:39.534799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:29.973 [2024-12-11 10:06:39.534812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.973 qpair failed and we were unable to recover it. 00:28:30.234 [2024-12-11 10:06:39.544746] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.234 [2024-12-11 10:06:39.544819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.234 [2024-12-11 10:06:39.544836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.234 [2024-12-11 10:06:39.544843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.234 [2024-12-11 10:06:39.544849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.234 [2024-12-11 10:06:39.544864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.234 qpair failed and we were unable to recover it. 00:28:30.234 [2024-12-11 10:06:39.554772] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.234 [2024-12-11 10:06:39.554828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.234 [2024-12-11 10:06:39.554845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.234 [2024-12-11 10:06:39.554852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.234 [2024-12-11 10:06:39.554858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.234 [2024-12-11 10:06:39.554874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.234 qpair failed and we were unable to recover it. 
00:28:30.234 [2024-12-11 10:06:39.564777] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.234 [2024-12-11 10:06:39.564866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.234 [2024-12-11 10:06:39.564879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.234 [2024-12-11 10:06:39.564886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.234 [2024-12-11 10:06:39.564891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.234 [2024-12-11 10:06:39.564906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.234 qpair failed and we were unable to recover it. 00:28:30.234 [2024-12-11 10:06:39.574834] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.234 [2024-12-11 10:06:39.574887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.234 [2024-12-11 10:06:39.574901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.234 [2024-12-11 10:06:39.574907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.234 [2024-12-11 10:06:39.574913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.234 [2024-12-11 10:06:39.574927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.234 qpair failed and we were unable to recover it. 00:28:30.234 [2024-12-11 10:06:39.584873] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.234 [2024-12-11 10:06:39.584926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.235 [2024-12-11 10:06:39.584939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.235 [2024-12-11 10:06:39.584951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.235 [2024-12-11 10:06:39.584957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.235 [2024-12-11 10:06:39.584970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.235 qpair failed and we were unable to recover it. 
00:28:30.235 [2024-12-11 10:06:39.594899] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.235 [2024-12-11 10:06:39.594953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.235 [2024-12-11 10:06:39.594967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.235 [2024-12-11 10:06:39.594973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.235 [2024-12-11 10:06:39.594979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.235 [2024-12-11 10:06:39.594993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.235 qpair failed and we were unable to recover it. 00:28:30.235 [2024-12-11 10:06:39.604849] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.235 [2024-12-11 10:06:39.604906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.235 [2024-12-11 10:06:39.604919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.235 [2024-12-11 10:06:39.604926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.235 [2024-12-11 10:06:39.604931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.235 [2024-12-11 10:06:39.604945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.235 qpair failed and we were unable to recover it. 00:28:30.235 [2024-12-11 10:06:39.614946] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.235 [2024-12-11 10:06:39.615053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.235 [2024-12-11 10:06:39.615067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.235 [2024-12-11 10:06:39.615074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.235 [2024-12-11 10:06:39.615080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.235 [2024-12-11 10:06:39.615094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.235 qpair failed and we were unable to recover it. 
00:28:30.235 [2024-12-11 10:06:39.624972] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.235 [2024-12-11 10:06:39.625028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.235 [2024-12-11 10:06:39.625042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.235 [2024-12-11 10:06:39.625048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.235 [2024-12-11 10:06:39.625054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.235 [2024-12-11 10:06:39.625071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.235 qpair failed and we were unable to recover it. 00:28:30.235 [2024-12-11 10:06:39.634994] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.235 [2024-12-11 10:06:39.635047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.235 [2024-12-11 10:06:39.635060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.235 [2024-12-11 10:06:39.635067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.235 [2024-12-11 10:06:39.635073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.235 [2024-12-11 10:06:39.635086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.235 qpair failed and we were unable to recover it. 00:28:30.235 [2024-12-11 10:06:39.645031] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.235 [2024-12-11 10:06:39.645085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.235 [2024-12-11 10:06:39.645098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.235 [2024-12-11 10:06:39.645105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.235 [2024-12-11 10:06:39.645111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.235 [2024-12-11 10:06:39.645124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.235 qpair failed and we were unable to recover it. 
00:28:30.235 [2024-12-11 10:06:39.655045] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.235 [2024-12-11 10:06:39.655101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.235 [2024-12-11 10:06:39.655114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.235 [2024-12-11 10:06:39.655120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.235 [2024-12-11 10:06:39.655126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.235 [2024-12-11 10:06:39.655140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.235 qpair failed and we were unable to recover it. 00:28:30.235 [2024-12-11 10:06:39.665080] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.235 [2024-12-11 10:06:39.665130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.235 [2024-12-11 10:06:39.665143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.235 [2024-12-11 10:06:39.665149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.235 [2024-12-11 10:06:39.665156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.235 [2024-12-11 10:06:39.665170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.235 qpair failed and we were unable to recover it. 00:28:30.235 [2024-12-11 10:06:39.675111] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.235 [2024-12-11 10:06:39.675207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.235 [2024-12-11 10:06:39.675225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.235 [2024-12-11 10:06:39.675231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.235 [2024-12-11 10:06:39.675237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.235 [2024-12-11 10:06:39.675250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.235 qpair failed and we were unable to recover it. 
00:28:30.235 [2024-12-11 10:06:39.685148] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.235 [2024-12-11 10:06:39.685203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.235 [2024-12-11 10:06:39.685220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.235 [2024-12-11 10:06:39.685227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.235 [2024-12-11 10:06:39.685233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.235 [2024-12-11 10:06:39.685246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.235 qpair failed and we were unable to recover it. 00:28:30.235 [2024-12-11 10:06:39.695166] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.235 [2024-12-11 10:06:39.695225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.235 [2024-12-11 10:06:39.695238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.236 [2024-12-11 10:06:39.695245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.236 [2024-12-11 10:06:39.695251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.236 [2024-12-11 10:06:39.695265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.236 qpair failed and we were unable to recover it. 00:28:30.236 [2024-12-11 10:06:39.705184] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.236 [2024-12-11 10:06:39.705240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.236 [2024-12-11 10:06:39.705253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.236 [2024-12-11 10:06:39.705259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.236 [2024-12-11 10:06:39.705265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.236 [2024-12-11 10:06:39.705278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.236 qpair failed and we were unable to recover it. 
00:28:30.236 [2024-12-11 10:06:39.715272] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.236 [2024-12-11 10:06:39.715374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.236 [2024-12-11 10:06:39.715387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.236 [2024-12-11 10:06:39.715397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.236 [2024-12-11 10:06:39.715403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.236 [2024-12-11 10:06:39.715417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.236 qpair failed and we were unable to recover it. 00:28:30.236 [2024-12-11 10:06:39.725247] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.236 [2024-12-11 10:06:39.725301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.236 [2024-12-11 10:06:39.725313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.236 [2024-12-11 10:06:39.725320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.236 [2024-12-11 10:06:39.725326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.236 [2024-12-11 10:06:39.725340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.236 qpair failed and we were unable to recover it. 00:28:30.236 [2024-12-11 10:06:39.735278] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.236 [2024-12-11 10:06:39.735331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.236 [2024-12-11 10:06:39.735345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.236 [2024-12-11 10:06:39.735351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.236 [2024-12-11 10:06:39.735357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.236 [2024-12-11 10:06:39.735371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.236 qpair failed and we were unable to recover it. 
00:28:30.236 [2024-12-11 10:06:39.745336] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.236 [2024-12-11 10:06:39.745389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.236 [2024-12-11 10:06:39.745402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.236 [2024-12-11 10:06:39.745409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.236 [2024-12-11 10:06:39.745415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.236 [2024-12-11 10:06:39.745428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.236 qpair failed and we were unable to recover it. 00:28:30.236 [2024-12-11 10:06:39.755335] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.236 [2024-12-11 10:06:39.755383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.236 [2024-12-11 10:06:39.755397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.236 [2024-12-11 10:06:39.755404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.236 [2024-12-11 10:06:39.755410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.236 [2024-12-11 10:06:39.755427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.236 qpair failed and we were unable to recover it. 00:28:30.236 [2024-12-11 10:06:39.765311] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.236 [2024-12-11 10:06:39.765366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.236 [2024-12-11 10:06:39.765378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.236 [2024-12-11 10:06:39.765385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.236 [2024-12-11 10:06:39.765391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.236 [2024-12-11 10:06:39.765405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.236 qpair failed and we were unable to recover it. 
00:28:30.236 [2024-12-11 10:06:39.775385] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.236 [2024-12-11 10:06:39.775440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.236 [2024-12-11 10:06:39.775453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.236 [2024-12-11 10:06:39.775460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.236 [2024-12-11 10:06:39.775465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.236 [2024-12-11 10:06:39.775479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.236 qpair failed and we were unable to recover it. 00:28:30.236 [2024-12-11 10:06:39.785422] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.236 [2024-12-11 10:06:39.785475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.236 [2024-12-11 10:06:39.785488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.236 [2024-12-11 10:06:39.785494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.236 [2024-12-11 10:06:39.785500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.236 [2024-12-11 10:06:39.785514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.236 qpair failed and we were unable to recover it. 00:28:30.236 [2024-12-11 10:06:39.795477] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.236 [2024-12-11 10:06:39.795531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.236 [2024-12-11 10:06:39.795544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.236 [2024-12-11 10:06:39.795550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.236 [2024-12-11 10:06:39.795556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.236 [2024-12-11 10:06:39.795570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.236 qpair failed and we were unable to recover it. 
00:28:30.236 [2024-12-11 10:06:39.805522] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.236 [2024-12-11 10:06:39.805605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.236 [2024-12-11 10:06:39.805621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.236 [2024-12-11 10:06:39.805628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.236 [2024-12-11 10:06:39.805634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.236 [2024-12-11 10:06:39.805649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.236 qpair failed and we were unable to recover it. 00:28:30.497 [2024-12-11 10:06:39.815439] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.497 [2024-12-11 10:06:39.815531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.497 [2024-12-11 10:06:39.815548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.497 [2024-12-11 10:06:39.815555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.497 [2024-12-11 10:06:39.815561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.497 [2024-12-11 10:06:39.815576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.497 qpair failed and we were unable to recover it. 00:28:30.497 [2024-12-11 10:06:39.825532] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.497 [2024-12-11 10:06:39.825581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.497 [2024-12-11 10:06:39.825594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.497 [2024-12-11 10:06:39.825601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.497 [2024-12-11 10:06:39.825607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.497 [2024-12-11 10:06:39.825621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.497 qpair failed and we were unable to recover it. 
00:28:30.497 [2024-12-11 10:06:39.835575] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.497 [2024-12-11 10:06:39.835625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.497 [2024-12-11 10:06:39.835638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.497 [2024-12-11 10:06:39.835644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.497 [2024-12-11 10:06:39.835650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.497 [2024-12-11 10:06:39.835664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.497 qpair failed and we were unable to recover it. 00:28:30.497 [2024-12-11 10:06:39.845583] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.497 [2024-12-11 10:06:39.845640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.497 [2024-12-11 10:06:39.845652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.497 [2024-12-11 10:06:39.845662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.497 [2024-12-11 10:06:39.845668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.498 [2024-12-11 10:06:39.845683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.498 qpair failed and we were unable to recover it. 00:28:30.498 [2024-12-11 10:06:39.855619] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.498 [2024-12-11 10:06:39.855674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.498 [2024-12-11 10:06:39.855687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.498 [2024-12-11 10:06:39.855694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.498 [2024-12-11 10:06:39.855699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.498 [2024-12-11 10:06:39.855713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.498 qpair failed and we were unable to recover it. 
00:28:30.498 [2024-12-11 10:06:39.865645] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.498 [2024-12-11 10:06:39.865701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.498 [2024-12-11 10:06:39.865714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.498 [2024-12-11 10:06:39.865721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.498 [2024-12-11 10:06:39.865727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.498 [2024-12-11 10:06:39.865741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.498 qpair failed and we were unable to recover it. 00:28:30.498 [2024-12-11 10:06:39.875716] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.498 [2024-12-11 10:06:39.875779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.498 [2024-12-11 10:06:39.875792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.498 [2024-12-11 10:06:39.875799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.498 [2024-12-11 10:06:39.875805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.498 [2024-12-11 10:06:39.875819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.498 qpair failed and we were unable to recover it. 00:28:30.498 [2024-12-11 10:06:39.885716] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.498 [2024-12-11 10:06:39.885770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.498 [2024-12-11 10:06:39.885783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.498 [2024-12-11 10:06:39.885790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.498 [2024-12-11 10:06:39.885796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.498 [2024-12-11 10:06:39.885813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.498 qpair failed and we were unable to recover it. 
00:28:30.498 [2024-12-11 10:06:39.895711] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.498 [2024-12-11 10:06:39.895765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.498 [2024-12-11 10:06:39.895779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.498 [2024-12-11 10:06:39.895785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.498 [2024-12-11 10:06:39.895791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.498 [2024-12-11 10:06:39.895805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.498 qpair failed and we were unable to recover it. 00:28:30.498 [2024-12-11 10:06:39.905755] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.498 [2024-12-11 10:06:39.905808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.498 [2024-12-11 10:06:39.905821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.498 [2024-12-11 10:06:39.905828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.498 [2024-12-11 10:06:39.905834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.498 [2024-12-11 10:06:39.905847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.498 qpair failed and we were unable to recover it. 00:28:30.498 [2024-12-11 10:06:39.915797] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.498 [2024-12-11 10:06:39.915865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.498 [2024-12-11 10:06:39.915878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.498 [2024-12-11 10:06:39.915885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.498 [2024-12-11 10:06:39.915891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.498 [2024-12-11 10:06:39.915905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.498 qpair failed and we were unable to recover it. 
00:28:30.498 [2024-12-11 10:06:39.925833] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.498 [2024-12-11 10:06:39.925889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.498 [2024-12-11 10:06:39.925902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.498 [2024-12-11 10:06:39.925909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.498 [2024-12-11 10:06:39.925915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.498 [2024-12-11 10:06:39.925930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.498 qpair failed and we were unable to recover it. 00:28:30.498 [2024-12-11 10:06:39.935833] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.498 [2024-12-11 10:06:39.935895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.498 [2024-12-11 10:06:39.935908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.498 [2024-12-11 10:06:39.935915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.498 [2024-12-11 10:06:39.935921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.498 [2024-12-11 10:06:39.935934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.498 qpair failed and we were unable to recover it. 00:28:30.498 [2024-12-11 10:06:39.945860] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.498 [2024-12-11 10:06:39.945914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.498 [2024-12-11 10:06:39.945926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.498 [2024-12-11 10:06:39.945933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.498 [2024-12-11 10:06:39.945939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.498 [2024-12-11 10:06:39.945951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.498 qpair failed and we were unable to recover it. 
00:28:30.498 [2024-12-11 10:06:39.955884] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.498 [2024-12-11 10:06:39.955940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.498 [2024-12-11 10:06:39.955954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.498 [2024-12-11 10:06:39.955962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.498 [2024-12-11 10:06:39.955967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.498 [2024-12-11 10:06:39.955982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.498 qpair failed and we were unable to recover it. 00:28:30.498 [2024-12-11 10:06:39.965971] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.498 [2024-12-11 10:06:39.966064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.498 [2024-12-11 10:06:39.966077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.498 [2024-12-11 10:06:39.966084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.498 [2024-12-11 10:06:39.966090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.498 [2024-12-11 10:06:39.966103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.498 qpair failed and we were unable to recover it. 00:28:30.498 [2024-12-11 10:06:39.975888] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.499 [2024-12-11 10:06:39.975947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.499 [2024-12-11 10:06:39.975960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.499 [2024-12-11 10:06:39.975970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.499 [2024-12-11 10:06:39.975976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.499 [2024-12-11 10:06:39.975989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.499 qpair failed and we were unable to recover it. 
00:28:30.499 [2024-12-11 10:06:39.985904] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.499 [2024-12-11 10:06:39.985965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.499 [2024-12-11 10:06:39.985977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.499 [2024-12-11 10:06:39.985984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.499 [2024-12-11 10:06:39.985990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.499 [2024-12-11 10:06:39.986003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.499 qpair failed and we were unable to recover it. 00:28:30.499 [2024-12-11 10:06:39.996018] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.499 [2024-12-11 10:06:39.996073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.499 [2024-12-11 10:06:39.996087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.499 [2024-12-11 10:06:39.996094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.499 [2024-12-11 10:06:39.996100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.499 [2024-12-11 10:06:39.996113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.499 qpair failed and we were unable to recover it. 00:28:30.499 [2024-12-11 10:06:40.006124] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.499 [2024-12-11 10:06:40.006210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.499 [2024-12-11 10:06:40.006228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.499 [2024-12-11 10:06:40.006235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.499 [2024-12-11 10:06:40.006241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.499 [2024-12-11 10:06:40.006255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.499 qpair failed and we were unable to recover it. 
00:28:30.499 [2024-12-11 10:06:40.016100] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.499 [2024-12-11 10:06:40.016161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.499 [2024-12-11 10:06:40.016176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.499 [2024-12-11 10:06:40.016183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.499 [2024-12-11 10:06:40.016190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.499 [2024-12-11 10:06:40.016208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.499 qpair failed and we were unable to recover it. 00:28:30.499 [2024-12-11 10:06:40.026117] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.499 [2024-12-11 10:06:40.026174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.499 [2024-12-11 10:06:40.026188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.499 [2024-12-11 10:06:40.026196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.499 [2024-12-11 10:06:40.026202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.499 [2024-12-11 10:06:40.026221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.499 qpair failed and we were unable to recover it. 00:28:30.499 [2024-12-11 10:06:40.036163] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.499 [2024-12-11 10:06:40.036230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.499 [2024-12-11 10:06:40.036244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.499 [2024-12-11 10:06:40.036251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.499 [2024-12-11 10:06:40.036257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.499 [2024-12-11 10:06:40.036271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.499 qpair failed and we were unable to recover it. 
00:28:30.499 [2024-12-11 10:06:40.046172] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.499 [2024-12-11 10:06:40.046238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.499 [2024-12-11 10:06:40.046254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.499 [2024-12-11 10:06:40.046261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.499 [2024-12-11 10:06:40.046267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.499 [2024-12-11 10:06:40.046283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.499 qpair failed and we were unable to recover it. 00:28:30.499 [2024-12-11 10:06:40.056214] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.499 [2024-12-11 10:06:40.056281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.499 [2024-12-11 10:06:40.056294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.499 [2024-12-11 10:06:40.056301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.499 [2024-12-11 10:06:40.056307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.499 [2024-12-11 10:06:40.056321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.499 qpair failed and we were unable to recover it. 00:28:30.499 [2024-12-11 10:06:40.066243] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.499 [2024-12-11 10:06:40.066301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.499 [2024-12-11 10:06:40.066318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.499 [2024-12-11 10:06:40.066325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.499 [2024-12-11 10:06:40.066331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.499 [2024-12-11 10:06:40.066346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.499 qpair failed and we were unable to recover it. 
00:28:30.760 [2024-12-11 10:06:40.076267] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.760 [2024-12-11 10:06:40.076321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.760 [2024-12-11 10:06:40.076338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.760 [2024-12-11 10:06:40.076345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.760 [2024-12-11 10:06:40.076351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.760 [2024-12-11 10:06:40.076367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.760 qpair failed and we were unable to recover it. 00:28:30.760 [2024-12-11 10:06:40.086284] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.760 [2024-12-11 10:06:40.086360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.760 [2024-12-11 10:06:40.086377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.760 [2024-12-11 10:06:40.086385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.760 [2024-12-11 10:06:40.086391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.760 [2024-12-11 10:06:40.086406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.760 qpair failed and we were unable to recover it. 00:28:30.760 [2024-12-11 10:06:40.096321] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.760 [2024-12-11 10:06:40.096381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.760 [2024-12-11 10:06:40.096395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.760 [2024-12-11 10:06:40.096402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.760 [2024-12-11 10:06:40.096408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.760 [2024-12-11 10:06:40.096422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.760 qpair failed and we were unable to recover it. 
00:28:30.760 [2024-12-11 10:06:40.106298] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.760 [2024-12-11 10:06:40.106354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.760 [2024-12-11 10:06:40.106367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.760 [2024-12-11 10:06:40.106378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.760 [2024-12-11 10:06:40.106384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.760 [2024-12-11 10:06:40.106398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.760 qpair failed and we were unable to recover it. 00:28:30.760 [2024-12-11 10:06:40.116387] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.760 [2024-12-11 10:06:40.116459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.760 [2024-12-11 10:06:40.116473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.760 [2024-12-11 10:06:40.116480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.760 [2024-12-11 10:06:40.116486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.760 [2024-12-11 10:06:40.116500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.760 qpair failed and we were unable to recover it. 00:28:30.760 [2024-12-11 10:06:40.126395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.760 [2024-12-11 10:06:40.126494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.760 [2024-12-11 10:06:40.126507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.760 [2024-12-11 10:06:40.126514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.760 [2024-12-11 10:06:40.126520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:30.760 [2024-12-11 10:06:40.126534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.760 qpair failed and we were unable to recover it. 
00:28:30.760 [2024-12-11 10:06:40.136452] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:30.760 [2024-12-11 10:06:40.136509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:30.760 [2024-12-11 10:06:40.136521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:30.760 [2024-12-11 10:06:40.136528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:30.760 [2024-12-11 10:06:40.136534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:30.760 [2024-12-11 10:06:40.136549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:30.760 qpair failed and we were unable to recover it.
00:28:30.760 [2024-12-11 10:06:40.146402] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:30.760 [2024-12-11 10:06:40.146455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:30.760 [2024-12-11 10:06:40.146468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:30.760 [2024-12-11 10:06:40.146475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:30.760 [2024-12-11 10:06:40.146481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:30.760 [2024-12-11 10:06:40.146497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:30.760 qpair failed and we were unable to recover it.
00:28:30.760 [2024-12-11 10:06:40.156412] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:30.760 [2024-12-11 10:06:40.156462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:30.760 [2024-12-11 10:06:40.156477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:30.760 [2024-12-11 10:06:40.156484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:30.760 [2024-12-11 10:06:40.156490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:30.760 [2024-12-11 10:06:40.156505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:30.760 qpair failed and we were unable to recover it.
00:28:30.760 [2024-12-11 10:06:40.166513] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:30.761 [2024-12-11 10:06:40.166567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:30.761 [2024-12-11 10:06:40.166580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:30.761 [2024-12-11 10:06:40.166587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:30.761 [2024-12-11 10:06:40.166593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:30.761 [2024-12-11 10:06:40.166606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:30.761 qpair failed and we were unable to recover it.
00:28:30.761 [2024-12-11 10:06:40.176609] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:30.761 [2024-12-11 10:06:40.176665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:30.761 [2024-12-11 10:06:40.176678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:30.761 [2024-12-11 10:06:40.176685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:30.761 [2024-12-11 10:06:40.176691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:30.761 [2024-12-11 10:06:40.176705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:30.761 qpair failed and we were unable to recover it.
00:28:30.761 [2024-12-11 10:06:40.186569] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:30.761 [2024-12-11 10:06:40.186623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:30.761 [2024-12-11 10:06:40.186637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:30.761 [2024-12-11 10:06:40.186643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:30.761 [2024-12-11 10:06:40.186649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:30.761 [2024-12-11 10:06:40.186663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:30.761 qpair failed and we were unable to recover it.
00:28:30.761 [2024-12-11 10:06:40.196616] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:30.761 [2024-12-11 10:06:40.196668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:30.761 [2024-12-11 10:06:40.196682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:30.761 [2024-12-11 10:06:40.196688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:30.761 [2024-12-11 10:06:40.196694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:30.761 [2024-12-11 10:06:40.196708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:30.761 qpair failed and we were unable to recover it.
00:28:30.761 [2024-12-11 10:06:40.206660] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:30.761 [2024-12-11 10:06:40.206718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:30.761 [2024-12-11 10:06:40.206732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:30.761 [2024-12-11 10:06:40.206739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:30.761 [2024-12-11 10:06:40.206745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:30.761 [2024-12-11 10:06:40.206759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:30.761 qpair failed and we were unable to recover it.
00:28:30.761 [2024-12-11 10:06:40.216643] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:30.761 [2024-12-11 10:06:40.216695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:30.761 [2024-12-11 10:06:40.216708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:30.761 [2024-12-11 10:06:40.216715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:30.761 [2024-12-11 10:06:40.216721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:30.761 [2024-12-11 10:06:40.216735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:30.761 qpair failed and we were unable to recover it.
00:28:30.761 [2024-12-11 10:06:40.226636] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:30.761 [2024-12-11 10:06:40.226721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:30.761 [2024-12-11 10:06:40.226736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:30.761 [2024-12-11 10:06:40.226743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:30.761 [2024-12-11 10:06:40.226749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:30.761 [2024-12-11 10:06:40.226763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:30.761 qpair failed and we were unable to recover it.
00:28:30.761 [2024-12-11 10:06:40.236732] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:30.761 [2024-12-11 10:06:40.236788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:30.761 [2024-12-11 10:06:40.236800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:30.761 [2024-12-11 10:06:40.236811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:30.761 [2024-12-11 10:06:40.236817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:30.761 [2024-12-11 10:06:40.236831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:30.761 qpair failed and we were unable to recover it.
00:28:30.761 [2024-12-11 10:06:40.246737] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:30.761 [2024-12-11 10:06:40.246791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:30.761 [2024-12-11 10:06:40.246804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:30.761 [2024-12-11 10:06:40.246811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:30.761 [2024-12-11 10:06:40.246817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:30.761 [2024-12-11 10:06:40.246830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:30.761 qpair failed and we were unable to recover it.
00:28:30.761 [2024-12-11 10:06:40.256739] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:30.761 [2024-12-11 10:06:40.256790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:30.761 [2024-12-11 10:06:40.256803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:30.761 [2024-12-11 10:06:40.256810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:30.761 [2024-12-11 10:06:40.256816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:30.761 [2024-12-11 10:06:40.256830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:30.761 qpair failed and we were unable to recover it.
00:28:30.761 [2024-12-11 10:06:40.266761] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:30.761 [2024-12-11 10:06:40.266823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:30.761 [2024-12-11 10:06:40.266836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:30.761 [2024-12-11 10:06:40.266842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:30.761 [2024-12-11 10:06:40.266848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:30.761 [2024-12-11 10:06:40.266862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:30.761 qpair failed and we were unable to recover it.
00:28:30.761 [2024-12-11 10:06:40.276818] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:30.761 [2024-12-11 10:06:40.276873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:30.761 [2024-12-11 10:06:40.276886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:30.761 [2024-12-11 10:06:40.276893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:30.761 [2024-12-11 10:06:40.276898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:30.761 [2024-12-11 10:06:40.276916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:30.761 qpair failed and we were unable to recover it.
00:28:30.761 [2024-12-11 10:06:40.286774] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:30.761 [2024-12-11 10:06:40.286831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:30.761 [2024-12-11 10:06:40.286845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:30.761 [2024-12-11 10:06:40.286852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:30.761 [2024-12-11 10:06:40.286858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:30.761 [2024-12-11 10:06:40.286872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:30.761 qpair failed and we were unable to recover it.
00:28:30.761 [2024-12-11 10:06:40.296862] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:30.761 [2024-12-11 10:06:40.296916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:30.762 [2024-12-11 10:06:40.296929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:30.762 [2024-12-11 10:06:40.296936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:30.762 [2024-12-11 10:06:40.296942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:30.762 [2024-12-11 10:06:40.296956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:30.762 qpair failed and we were unable to recover it.
00:28:30.762 [2024-12-11 10:06:40.306929] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:30.762 [2024-12-11 10:06:40.306996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:30.762 [2024-12-11 10:06:40.307009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:30.762 [2024-12-11 10:06:40.307016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:30.762 [2024-12-11 10:06:40.307023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:30.762 [2024-12-11 10:06:40.307036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:30.762 qpair failed and we were unable to recover it.
00:28:30.762 [2024-12-11 10:06:40.316946] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:30.762 [2024-12-11 10:06:40.317000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:30.762 [2024-12-11 10:06:40.317013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:30.762 [2024-12-11 10:06:40.317019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:30.762 [2024-12-11 10:06:40.317025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:30.762 [2024-12-11 10:06:40.317039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:30.762 qpair failed and we were unable to recover it.
00:28:30.762 [2024-12-11 10:06:40.326947] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:30.762 [2024-12-11 10:06:40.327008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:30.762 [2024-12-11 10:06:40.327020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:30.762 [2024-12-11 10:06:40.327027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:30.762 [2024-12-11 10:06:40.327033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:30.762 [2024-12-11 10:06:40.327046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:30.762 qpair failed and we were unable to recover it.
00:28:31.022 [2024-12-11 10:06:40.336977] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.022 [2024-12-11 10:06:40.337036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.022 [2024-12-11 10:06:40.337054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.022 [2024-12-11 10:06:40.337062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.022 [2024-12-11 10:06:40.337068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.022 [2024-12-11 10:06:40.337084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.022 qpair failed and we were unable to recover it.
00:28:31.022 [2024-12-11 10:06:40.346973] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.022 [2024-12-11 10:06:40.347026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.022 [2024-12-11 10:06:40.347042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.022 [2024-12-11 10:06:40.347050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.022 [2024-12-11 10:06:40.347056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.022 [2024-12-11 10:06:40.347071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.022 qpair failed and we were unable to recover it.
00:28:31.022 [2024-12-11 10:06:40.357074] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.022 [2024-12-11 10:06:40.357135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.022 [2024-12-11 10:06:40.357150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.022 [2024-12-11 10:06:40.357157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.022 [2024-12-11 10:06:40.357163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.022 [2024-12-11 10:06:40.357178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.022 qpair failed and we were unable to recover it.
00:28:31.022 [2024-12-11 10:06:40.367071] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.022 [2024-12-11 10:06:40.367126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.022 [2024-12-11 10:06:40.367139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.023 [2024-12-11 10:06:40.367150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.023 [2024-12-11 10:06:40.367156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.023 [2024-12-11 10:06:40.367170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.023 qpair failed and we were unable to recover it.
00:28:31.023 [2024-12-11 10:06:40.377114] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.023 [2024-12-11 10:06:40.377194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.023 [2024-12-11 10:06:40.377208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.023 [2024-12-11 10:06:40.377215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.023 [2024-12-11 10:06:40.377224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.023 [2024-12-11 10:06:40.377239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.023 qpair failed and we were unable to recover it.
00:28:31.023 [2024-12-11 10:06:40.387105] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.023 [2024-12-11 10:06:40.387156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.023 [2024-12-11 10:06:40.387169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.023 [2024-12-11 10:06:40.387176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.023 [2024-12-11 10:06:40.387182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.023 [2024-12-11 10:06:40.387195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.023 qpair failed and we were unable to recover it.
00:28:31.023 [2024-12-11 10:06:40.397134] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.023 [2024-12-11 10:06:40.397194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.023 [2024-12-11 10:06:40.397208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.023 [2024-12-11 10:06:40.397215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.023 [2024-12-11 10:06:40.397225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.023 [2024-12-11 10:06:40.397239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.023 qpair failed and we were unable to recover it.
00:28:31.023 [2024-12-11 10:06:40.407190] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.023 [2024-12-11 10:06:40.407255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.023 [2024-12-11 10:06:40.407268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.023 [2024-12-11 10:06:40.407275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.023 [2024-12-11 10:06:40.407281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.023 [2024-12-11 10:06:40.407298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.023 qpair failed and we were unable to recover it.
00:28:31.023 [2024-12-11 10:06:40.417197] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.023 [2024-12-11 10:06:40.417259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.023 [2024-12-11 10:06:40.417272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.023 [2024-12-11 10:06:40.417279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.023 [2024-12-11 10:06:40.417285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.023 [2024-12-11 10:06:40.417299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.023 qpair failed and we were unable to recover it.
00:28:31.023 [2024-12-11 10:06:40.427231] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.023 [2024-12-11 10:06:40.427281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.023 [2024-12-11 10:06:40.427294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.023 [2024-12-11 10:06:40.427301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.023 [2024-12-11 10:06:40.427307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.023 [2024-12-11 10:06:40.427320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.023 qpair failed and we were unable to recover it.
00:28:31.023 [2024-12-11 10:06:40.437262] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.023 [2024-12-11 10:06:40.437314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.023 [2024-12-11 10:06:40.437327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.023 [2024-12-11 10:06:40.437334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.023 [2024-12-11 10:06:40.437340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.023 [2024-12-11 10:06:40.437355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.023 qpair failed and we were unable to recover it.
00:28:31.023 [2024-12-11 10:06:40.447279] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.023 [2024-12-11 10:06:40.447333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.023 [2024-12-11 10:06:40.447346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.023 [2024-12-11 10:06:40.447353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.023 [2024-12-11 10:06:40.447359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.023 [2024-12-11 10:06:40.447372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.023 qpair failed and we were unable to recover it.
00:28:31.023 [2024-12-11 10:06:40.457322] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.023 [2024-12-11 10:06:40.457388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.023 [2024-12-11 10:06:40.457400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.023 [2024-12-11 10:06:40.457407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.023 [2024-12-11 10:06:40.457413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.023 [2024-12-11 10:06:40.457427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.023 qpair failed and we were unable to recover it.
00:28:31.023 [2024-12-11 10:06:40.467335] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.023 [2024-12-11 10:06:40.467388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.023 [2024-12-11 10:06:40.467401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.023 [2024-12-11 10:06:40.467408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.023 [2024-12-11 10:06:40.467414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.023 [2024-12-11 10:06:40.467427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.023 qpair failed and we were unable to recover it.
00:28:31.023 [2024-12-11 10:06:40.477346] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.023 [2024-12-11 10:06:40.477402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.023 [2024-12-11 10:06:40.477416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.023 [2024-12-11 10:06:40.477423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.023 [2024-12-11 10:06:40.477429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.023 [2024-12-11 10:06:40.477443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.023 qpair failed and we were unable to recover it.
00:28:31.023 [2024-12-11 10:06:40.487356] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.023 [2024-12-11 10:06:40.487410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.023 [2024-12-11 10:06:40.487423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.023 [2024-12-11 10:06:40.487430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.023 [2024-12-11 10:06:40.487436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.023 [2024-12-11 10:06:40.487449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.023 qpair failed and we were unable to recover it.
00:28:31.023 [2024-12-11 10:06:40.497419] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.023 [2024-12-11 10:06:40.497474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.023 [2024-12-11 10:06:40.497487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.023 [2024-12-11 10:06:40.497500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.023 [2024-12-11 10:06:40.497506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.023 [2024-12-11 10:06:40.497520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.024 qpair failed and we were unable to recover it.
00:28:31.024 [2024-12-11 10:06:40.507431] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.024 [2024-12-11 10:06:40.507487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.024 [2024-12-11 10:06:40.507500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.024 [2024-12-11 10:06:40.507507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.024 [2024-12-11 10:06:40.507513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.024 [2024-12-11 10:06:40.507526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.024 qpair failed and we were unable to recover it.
00:28:31.024 [2024-12-11 10:06:40.517407] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.024 [2024-12-11 10:06:40.517454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.024 [2024-12-11 10:06:40.517467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.024 [2024-12-11 10:06:40.517473] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.024 [2024-12-11 10:06:40.517479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.024 [2024-12-11 10:06:40.517493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.024 qpair failed and we were unable to recover it.
00:28:31.024 [2024-12-11 10:06:40.527528] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.024 [2024-12-11 10:06:40.527627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.024 [2024-12-11 10:06:40.527640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.024 [2024-12-11 10:06:40.527647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.024 [2024-12-11 10:06:40.527653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.024 [2024-12-11 10:06:40.527667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.024 qpair failed and we were unable to recover it.
00:28:31.024 [2024-12-11 10:06:40.537528] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.024 [2024-12-11 10:06:40.537583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.024 [2024-12-11 10:06:40.537597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.024 [2024-12-11 10:06:40.537604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.024 [2024-12-11 10:06:40.537611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.024 [2024-12-11 10:06:40.537628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.024 qpair failed and we were unable to recover it.
00:28:31.024 [2024-12-11 10:06:40.547554] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.024 [2024-12-11 10:06:40.547605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.024 [2024-12-11 10:06:40.547619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.024 [2024-12-11 10:06:40.547626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.024 [2024-12-11 10:06:40.547632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.024 [2024-12-11 10:06:40.547646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.024 qpair failed and we were unable to recover it.
00:28:31.024 [2024-12-11 10:06:40.557610] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.024 [2024-12-11 10:06:40.557710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.024 [2024-12-11 10:06:40.557723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.024 [2024-12-11 10:06:40.557730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.024 [2024-12-11 10:06:40.557736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.024 [2024-12-11 10:06:40.557749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.024 qpair failed and we were unable to recover it.
00:28:31.024 [2024-12-11 10:06:40.567634] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.024 [2024-12-11 10:06:40.567689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.024 [2024-12-11 10:06:40.567702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.024 [2024-12-11 10:06:40.567708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.024 [2024-12-11 10:06:40.567715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.024 [2024-12-11 10:06:40.567728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.024 qpair failed and we were unable to recover it.
00:28:31.024 [2024-12-11 10:06:40.577663] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.024 [2024-12-11 10:06:40.577752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.024 [2024-12-11 10:06:40.577765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.024 [2024-12-11 10:06:40.577771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.024 [2024-12-11 10:06:40.577777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.024 [2024-12-11 10:06:40.577791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.024 qpair failed and we were unable to recover it.
00:28:31.024 [2024-12-11 10:06:40.587722] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.024 [2024-12-11 10:06:40.587815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.024 [2024-12-11 10:06:40.587828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.024 [2024-12-11 10:06:40.587835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.024 [2024-12-11 10:06:40.587841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.024 [2024-12-11 10:06:40.587855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.024 qpair failed and we were unable to recover it.
00:28:31.285 [2024-12-11 10:06:40.597700] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.285 [2024-12-11 10:06:40.597756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.285 [2024-12-11 10:06:40.597773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.285 [2024-12-11 10:06:40.597780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.285 [2024-12-11 10:06:40.597786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.285 [2024-12-11 10:06:40.597801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.285 qpair failed and we were unable to recover it.
00:28:31.285 [2024-12-11 10:06:40.607734] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.285 [2024-12-11 10:06:40.607786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.285 [2024-12-11 10:06:40.607802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.285 [2024-12-11 10:06:40.607809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.285 [2024-12-11 10:06:40.607815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.285 [2024-12-11 10:06:40.607830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.285 qpair failed and we were unable to recover it.
00:28:31.285 [2024-12-11 10:06:40.617772] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.285 [2024-12-11 10:06:40.617821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.285 [2024-12-11 10:06:40.617835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.285 [2024-12-11 10:06:40.617842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.285 [2024-12-11 10:06:40.617848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.285 [2024-12-11 10:06:40.617862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.285 qpair failed and we were unable to recover it.
00:28:31.285 [2024-12-11 10:06:40.627787] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.285 [2024-12-11 10:06:40.627841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.285 [2024-12-11 10:06:40.627855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.285 [2024-12-11 10:06:40.627865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.285 [2024-12-11 10:06:40.627871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.285 [2024-12-11 10:06:40.627885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.285 qpair failed and we were unable to recover it.
00:28:31.285 [2024-12-11 10:06:40.637807] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.285 [2024-12-11 10:06:40.637860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.285 [2024-12-11 10:06:40.637873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.285 [2024-12-11 10:06:40.637881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.285 [2024-12-11 10:06:40.637887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.285 [2024-12-11 10:06:40.637901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.285 qpair failed and we were unable to recover it.
00:28:31.285 [2024-12-11 10:06:40.647853] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.285 [2024-12-11 10:06:40.647908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.285 [2024-12-11 10:06:40.647921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.285 [2024-12-11 10:06:40.647927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.285 [2024-12-11 10:06:40.647933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.285 [2024-12-11 10:06:40.647947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.285 qpair failed and we were unable to recover it.
00:28:31.285 [2024-12-11 10:06:40.657802] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.285 [2024-12-11 10:06:40.657858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.285 [2024-12-11 10:06:40.657871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.285 [2024-12-11 10:06:40.657877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.285 [2024-12-11 10:06:40.657883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.285 [2024-12-11 10:06:40.657897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.285 qpair failed and we were unable to recover it.
00:28:31.285 [2024-12-11 10:06:40.667893] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.285 [2024-12-11 10:06:40.667948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.285 [2024-12-11 10:06:40.667961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.285 [2024-12-11 10:06:40.667968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.285 [2024-12-11 10:06:40.667973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.285 [2024-12-11 10:06:40.667990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.285 qpair failed and we were unable to recover it.
00:28:31.285 [2024-12-11 10:06:40.677921] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.285 [2024-12-11 10:06:40.678009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.285 [2024-12-11 10:06:40.678022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.285 [2024-12-11 10:06:40.678029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.285 [2024-12-11 10:06:40.678035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.285 [2024-12-11 10:06:40.678049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.285 qpair failed and we were unable to recover it.
00:28:31.285 [2024-12-11 10:06:40.687961] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.285 [2024-12-11 10:06:40.688014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.285 [2024-12-11 10:06:40.688028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.285 [2024-12-11 10:06:40.688034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.285 [2024-12-11 10:06:40.688041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.285 [2024-12-11 10:06:40.688055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.285 qpair failed and we were unable to recover it.
00:28:31.285 [2024-12-11 10:06:40.698010] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.285 [2024-12-11 10:06:40.698064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.285 [2024-12-11 10:06:40.698077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.285 [2024-12-11 10:06:40.698084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.285 [2024-12-11 10:06:40.698090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.285 [2024-12-11 10:06:40.698103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.285 qpair failed and we were unable to recover it.
00:28:31.285 [2024-12-11 10:06:40.707998] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.285 [2024-12-11 10:06:40.708078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.285 [2024-12-11 10:06:40.708091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.285 [2024-12-11 10:06:40.708098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.285 [2024-12-11 10:06:40.708103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.285 [2024-12-11 10:06:40.708117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.285 qpair failed and we were unable to recover it.
00:28:31.285 [2024-12-11 10:06:40.718022] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.285 [2024-12-11 10:06:40.718075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.285 [2024-12-11 10:06:40.718089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.286 [2024-12-11 10:06:40.718096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.286 [2024-12-11 10:06:40.718101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.286 [2024-12-11 10:06:40.718115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.286 qpair failed and we were unable to recover it.
00:28:31.286 [2024-12-11 10:06:40.728069] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.286 [2024-12-11 10:06:40.728129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.286 [2024-12-11 10:06:40.728142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.286 [2024-12-11 10:06:40.728150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.286 [2024-12-11 10:06:40.728155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.286 [2024-12-11 10:06:40.728169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.286 qpair failed and we were unable to recover it.
00:28:31.286 [2024-12-11 10:06:40.738091] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.286 [2024-12-11 10:06:40.738149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.286 [2024-12-11 10:06:40.738163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.286 [2024-12-11 10:06:40.738169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.286 [2024-12-11 10:06:40.738175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.286 [2024-12-11 10:06:40.738189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.286 qpair failed and we were unable to recover it.
00:28:31.286 [2024-12-11 10:06:40.748139] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.286 [2024-12-11 10:06:40.748235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.286 [2024-12-11 10:06:40.748248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.286 [2024-12-11 10:06:40.748255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.286 [2024-12-11 10:06:40.748260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.286 [2024-12-11 10:06:40.748274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.286 qpair failed and we were unable to recover it.
00:28:31.286 [2024-12-11 10:06:40.758142] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.286 [2024-12-11 10:06:40.758195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.286 [2024-12-11 10:06:40.758208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.286 [2024-12-11 10:06:40.758221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.286 [2024-12-11 10:06:40.758228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.286 [2024-12-11 10:06:40.758242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.286 qpair failed and we were unable to recover it.
00:28:31.286 [2024-12-11 10:06:40.768179] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.286 [2024-12-11 10:06:40.768237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.286 [2024-12-11 10:06:40.768250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.286 [2024-12-11 10:06:40.768257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.286 [2024-12-11 10:06:40.768262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.286 [2024-12-11 10:06:40.768276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.286 qpair failed and we were unable to recover it.
00:28:31.286 [2024-12-11 10:06:40.778134] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.286 [2024-12-11 10:06:40.778240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.286 [2024-12-11 10:06:40.778253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.286 [2024-12-11 10:06:40.778260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.286 [2024-12-11 10:06:40.778266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.286 [2024-12-11 10:06:40.778279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.286 qpair failed and we were unable to recover it.
00:28:31.286 [2024-12-11 10:06:40.788284] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.286 [2024-12-11 10:06:40.788338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.286 [2024-12-11 10:06:40.788352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.286 [2024-12-11 10:06:40.788358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.286 [2024-12-11 10:06:40.788364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.286 [2024-12-11 10:06:40.788378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.286 qpair failed and we were unable to recover it.
00:28:31.286 [2024-12-11 10:06:40.798271] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.286 [2024-12-11 10:06:40.798327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.286 [2024-12-11 10:06:40.798341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.286 [2024-12-11 10:06:40.798348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.286 [2024-12-11 10:06:40.798354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.286 [2024-12-11 10:06:40.798371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.286 qpair failed and we were unable to recover it.
00:28:31.286 [2024-12-11 10:06:40.808295] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.286 [2024-12-11 10:06:40.808347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.286 [2024-12-11 10:06:40.808361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.286 [2024-12-11 10:06:40.808367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.286 [2024-12-11 10:06:40.808373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.286 [2024-12-11 10:06:40.808387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.286 qpair failed and we were unable to recover it.
00:28:31.286 [2024-12-11 10:06:40.818388] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.286 [2024-12-11 10:06:40.818455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.286 [2024-12-11 10:06:40.818468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.286 [2024-12-11 10:06:40.818475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.286 [2024-12-11 10:06:40.818481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.286 [2024-12-11 10:06:40.818495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.286 qpair failed and we were unable to recover it.
00:28:31.286 [2024-12-11 10:06:40.828355] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.286 [2024-12-11 10:06:40.828406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.286 [2024-12-11 10:06:40.828419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.286 [2024-12-11 10:06:40.828426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.286 [2024-12-11 10:06:40.828432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.286 [2024-12-11 10:06:40.828445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.286 qpair failed and we were unable to recover it.
00:28:31.286 [2024-12-11 10:06:40.838324] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.286 [2024-12-11 10:06:40.838376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.286 [2024-12-11 10:06:40.838389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.286 [2024-12-11 10:06:40.838396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.286 [2024-12-11 10:06:40.838402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.286 [2024-12-11 10:06:40.838416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.286 qpair failed and we were unable to recover it.
00:28:31.286 [2024-12-11 10:06:40.848434] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.286 [2024-12-11 10:06:40.848492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.286 [2024-12-11 10:06:40.848505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.286 [2024-12-11 10:06:40.848512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.286 [2024-12-11 10:06:40.848518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.287 [2024-12-11 10:06:40.848531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.287 qpair failed and we were unable to recover it.
00:28:31.547 [2024-12-11 10:06:40.858494] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.547 [2024-12-11 10:06:40.858559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.547 [2024-12-11 10:06:40.858575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.547 [2024-12-11 10:06:40.858583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.547 [2024-12-11 10:06:40.858589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.547 [2024-12-11 10:06:40.858605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.547 qpair failed and we were unable to recover it.
00:28:31.547 [2024-12-11 10:06:40.868485] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.547 [2024-12-11 10:06:40.868550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.547 [2024-12-11 10:06:40.868566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.547 [2024-12-11 10:06:40.868573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.547 [2024-12-11 10:06:40.868579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.547 [2024-12-11 10:06:40.868595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.547 qpair failed and we were unable to recover it.
00:28:31.547 [2024-12-11 10:06:40.878520] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.547 [2024-12-11 10:06:40.878576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.547 [2024-12-11 10:06:40.878590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.547 [2024-12-11 10:06:40.878597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.547 [2024-12-11 10:06:40.878603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.547 [2024-12-11 10:06:40.878617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.547 qpair failed and we were unable to recover it.
00:28:31.547 [2024-12-11 10:06:40.888478] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.547 [2024-12-11 10:06:40.888542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.547 [2024-12-11 10:06:40.888555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.547 [2024-12-11 10:06:40.888565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.547 [2024-12-11 10:06:40.888571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.547 [2024-12-11 10:06:40.888585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.547 qpair failed and we were unable to recover it.
00:28:31.547 [2024-12-11 10:06:40.898579] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.547 [2024-12-11 10:06:40.898630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.547 [2024-12-11 10:06:40.898644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.547 [2024-12-11 10:06:40.898650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.547 [2024-12-11 10:06:40.898656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.547 [2024-12-11 10:06:40.898670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.547 qpair failed and we were unable to recover it.
00:28:31.547 [2024-12-11 10:06:40.908604] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.547 [2024-12-11 10:06:40.908655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.547 [2024-12-11 10:06:40.908669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.547 [2024-12-11 10:06:40.908675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.547 [2024-12-11 10:06:40.908681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.547 [2024-12-11 10:06:40.908695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.547 qpair failed and we were unable to recover it.
00:28:31.547 [2024-12-11 10:06:40.918655] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.547 [2024-12-11 10:06:40.918722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.547 [2024-12-11 10:06:40.918735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.547 [2024-12-11 10:06:40.918741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.547 [2024-12-11 10:06:40.918747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.547 [2024-12-11 10:06:40.918760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.547 qpair failed and we were unable to recover it.
00:28:31.547 [2024-12-11 10:06:40.928675] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.547 [2024-12-11 10:06:40.928729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.547 [2024-12-11 10:06:40.928741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.547 [2024-12-11 10:06:40.928748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.547 [2024-12-11 10:06:40.928754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.547 [2024-12-11 10:06:40.928771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.547 qpair failed and we were unable to recover it.
00:28:31.547 [2024-12-11 10:06:40.938706] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.547 [2024-12-11 10:06:40.938805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.547 [2024-12-11 10:06:40.938818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.547 [2024-12-11 10:06:40.938824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.547 [2024-12-11 10:06:40.938830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.548 [2024-12-11 10:06:40.938844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.548 qpair failed and we were unable to recover it.
00:28:31.548 [2024-12-11 10:06:40.948708] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.548 [2024-12-11 10:06:40.948755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.548 [2024-12-11 10:06:40.948768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.548 [2024-12-11 10:06:40.948774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.548 [2024-12-11 10:06:40.948780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.548 [2024-12-11 10:06:40.948794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.548 qpair failed and we were unable to recover it.
00:28:31.548 [2024-12-11 10:06:40.958749] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.548 [2024-12-11 10:06:40.958803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.548 [2024-12-11 10:06:40.958816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.548 [2024-12-11 10:06:40.958823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.548 [2024-12-11 10:06:40.958829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.548 [2024-12-11 10:06:40.958843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.548 qpair failed and we were unable to recover it.
00:28:31.548 [2024-12-11 10:06:40.968769] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.548 [2024-12-11 10:06:40.968839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.548 [2024-12-11 10:06:40.968852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.548 [2024-12-11 10:06:40.968858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.548 [2024-12-11 10:06:40.968864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.548 [2024-12-11 10:06:40.968878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.548 qpair failed and we were unable to recover it.
00:28:31.548 [2024-12-11 10:06:40.978821] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.548 [2024-12-11 10:06:40.978878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.548 [2024-12-11 10:06:40.978891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.548 [2024-12-11 10:06:40.978898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.548 [2024-12-11 10:06:40.978904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.548 [2024-12-11 10:06:40.978917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.548 qpair failed and we were unable to recover it.
00:28:31.548 [2024-12-11 10:06:40.988825] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.548 [2024-12-11 10:06:40.988876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.548 [2024-12-11 10:06:40.988889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.548 [2024-12-11 10:06:40.988896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.548 [2024-12-11 10:06:40.988902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.548 [2024-12-11 10:06:40.988915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.548 qpair failed and we were unable to recover it.
00:28:31.548 [2024-12-11 10:06:40.998787] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.548 [2024-12-11 10:06:40.998839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.548 [2024-12-11 10:06:40.998852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.548 [2024-12-11 10:06:40.998859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.548 [2024-12-11 10:06:40.998865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.548 [2024-12-11 10:06:40.998878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.548 qpair failed and we were unable to recover it.
00:28:31.548 [2024-12-11 10:06:41.008903] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.548 [2024-12-11 10:06:41.008960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.548 [2024-12-11 10:06:41.008973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.548 [2024-12-11 10:06:41.008979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.548 [2024-12-11 10:06:41.008985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.548 [2024-12-11 10:06:41.008999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.548 qpair failed and we were unable to recover it.
00:28:31.548 [2024-12-11 10:06:41.018921] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.548 [2024-12-11 10:06:41.018978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.548 [2024-12-11 10:06:41.018991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.548 [2024-12-11 10:06:41.019001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.548 [2024-12-11 10:06:41.019007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.548 [2024-12-11 10:06:41.019022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.548 qpair failed and we were unable to recover it.
00:28:31.548 [2024-12-11 10:06:41.028945] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.548 [2024-12-11 10:06:41.028998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.548 [2024-12-11 10:06:41.029011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.548 [2024-12-11 10:06:41.029018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.548 [2024-12-11 10:06:41.029024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.548 [2024-12-11 10:06:41.029038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.548 qpair failed and we were unable to recover it.
00:28:31.548 [2024-12-11 10:06:41.039008] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.548 [2024-12-11 10:06:41.039060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.548 [2024-12-11 10:06:41.039073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.548 [2024-12-11 10:06:41.039080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.548 [2024-12-11 10:06:41.039086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.548 [2024-12-11 10:06:41.039099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.548 qpair failed and we were unable to recover it.
00:28:31.548 [2024-12-11 10:06:41.049011] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.548 [2024-12-11 10:06:41.049064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.548 [2024-12-11 10:06:41.049077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.548 [2024-12-11 10:06:41.049084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.548 [2024-12-11 10:06:41.049090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.548 [2024-12-11 10:06:41.049104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.548 qpair failed and we were unable to recover it.
00:28:31.548 [2024-12-11 10:06:41.059081] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.548 [2024-12-11 10:06:41.059138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.548 [2024-12-11 10:06:41.059152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.548 [2024-12-11 10:06:41.059158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.548 [2024-12-11 10:06:41.059164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.548 [2024-12-11 10:06:41.059181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.548 qpair failed and we were unable to recover it.
00:28:31.548 [2024-12-11 10:06:41.069083] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.548 [2024-12-11 10:06:41.069148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.548 [2024-12-11 10:06:41.069161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.548 [2024-12-11 10:06:41.069168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.548 [2024-12-11 10:06:41.069174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.548 [2024-12-11 10:06:41.069188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.548 qpair failed and we were unable to recover it.
00:28:31.548 [2024-12-11 10:06:41.079139] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.548 [2024-12-11 10:06:41.079196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.549 [2024-12-11 10:06:41.079209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.549 [2024-12-11 10:06:41.079219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.549 [2024-12-11 10:06:41.079226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.549 [2024-12-11 10:06:41.079240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.549 qpair failed and we were unable to recover it.
00:28:31.549 [2024-12-11 10:06:41.089134] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.549 [2024-12-11 10:06:41.089189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.549 [2024-12-11 10:06:41.089202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.549 [2024-12-11 10:06:41.089208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.549 [2024-12-11 10:06:41.089214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.549 [2024-12-11 10:06:41.089232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.549 qpair failed and we were unable to recover it.
00:28:31.549 [2024-12-11 10:06:41.099149] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.549 [2024-12-11 10:06:41.099205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.549 [2024-12-11 10:06:41.099220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.549 [2024-12-11 10:06:41.099227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.549 [2024-12-11 10:06:41.099233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.549 [2024-12-11 10:06:41.099247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.549 qpair failed and we were unable to recover it.
00:28:31.549 [2024-12-11 10:06:41.109108] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.549 [2024-12-11 10:06:41.109175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.549 [2024-12-11 10:06:41.109189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.549 [2024-12-11 10:06:41.109196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.549 [2024-12-11 10:06:41.109201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.549 [2024-12-11 10:06:41.109215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.549 qpair failed and we were unable to recover it.
00:28:31.549 [2024-12-11 10:06:41.119252] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.549 [2024-12-11 10:06:41.119310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.549 [2024-12-11 10:06:41.119328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.549 [2024-12-11 10:06:41.119336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.549 [2024-12-11 10:06:41.119341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.549 [2024-12-11 10:06:41.119358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.549 qpair failed and we were unable to recover it.
00:28:31.809 [2024-12-11 10:06:41.129266] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.809 [2024-12-11 10:06:41.129320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.809 [2024-12-11 10:06:41.129337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.809 [2024-12-11 10:06:41.129344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.809 [2024-12-11 10:06:41.129350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.809 [2024-12-11 10:06:41.129366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.809 qpair failed and we were unable to recover it.
00:28:31.809 [2024-12-11 10:06:41.139284] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.809 [2024-12-11 10:06:41.139340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.809 [2024-12-11 10:06:41.139353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.809 [2024-12-11 10:06:41.139360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.809 [2024-12-11 10:06:41.139366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.809 [2024-12-11 10:06:41.139381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.809 qpair failed and we were unable to recover it.
00:28:31.809 [2024-12-11 10:06:41.149319] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.809 [2024-12-11 10:06:41.149372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.809 [2024-12-11 10:06:41.149385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.809 [2024-12-11 10:06:41.149395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.809 [2024-12-11 10:06:41.149401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.809 [2024-12-11 10:06:41.149415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.810 qpair failed and we were unable to recover it.
00:28:31.810 [2024-12-11 10:06:41.159351] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.810 [2024-12-11 10:06:41.159400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.810 [2024-12-11 10:06:41.159414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.810 [2024-12-11 10:06:41.159421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.810 [2024-12-11 10:06:41.159427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.810 [2024-12-11 10:06:41.159441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.810 qpair failed and we were unable to recover it.
00:28:31.810 [2024-12-11 10:06:41.169401] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.810 [2024-12-11 10:06:41.169475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.810 [2024-12-11 10:06:41.169488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.810 [2024-12-11 10:06:41.169495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.810 [2024-12-11 10:06:41.169501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.810 [2024-12-11 10:06:41.169515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.810 qpair failed and we were unable to recover it.
00:28:31.810 [2024-12-11 10:06:41.179423] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.810 [2024-12-11 10:06:41.179486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.810 [2024-12-11 10:06:41.179498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.810 [2024-12-11 10:06:41.179505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.810 [2024-12-11 10:06:41.179511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.810 [2024-12-11 10:06:41.179524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.810 qpair failed and we were unable to recover it.
00:28:31.810 [2024-12-11 10:06:41.189459] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.810 [2024-12-11 10:06:41.189510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.810 [2024-12-11 10:06:41.189523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.810 [2024-12-11 10:06:41.189530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.810 [2024-12-11 10:06:41.189536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.810 [2024-12-11 10:06:41.189554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.810 qpair failed and we were unable to recover it.
00:28:31.810 [2024-12-11 10:06:41.199447] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.810 [2024-12-11 10:06:41.199501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.810 [2024-12-11 10:06:41.199514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.810 [2024-12-11 10:06:41.199521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.810 [2024-12-11 10:06:41.199527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.810 [2024-12-11 10:06:41.199540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.810 qpair failed and we were unable to recover it.
00:28:31.810 [2024-12-11 10:06:41.209500] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.810 [2024-12-11 10:06:41.209556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.810 [2024-12-11 10:06:41.209569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.810 [2024-12-11 10:06:41.209575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.810 [2024-12-11 10:06:41.209581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.810 [2024-12-11 10:06:41.209594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.810 qpair failed and we were unable to recover it.
00:28:31.810 [2024-12-11 10:06:41.219508] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.810 [2024-12-11 10:06:41.219562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.810 [2024-12-11 10:06:41.219575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.810 [2024-12-11 10:06:41.219582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.810 [2024-12-11 10:06:41.219588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.810 [2024-12-11 10:06:41.219601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.810 qpair failed and we were unable to recover it.
00:28:31.810 [2024-12-11 10:06:41.229524] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.810 [2024-12-11 10:06:41.229576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.810 [2024-12-11 10:06:41.229590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.810 [2024-12-11 10:06:41.229596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.810 [2024-12-11 10:06:41.229603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.810 [2024-12-11 10:06:41.229616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.810 qpair failed and we were unable to recover it.
00:28:31.810 [2024-12-11 10:06:41.239577] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.810 [2024-12-11 10:06:41.239648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.810 [2024-12-11 10:06:41.239661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.810 [2024-12-11 10:06:41.239667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.810 [2024-12-11 10:06:41.239673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.810 [2024-12-11 10:06:41.239687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.810 qpair failed and we were unable to recover it.
00:28:31.810 [2024-12-11 10:06:41.249613] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.810 [2024-12-11 10:06:41.249667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.810 [2024-12-11 10:06:41.249680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.810 [2024-12-11 10:06:41.249687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.810 [2024-12-11 10:06:41.249693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.810 [2024-12-11 10:06:41.249706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.810 qpair failed and we were unable to recover it.
00:28:31.810 [2024-12-11 10:06:41.259632] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.810 [2024-12-11 10:06:41.259682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.810 [2024-12-11 10:06:41.259695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.810 [2024-12-11 10:06:41.259702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.810 [2024-12-11 10:06:41.259708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.810 [2024-12-11 10:06:41.259721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.810 qpair failed and we were unable to recover it.
00:28:31.810 [2024-12-11 10:06:41.269659] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.810 [2024-12-11 10:06:41.269714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.810 [2024-12-11 10:06:41.269727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.810 [2024-12-11 10:06:41.269734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.810 [2024-12-11 10:06:41.269739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.810 [2024-12-11 10:06:41.269753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.810 qpair failed and we were unable to recover it.
00:28:31.810 [2024-12-11 10:06:41.279691] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.810 [2024-12-11 10:06:41.279747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.810 [2024-12-11 10:06:41.279760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.810 [2024-12-11 10:06:41.279769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.810 [2024-12-11 10:06:41.279775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.810 [2024-12-11 10:06:41.279789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.810 qpair failed and we were unable to recover it.
00:28:31.810 [2024-12-11 10:06:41.289717] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.811 [2024-12-11 10:06:41.289772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.811 [2024-12-11 10:06:41.289785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.811 [2024-12-11 10:06:41.289792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.811 [2024-12-11 10:06:41.289797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.811 [2024-12-11 10:06:41.289811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.811 qpair failed and we were unable to recover it.
00:28:31.811 [2024-12-11 10:06:41.299746] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.811 [2024-12-11 10:06:41.299801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.811 [2024-12-11 10:06:41.299814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.811 [2024-12-11 10:06:41.299820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.811 [2024-12-11 10:06:41.299826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.811 [2024-12-11 10:06:41.299839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.811 qpair failed and we were unable to recover it.
00:28:31.811 [2024-12-11 10:06:41.309781] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.811 [2024-12-11 10:06:41.309832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.811 [2024-12-11 10:06:41.309844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.811 [2024-12-11 10:06:41.309851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.811 [2024-12-11 10:06:41.309857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.811 [2024-12-11 10:06:41.309870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.811 qpair failed and we were unable to recover it.
00:28:31.811 [2024-12-11 10:06:41.319863] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.811 [2024-12-11 10:06:41.319949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.811 [2024-12-11 10:06:41.319962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.811 [2024-12-11 10:06:41.319969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.811 [2024-12-11 10:06:41.319975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.811 [2024-12-11 10:06:41.319992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.811 qpair failed and we were unable to recover it.
00:28:31.811 [2024-12-11 10:06:41.329879] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.811 [2024-12-11 10:06:41.329933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.811 [2024-12-11 10:06:41.329946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.811 [2024-12-11 10:06:41.329953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.811 [2024-12-11 10:06:41.329959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.811 [2024-12-11 10:06:41.329972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.811 qpair failed and we were unable to recover it.
00:28:31.811 [2024-12-11 10:06:41.339856] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.811 [2024-12-11 10:06:41.339911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.811 [2024-12-11 10:06:41.339926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.811 [2024-12-11 10:06:41.339934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.811 [2024-12-11 10:06:41.339940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.811 [2024-12-11 10:06:41.339955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.811 qpair failed and we were unable to recover it.
00:28:31.811 [2024-12-11 10:06:41.349884] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.811 [2024-12-11 10:06:41.349937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.811 [2024-12-11 10:06:41.349950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.811 [2024-12-11 10:06:41.349956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.811 [2024-12-11 10:06:41.349962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.811 [2024-12-11 10:06:41.349976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.811 qpair failed and we were unable to recover it.
00:28:31.811 [2024-12-11 10:06:41.359901] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.811 [2024-12-11 10:06:41.359957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.811 [2024-12-11 10:06:41.359970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.811 [2024-12-11 10:06:41.359980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.811 [2024-12-11 10:06:41.359986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.811 [2024-12-11 10:06:41.360001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.811 qpair failed and we were unable to recover it.
00:28:31.811 [2024-12-11 10:06:41.369916] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.811 [2024-12-11 10:06:41.369976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.811 [2024-12-11 10:06:41.369990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.811 [2024-12-11 10:06:41.369997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.811 [2024-12-11 10:06:41.370003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.811 [2024-12-11 10:06:41.370018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.811 qpair failed and we were unable to recover it.
00:28:31.811 [2024-12-11 10:06:41.379998] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:31.811 [2024-12-11 10:06:41.380054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:31.811 [2024-12-11 10:06:41.380071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:31.811 [2024-12-11 10:06:41.380078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:31.811 [2024-12-11 10:06:41.380085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:31.811 [2024-12-11 10:06:41.380101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:31.811 qpair failed and we were unable to recover it.
00:28:32.071 [2024-12-11 10:06:41.389984] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.071 [2024-12-11 10:06:41.390058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.071 [2024-12-11 10:06:41.390075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.071 [2024-12-11 10:06:41.390083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.071 [2024-12-11 10:06:41.390089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:32.071 [2024-12-11 10:06:41.390104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:32.071 qpair failed and we were unable to recover it.
00:28:32.071 [2024-12-11 10:06:41.400022] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.071 [2024-12-11 10:06:41.400078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.071 [2024-12-11 10:06:41.400093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.071 [2024-12-11 10:06:41.400100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.071 [2024-12-11 10:06:41.400106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:32.071 [2024-12-11 10:06:41.400121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:32.071 qpair failed and we were unable to recover it.
00:28:32.071 [2024-12-11 10:06:41.409974] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.071 [2024-12-11 10:06:41.410027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.071 [2024-12-11 10:06:41.410041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.071 [2024-12-11 10:06:41.410050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.071 [2024-12-11 10:06:41.410056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:32.071 [2024-12-11 10:06:41.410071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:32.071 qpair failed and we were unable to recover it.
00:28:32.071 [2024-12-11 10:06:41.419999] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.071 [2024-12-11 10:06:41.420050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.071 [2024-12-11 10:06:41.420063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.071 [2024-12-11 10:06:41.420070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.071 [2024-12-11 10:06:41.420076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:32.071 [2024-12-11 10:06:41.420090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:32.071 qpair failed and we were unable to recover it.
00:28:32.071 [2024-12-11 10:06:41.430027] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.071 [2024-12-11 10:06:41.430086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.072 [2024-12-11 10:06:41.430100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.072 [2024-12-11 10:06:41.430106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.072 [2024-12-11 10:06:41.430112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:32.072 [2024-12-11 10:06:41.430126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:32.072 qpair failed and we were unable to recover it.
00:28:32.072 [2024-12-11 10:06:41.440047] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.072 [2024-12-11 10:06:41.440128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.072 [2024-12-11 10:06:41.440142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.072 [2024-12-11 10:06:41.440148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.072 [2024-12-11 10:06:41.440154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:32.072 [2024-12-11 10:06:41.440168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:32.072 qpair failed and we were unable to recover it.
00:28:32.072 [2024-12-11 10:06:41.450083] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.072 [2024-12-11 10:06:41.450141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.072 [2024-12-11 10:06:41.450154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.072 [2024-12-11 10:06:41.450161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.072 [2024-12-11 10:06:41.450167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:32.072 [2024-12-11 10:06:41.450187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:32.072 qpair failed and we were unable to recover it.
00:28:32.072 [2024-12-11 10:06:41.460151] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.072 [2024-12-11 10:06:41.460229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.072 [2024-12-11 10:06:41.460243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.072 [2024-12-11 10:06:41.460250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.072 [2024-12-11 10:06:41.460256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:32.072 [2024-12-11 10:06:41.460270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:32.072 qpair failed and we were unable to recover it.
00:28:32.072 [2024-12-11 10:06:41.470131] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.072 [2024-12-11 10:06:41.470198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.072 [2024-12-11 10:06:41.470211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.072 [2024-12-11 10:06:41.470222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.072 [2024-12-11 10:06:41.470228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:32.072 [2024-12-11 10:06:41.470242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:32.072 qpair failed and we were unable to recover it.
00:28:32.072 [2024-12-11 10:06:41.480226] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.072 [2024-12-11 10:06:41.480278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.072 [2024-12-11 10:06:41.480291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.072 [2024-12-11 10:06:41.480298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.072 [2024-12-11 10:06:41.480305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:32.072 [2024-12-11 10:06:41.480319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:32.072 qpair failed and we were unable to recover it.
00:28:32.072 [2024-12-11 10:06:41.490287] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.072 [2024-12-11 10:06:41.490354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.072 [2024-12-11 10:06:41.490366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.072 [2024-12-11 10:06:41.490373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.072 [2024-12-11 10:06:41.490378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:32.072 [2024-12-11 10:06:41.490392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:32.072 qpair failed and we were unable to recover it.
00:28:32.072 [2024-12-11 10:06:41.500247] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.072 [2024-12-11 10:06:41.500301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.072 [2024-12-11 10:06:41.500314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.072 [2024-12-11 10:06:41.500321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.072 [2024-12-11 10:06:41.500327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:32.072 [2024-12-11 10:06:41.500341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:32.072 qpair failed and we were unable to recover it.
00:28:32.072 [2024-12-11 10:06:41.510272] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.072 [2024-12-11 10:06:41.510326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.072 [2024-12-11 10:06:41.510339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.072 [2024-12-11 10:06:41.510345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.072 [2024-12-11 10:06:41.510351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:32.072 [2024-12-11 10:06:41.510365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:32.072 qpair failed and we were unable to recover it.
00:28:32.072 [2024-12-11 10:06:41.520350] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.072 [2024-12-11 10:06:41.520400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.072 [2024-12-11 10:06:41.520413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.072 [2024-12-11 10:06:41.520420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.072 [2024-12-11 10:06:41.520426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:32.072 [2024-12-11 10:06:41.520440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:32.072 qpair failed and we were unable to recover it.
00:28:32.072 [2024-12-11 10:06:41.530402] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.072 [2024-12-11 10:06:41.530455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.072 [2024-12-11 10:06:41.530468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.072 [2024-12-11 10:06:41.530474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.072 [2024-12-11 10:06:41.530480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:32.072 [2024-12-11 10:06:41.530494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:32.072 qpair failed and we were unable to recover it.
00:28:32.072 [2024-12-11 10:06:41.540408] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.072 [2024-12-11 10:06:41.540459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.072 [2024-12-11 10:06:41.540472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.072 [2024-12-11 10:06:41.540482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.072 [2024-12-11 10:06:41.540488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:32.072 [2024-12-11 10:06:41.540501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:32.072 qpair failed and we were unable to recover it.
00:28:32.072 [2024-12-11 10:06:41.550456] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.072 [2024-12-11 10:06:41.550505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.072 [2024-12-11 10:06:41.550518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.072 [2024-12-11 10:06:41.550525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.072 [2024-12-11 10:06:41.550530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:32.072 [2024-12-11 10:06:41.550544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:32.072 qpair failed and we were unable to recover it.
00:28:32.073 [2024-12-11 10:06:41.560480] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.073 [2024-12-11 10:06:41.560559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.073 [2024-12-11 10:06:41.560573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.073 [2024-12-11 10:06:41.560580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.073 [2024-12-11 10:06:41.560586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:32.073 [2024-12-11 10:06:41.560599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:32.073 qpair failed and we were unable to recover it.
00:28:32.073 [2024-12-11 10:06:41.570499] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.073 [2024-12-11 10:06:41.570554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.073 [2024-12-11 10:06:41.570566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.073 [2024-12-11 10:06:41.570573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.073 [2024-12-11 10:06:41.570579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:32.073 [2024-12-11 10:06:41.570593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:32.073 qpair failed and we were unable to recover it.
00:28:32.073 [2024-12-11 10:06:41.580552] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.073 [2024-12-11 10:06:41.580606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.073 [2024-12-11 10:06:41.580619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.073 [2024-12-11 10:06:41.580626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.073 [2024-12-11 10:06:41.580632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:32.073 [2024-12-11 10:06:41.580649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:32.073 qpair failed and we were unable to recover it.
00:28:32.073 [2024-12-11 10:06:41.590608] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.073 [2024-12-11 10:06:41.590663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.073 [2024-12-11 10:06:41.590677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.073 [2024-12-11 10:06:41.590683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.073 [2024-12-11 10:06:41.590689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:32.073 [2024-12-11 10:06:41.590703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:32.073 qpair failed and we were unable to recover it.
00:28:32.073 [2024-12-11 10:06:41.600642] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.073 [2024-12-11 10:06:41.600707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.073 [2024-12-11 10:06:41.600720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.073 [2024-12-11 10:06:41.600727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.073 [2024-12-11 10:06:41.600732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:32.073 [2024-12-11 10:06:41.600746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:32.073 qpair failed and we were unable to recover it.
00:28:32.073 [2024-12-11 10:06:41.610647] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.073 [2024-12-11 10:06:41.610700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.073 [2024-12-11 10:06:41.610713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.073 [2024-12-11 10:06:41.610720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.073 [2024-12-11 10:06:41.610725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:32.073 [2024-12-11 10:06:41.610739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:32.073 qpair failed and we were unable to recover it.
00:28:32.073 [2024-12-11 10:06:41.620649] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.073 [2024-12-11 10:06:41.620724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.073 [2024-12-11 10:06:41.620738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.073 [2024-12-11 10:06:41.620744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.073 [2024-12-11 10:06:41.620750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:32.073 [2024-12-11 10:06:41.620763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:32.073 qpair failed and we were unable to recover it.
00:28:32.073 [2024-12-11 10:06:41.630616] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.073 [2024-12-11 10:06:41.630672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.073 [2024-12-11 10:06:41.630686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.073 [2024-12-11 10:06:41.630692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.073 [2024-12-11 10:06:41.630699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:32.073 [2024-12-11 10:06:41.630712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:32.073 qpair failed and we were unable to recover it.
00:28:32.073 [2024-12-11 10:06:41.640741] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.073 [2024-12-11 10:06:41.640793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.073 [2024-12-11 10:06:41.640808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.073 [2024-12-11 10:06:41.640815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.073 [2024-12-11 10:06:41.640821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:32.073 [2024-12-11 10:06:41.640836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:32.073 qpair failed and we were unable to recover it.
00:28:32.333 [2024-12-11 10:06:41.650699] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.333 [2024-12-11 10:06:41.650753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.333 [2024-12-11 10:06:41.650769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.333 [2024-12-11 10:06:41.650777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.333 [2024-12-11 10:06:41.650782] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:32.333 [2024-12-11 10:06:41.650798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:32.333 qpair failed and we were unable to recover it.
00:28:32.333 [2024-12-11 10:06:41.660738] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.333 [2024-12-11 10:06:41.660789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.333 [2024-12-11 10:06:41.660803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.333 [2024-12-11 10:06:41.660810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.333 [2024-12-11 10:06:41.660817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:32.333 [2024-12-11 10:06:41.660831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:32.333 qpair failed and we were unable to recover it.
00:28:32.333 [2024-12-11 10:06:41.670832] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.333 [2024-12-11 10:06:41.670924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.333 [2024-12-11 10:06:41.670938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.333 [2024-12-11 10:06:41.670947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.334 [2024-12-11 10:06:41.670953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:32.334 [2024-12-11 10:06:41.670969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:32.334 qpair failed and we were unable to recover it.
00:28:32.334 [2024-12-11 10:06:41.680832] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.334 [2024-12-11 10:06:41.680882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.334 [2024-12-11 10:06:41.680895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.334 [2024-12-11 10:06:41.680902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.334 [2024-12-11 10:06:41.680908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:32.334 [2024-12-11 10:06:41.680921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:32.334 qpair failed and we were unable to recover it.
00:28:32.334 [2024-12-11 10:06:41.690866] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.334 [2024-12-11 10:06:41.690945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.334 [2024-12-11 10:06:41.690958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.334 [2024-12-11 10:06:41.690964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.334 [2024-12-11 10:06:41.690970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:32.334 [2024-12-11 10:06:41.690984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:32.334 qpair failed and we were unable to recover it.
00:28:32.334 [2024-12-11 10:06:41.700909] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.334 [2024-12-11 10:06:41.700980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.334 [2024-12-11 10:06:41.700993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.334 [2024-12-11 10:06:41.700999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.334 [2024-12-11 10:06:41.701005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:32.334 [2024-12-11 10:06:41.701019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:32.334 qpair failed and we were unable to recover it.
00:28:32.334 [2024-12-11 10:06:41.710942] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.334 [2024-12-11 10:06:41.711008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.334 [2024-12-11 10:06:41.711021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.334 [2024-12-11 10:06:41.711028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.334 [2024-12-11 10:06:41.711034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:32.334 [2024-12-11 10:06:41.711051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:32.334 qpair failed and we were unable to recover it.
00:28:32.334 [2024-12-11 10:06:41.720972] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.334 [2024-12-11 10:06:41.721025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.334 [2024-12-11 10:06:41.721039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.334 [2024-12-11 10:06:41.721046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.334 [2024-12-11 10:06:41.721052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:32.334 [2024-12-11 10:06:41.721065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:32.334 qpair failed and we were unable to recover it.
00:28:32.334 [2024-12-11 10:06:41.730976] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.334 [2024-12-11 10:06:41.731029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.334 [2024-12-11 10:06:41.731042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.334 [2024-12-11 10:06:41.731049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.334 [2024-12-11 10:06:41.731055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:32.334 [2024-12-11 10:06:41.731070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:32.334 qpair failed and we were unable to recover it.
00:28:32.334 [2024-12-11 10:06:41.741008] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.334 [2024-12-11 10:06:41.741063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.334 [2024-12-11 10:06:41.741077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.334 [2024-12-11 10:06:41.741083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.334 [2024-12-11 10:06:41.741089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:32.334 [2024-12-11 10:06:41.741103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:32.334 qpair failed and we were unable to recover it.
00:28:32.334 [2024-12-11 10:06:41.751019] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.334 [2024-12-11 10:06:41.751074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.334 [2024-12-11 10:06:41.751088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.334 [2024-12-11 10:06:41.751095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.334 [2024-12-11 10:06:41.751101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:32.334 [2024-12-11 10:06:41.751114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:32.334 qpair failed and we were unable to recover it.
00:28:32.334 [2024-12-11 10:06:41.761042] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.334 [2024-12-11 10:06:41.761143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.334 [2024-12-11 10:06:41.761158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.334 [2024-12-11 10:06:41.761166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.334 [2024-12-11 10:06:41.761172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:32.334 [2024-12-11 10:06:41.761186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:32.334 qpair failed and we were unable to recover it.
00:28:32.334 [2024-12-11 10:06:41.771092] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.334 [2024-12-11 10:06:41.771149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.334 [2024-12-11 10:06:41.771162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.334 [2024-12-11 10:06:41.771169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.334 [2024-12-11 10:06:41.771175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:32.334 [2024-12-11 10:06:41.771188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:32.334 qpair failed and we were unable to recover it.
00:28:32.334 [2024-12-11 10:06:41.781132] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.334 [2024-12-11 10:06:41.781238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.334 [2024-12-11 10:06:41.781251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.334 [2024-12-11 10:06:41.781258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.334 [2024-12-11 10:06:41.781264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:32.334 [2024-12-11 10:06:41.781277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:32.334 qpair failed and we were unable to recover it.
00:28:32.334 [2024-12-11 10:06:41.791200] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.334 [2024-12-11 10:06:41.791262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.334 [2024-12-11 10:06:41.791275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.334 [2024-12-11 10:06:41.791281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.334 [2024-12-11 10:06:41.791287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:32.334 [2024-12-11 10:06:41.791301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:32.334 qpair failed and we were unable to recover it.
00:28:32.334 [2024-12-11 10:06:41.801181] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.334 [2024-12-11 10:06:41.801255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.334 [2024-12-11 10:06:41.801268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.334 [2024-12-11 10:06:41.801277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.334 [2024-12-11 10:06:41.801283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:32.334 [2024-12-11 10:06:41.801298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:32.335 qpair failed and we were unable to recover it.
00:28:32.335 [2024-12-11 10:06:41.811223] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.335 [2024-12-11 10:06:41.811307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.335 [2024-12-11 10:06:41.811320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.335 [2024-12-11 10:06:41.811326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.335 [2024-12-11 10:06:41.811332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:32.335 [2024-12-11 10:06:41.811346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:32.335 qpair failed and we were unable to recover it.
00:28:32.335 [2024-12-11 10:06:41.821257] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.335 [2024-12-11 10:06:41.821309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.335 [2024-12-11 10:06:41.821322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.335 [2024-12-11 10:06:41.821328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.335 [2024-12-11 10:06:41.821334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:32.335 [2024-12-11 10:06:41.821349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:32.335 qpair failed and we were unable to recover it.
00:28:32.335 [2024-12-11 10:06:41.831257] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.335 [2024-12-11 10:06:41.831310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.335 [2024-12-11 10:06:41.831322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.335 [2024-12-11 10:06:41.831328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.335 [2024-12-11 10:06:41.831334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:32.335 [2024-12-11 10:06:41.831348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:32.335 qpair failed and we were unable to recover it.
00:28:32.335 [2024-12-11 10:06:41.841301] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.335 [2024-12-11 10:06:41.841352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.335 [2024-12-11 10:06:41.841366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.335 [2024-12-11 10:06:41.841373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.335 [2024-12-11 10:06:41.841378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:32.335 [2024-12-11 10:06:41.841395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:32.335 qpair failed and we were unable to recover it.
00:28:32.335 [2024-12-11 10:06:41.851327] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.335 [2024-12-11 10:06:41.851382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.335 [2024-12-11 10:06:41.851395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.335 [2024-12-11 10:06:41.851402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.335 [2024-12-11 10:06:41.851408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:32.335 [2024-12-11 10:06:41.851421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.335 qpair failed and we were unable to recover it. 00:28:32.335 [2024-12-11 10:06:41.861349] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.335 [2024-12-11 10:06:41.861407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.335 [2024-12-11 10:06:41.861420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.335 [2024-12-11 10:06:41.861426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.335 [2024-12-11 10:06:41.861432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:32.335 [2024-12-11 10:06:41.861447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.335 qpair failed and we were unable to recover it. 00:28:32.335 [2024-12-11 10:06:41.871384] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.335 [2024-12-11 10:06:41.871432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.335 [2024-12-11 10:06:41.871445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.335 [2024-12-11 10:06:41.871452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.335 [2024-12-11 10:06:41.871458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:32.335 [2024-12-11 10:06:41.871472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.335 qpair failed and we were unable to recover it. 
00:28:32.335 [2024-12-11 10:06:41.881416] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.335 [2024-12-11 10:06:41.881485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.335 [2024-12-11 10:06:41.881499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.335 [2024-12-11 10:06:41.881505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.335 [2024-12-11 10:06:41.881511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:32.335 [2024-12-11 10:06:41.881524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.335 qpair failed and we were unable to recover it. 00:28:32.335 [2024-12-11 10:06:41.891456] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.335 [2024-12-11 10:06:41.891513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.335 [2024-12-11 10:06:41.891526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.335 [2024-12-11 10:06:41.891532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.335 [2024-12-11 10:06:41.891538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:32.335 [2024-12-11 10:06:41.891552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.335 qpair failed and we were unable to recover it. 00:28:32.335 [2024-12-11 10:06:41.901482] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.335 [2024-12-11 10:06:41.901537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.335 [2024-12-11 10:06:41.901549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.335 [2024-12-11 10:06:41.901556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.335 [2024-12-11 10:06:41.901562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:32.335 [2024-12-11 10:06:41.901575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.335 qpair failed and we were unable to recover it. 
00:28:32.595 [2024-12-11 10:06:41.911502] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.595 [2024-12-11 10:06:41.911558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.595 [2024-12-11 10:06:41.911574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.595 [2024-12-11 10:06:41.911582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.595 [2024-12-11 10:06:41.911587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:32.595 [2024-12-11 10:06:41.911603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.595 qpair failed and we were unable to recover it. 00:28:32.595 [2024-12-11 10:06:41.921520] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.595 [2024-12-11 10:06:41.921579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.595 [2024-12-11 10:06:41.921595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.595 [2024-12-11 10:06:41.921602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.595 [2024-12-11 10:06:41.921608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:32.595 [2024-12-11 10:06:41.921623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.595 qpair failed and we were unable to recover it. 00:28:32.595 [2024-12-11 10:06:41.931564] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.595 [2024-12-11 10:06:41.931619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.595 [2024-12-11 10:06:41.931632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.595 [2024-12-11 10:06:41.931642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.595 [2024-12-11 10:06:41.931647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:32.595 [2024-12-11 10:06:41.931662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.595 qpair failed and we were unable to recover it. 
00:28:32.595 [2024-12-11 10:06:41.941581] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.596 [2024-12-11 10:06:41.941637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.596 [2024-12-11 10:06:41.941651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.596 [2024-12-11 10:06:41.941658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.596 [2024-12-11 10:06:41.941664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:32.596 [2024-12-11 10:06:41.941677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-11 10:06:41.951626] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.596 [2024-12-11 10:06:41.951678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.596 [2024-12-11 10:06:41.951691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.596 [2024-12-11 10:06:41.951697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.596 [2024-12-11 10:06:41.951703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:32.596 [2024-12-11 10:06:41.951716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-11 10:06:41.961652] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.596 [2024-12-11 10:06:41.961704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.596 [2024-12-11 10:06:41.961717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.596 [2024-12-11 10:06:41.961723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.596 [2024-12-11 10:06:41.961729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:32.596 [2024-12-11 10:06:41.961743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.596 qpair failed and we were unable to recover it. 
00:28:32.596 [2024-12-11 10:06:41.971657] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.596 [2024-12-11 10:06:41.971731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.596 [2024-12-11 10:06:41.971745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.596 [2024-12-11 10:06:41.971751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.596 [2024-12-11 10:06:41.971757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:32.596 [2024-12-11 10:06:41.971774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-11 10:06:41.981742] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.596 [2024-12-11 10:06:41.981804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.596 [2024-12-11 10:06:41.981816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.596 [2024-12-11 10:06:41.981823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.596 [2024-12-11 10:06:41.981829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:32.596 [2024-12-11 10:06:41.981843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-11 10:06:41.991756] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.596 [2024-12-11 10:06:41.991809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.596 [2024-12-11 10:06:41.991822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.596 [2024-12-11 10:06:41.991828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.596 [2024-12-11 10:06:41.991834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:32.596 [2024-12-11 10:06:41.991847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.596 qpair failed and we were unable to recover it. 
00:28:32.596 [2024-12-11 10:06:42.001753] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.596 [2024-12-11 10:06:42.001805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.596 [2024-12-11 10:06:42.001818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.596 [2024-12-11 10:06:42.001824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.596 [2024-12-11 10:06:42.001830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:32.596 [2024-12-11 10:06:42.001843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-11 10:06:42.011744] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.596 [2024-12-11 10:06:42.011795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.596 [2024-12-11 10:06:42.011808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.596 [2024-12-11 10:06:42.011815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.596 [2024-12-11 10:06:42.011822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:32.596 [2024-12-11 10:06:42.011835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-11 10:06:42.021820] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.596 [2024-12-11 10:06:42.021873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.596 [2024-12-11 10:06:42.021886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.596 [2024-12-11 10:06:42.021892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.596 [2024-12-11 10:06:42.021898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:32.596 [2024-12-11 10:06:42.021912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.596 qpair failed and we were unable to recover it. 
00:28:32.596 [2024-12-11 10:06:42.031845] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.596 [2024-12-11 10:06:42.031896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.596 [2024-12-11 10:06:42.031909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.596 [2024-12-11 10:06:42.031915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.596 [2024-12-11 10:06:42.031921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:32.596 [2024-12-11 10:06:42.031935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-11 10:06:42.041810] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.596 [2024-12-11 10:06:42.041858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.596 [2024-12-11 10:06:42.041871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.596 [2024-12-11 10:06:42.041877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.596 [2024-12-11 10:06:42.041884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:32.596 [2024-12-11 10:06:42.041897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-11 10:06:42.051938] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.596 [2024-12-11 10:06:42.051991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.596 [2024-12-11 10:06:42.052004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.596 [2024-12-11 10:06:42.052011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.596 [2024-12-11 10:06:42.052017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:32.596 [2024-12-11 10:06:42.052030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.596 qpair failed and we were unable to recover it. 
00:28:32.596 [2024-12-11 10:06:42.061962] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.596 [2024-12-11 10:06:42.062013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.596 [2024-12-11 10:06:42.062027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.596 [2024-12-11 10:06:42.062038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.596 [2024-12-11 10:06:42.062044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:32.596 [2024-12-11 10:06:42.062057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-11 10:06:42.071960] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.596 [2024-12-11 10:06:42.072015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.596 [2024-12-11 10:06:42.072028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.596 [2024-12-11 10:06:42.072035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.597 [2024-12-11 10:06:42.072041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:32.597 [2024-12-11 10:06:42.072055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.597 qpair failed and we were unable to recover it. 00:28:32.597 [2024-12-11 10:06:42.081988] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.597 [2024-12-11 10:06:42.082041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.597 [2024-12-11 10:06:42.082054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.597 [2024-12-11 10:06:42.082061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.597 [2024-12-11 10:06:42.082067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:32.597 [2024-12-11 10:06:42.082081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.597 qpair failed and we were unable to recover it. 
00:28:32.597 [2024-12-11 10:06:42.091946] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.597 [2024-12-11 10:06:42.091999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.597 [2024-12-11 10:06:42.092012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.597 [2024-12-11 10:06:42.092019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.597 [2024-12-11 10:06:42.092025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:32.597 [2024-12-11 10:06:42.092039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.597 qpair failed and we were unable to recover it. 00:28:32.597 [2024-12-11 10:06:42.102047] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.597 [2024-12-11 10:06:42.102102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.597 [2024-12-11 10:06:42.102115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.597 [2024-12-11 10:06:42.102122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.597 [2024-12-11 10:06:42.102127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:32.597 [2024-12-11 10:06:42.102144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.597 qpair failed and we were unable to recover it. 00:28:32.597 [2024-12-11 10:06:42.111996] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.597 [2024-12-11 10:06:42.112085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.597 [2024-12-11 10:06:42.112098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.597 [2024-12-11 10:06:42.112105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.597 [2024-12-11 10:06:42.112111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:32.597 [2024-12-11 10:06:42.112124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.597 qpair failed and we were unable to recover it. 
00:28:32.597 [2024-12-11 10:06:42.122139] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.597 [2024-12-11 10:06:42.122207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.597 [2024-12-11 10:06:42.122224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.597 [2024-12-11 10:06:42.122231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.597 [2024-12-11 10:06:42.122237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:32.597 [2024-12-11 10:06:42.122251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.597 qpair failed and we were unable to recover it. 00:28:32.597 [2024-12-11 10:06:42.132147] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.597 [2024-12-11 10:06:42.132202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.597 [2024-12-11 10:06:42.132214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.597 [2024-12-11 10:06:42.132225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.597 [2024-12-11 10:06:42.132231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:32.597 [2024-12-11 10:06:42.132245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.597 qpair failed and we were unable to recover it. 00:28:32.597 [2024-12-11 10:06:42.142171] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.597 [2024-12-11 10:06:42.142229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.597 [2024-12-11 10:06:42.142244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.597 [2024-12-11 10:06:42.142251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.597 [2024-12-11 10:06:42.142257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:32.597 [2024-12-11 10:06:42.142272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.597 qpair failed and we were unable to recover it. 
00:28:32.597 [2024-12-11 10:06:42.152182] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.597 [2024-12-11 10:06:42.152239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.597 [2024-12-11 10:06:42.152253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.597 [2024-12-11 10:06:42.152260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.597 [2024-12-11 10:06:42.152265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:32.597 [2024-12-11 10:06:42.152279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.597 qpair failed and we were unable to recover it. 00:28:32.597 [2024-12-11 10:06:42.162212] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.597 [2024-12-11 10:06:42.162284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.597 [2024-12-11 10:06:42.162297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.597 [2024-12-11 10:06:42.162305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.597 [2024-12-11 10:06:42.162310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:32.597 [2024-12-11 10:06:42.162325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.597 qpair failed and we were unable to recover it. 00:28:32.857 [2024-12-11 10:06:42.172256] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.857 [2024-12-11 10:06:42.172316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.857 [2024-12-11 10:06:42.172333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.857 [2024-12-11 10:06:42.172340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.857 [2024-12-11 10:06:42.172346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:32.857 [2024-12-11 10:06:42.172361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.857 qpair failed and we were unable to recover it. 
00:28:32.857 [2024-12-11 10:06:42.182302] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.857 [2024-12-11 10:06:42.182402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.857 [2024-12-11 10:06:42.182418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.857 [2024-12-11 10:06:42.182425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.857 [2024-12-11 10:06:42.182432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:32.857 [2024-12-11 10:06:42.182447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.857 qpair failed and we were unable to recover it. 00:28:32.857 [2024-12-11 10:06:42.192344] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.857 [2024-12-11 10:06:42.192409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.857 [2024-12-11 10:06:42.192423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.857 [2024-12-11 10:06:42.192433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.857 [2024-12-11 10:06:42.192439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:32.857 [2024-12-11 10:06:42.192453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.857 qpair failed and we were unable to recover it. 00:28:32.857 [2024-12-11 10:06:42.202345] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.857 [2024-12-11 10:06:42.202413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.857 [2024-12-11 10:06:42.202426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.857 [2024-12-11 10:06:42.202433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.857 [2024-12-11 10:06:42.202439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:32.857 [2024-12-11 10:06:42.202452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.857 qpair failed and we were unable to recover it. 
00:28:32.857 [2024-12-11 10:06:42.212379] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.857 [2024-12-11 10:06:42.212429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.857 [2024-12-11 10:06:42.212442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.857 [2024-12-11 10:06:42.212449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.857 [2024-12-11 10:06:42.212454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:32.857 [2024-12-11 10:06:42.212468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.857 qpair failed and we were unable to recover it. 00:28:32.857 [2024-12-11 10:06:42.222394] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.857 [2024-12-11 10:06:42.222452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.857 [2024-12-11 10:06:42.222464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.858 [2024-12-11 10:06:42.222471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.858 [2024-12-11 10:06:42.222477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:32.858 [2024-12-11 10:06:42.222490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.858 qpair failed and we were unable to recover it. 00:28:32.858 [2024-12-11 10:06:42.232398] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.858 [2024-12-11 10:06:42.232464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.858 [2024-12-11 10:06:42.232477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.858 [2024-12-11 10:06:42.232484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.858 [2024-12-11 10:06:42.232489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:32.858 [2024-12-11 10:06:42.232506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.858 qpair failed and we were unable to recover it. 
00:28:32.858 [2024-12-11 10:06:42.242454] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.858 [2024-12-11 10:06:42.242509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.858 [2024-12-11 10:06:42.242522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.858 [2024-12-11 10:06:42.242529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.858 [2024-12-11 10:06:42.242534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:32.858 [2024-12-11 10:06:42.242549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.858 qpair failed and we were unable to recover it. 00:28:32.858 [2024-12-11 10:06:42.252486] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.858 [2024-12-11 10:06:42.252554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.858 [2024-12-11 10:06:42.252567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.858 [2024-12-11 10:06:42.252574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.858 [2024-12-11 10:06:42.252579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:32.858 [2024-12-11 10:06:42.252592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.858 qpair failed and we were unable to recover it. 00:28:32.858 [2024-12-11 10:06:42.262512] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.858 [2024-12-11 10:06:42.262565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.858 [2024-12-11 10:06:42.262578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.858 [2024-12-11 10:06:42.262585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.858 [2024-12-11 10:06:42.262591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:32.858 [2024-12-11 10:06:42.262604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.858 qpair failed and we were unable to recover it. 
00:28:32.858 [2024-12-11 10:06:42.272541] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.858 [2024-12-11 10:06:42.272596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.858 [2024-12-11 10:06:42.272609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.858 [2024-12-11 10:06:42.272616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.858 [2024-12-11 10:06:42.272622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:32.858 [2024-12-11 10:06:42.272635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.858 qpair failed and we were unable to recover it. 00:28:32.858 [2024-12-11 10:06:42.282564] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.858 [2024-12-11 10:06:42.282620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.858 [2024-12-11 10:06:42.282634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.858 [2024-12-11 10:06:42.282640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.858 [2024-12-11 10:06:42.282646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:32.858 [2024-12-11 10:06:42.282660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.858 qpair failed and we were unable to recover it. 00:28:32.858 [2024-12-11 10:06:42.292591] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.858 [2024-12-11 10:06:42.292642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.858 [2024-12-11 10:06:42.292655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.858 [2024-12-11 10:06:42.292662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.858 [2024-12-11 10:06:42.292667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:32.858 [2024-12-11 10:06:42.292681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.858 qpair failed and we were unable to recover it. 
00:28:32.858 [2024-12-11 10:06:42.302614] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.858 [2024-12-11 10:06:42.302670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.858 [2024-12-11 10:06:42.302682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.858 [2024-12-11 10:06:42.302689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.858 [2024-12-11 10:06:42.302695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:32.858 [2024-12-11 10:06:42.302708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.858 qpair failed and we were unable to recover it. 00:28:32.858 [2024-12-11 10:06:42.312634] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.858 [2024-12-11 10:06:42.312689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.858 [2024-12-11 10:06:42.312702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.858 [2024-12-11 10:06:42.312708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.858 [2024-12-11 10:06:42.312714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:32.858 [2024-12-11 10:06:42.312728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.858 qpair failed and we were unable to recover it. 00:28:32.858 [2024-12-11 10:06:42.322588] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.858 [2024-12-11 10:06:42.322638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.858 [2024-12-11 10:06:42.322651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.858 [2024-12-11 10:06:42.322661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.858 [2024-12-11 10:06:42.322667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:32.858 [2024-12-11 10:06:42.322680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.858 qpair failed and we were unable to recover it. 
00:28:32.858 [2024-12-11 10:06:42.332698] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.858 [2024-12-11 10:06:42.332754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.858 [2024-12-11 10:06:42.332767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.858 [2024-12-11 10:06:42.332773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.858 [2024-12-11 10:06:42.332779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:32.858 [2024-12-11 10:06:42.332792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.858 qpair failed and we were unable to recover it. 00:28:32.858 [2024-12-11 10:06:42.342720] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.858 [2024-12-11 10:06:42.342805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.858 [2024-12-11 10:06:42.342821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.858 [2024-12-11 10:06:42.342828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.858 [2024-12-11 10:06:42.342834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:32.858 [2024-12-11 10:06:42.342849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.859 qpair failed and we were unable to recover it. 00:28:32.859 [2024-12-11 10:06:42.352740] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.859 [2024-12-11 10:06:42.352795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.859 [2024-12-11 10:06:42.352808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.859 [2024-12-11 10:06:42.352815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.859 [2024-12-11 10:06:42.352821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:32.859 [2024-12-11 10:06:42.352836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.859 qpair failed and we were unable to recover it. 
00:28:32.859 [2024-12-11 10:06:42.362828] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.859 [2024-12-11 10:06:42.362880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.859 [2024-12-11 10:06:42.362893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.859 [2024-12-11 10:06:42.362900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.859 [2024-12-11 10:06:42.362906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:32.859 [2024-12-11 10:06:42.362924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.859 qpair failed and we were unable to recover it. 00:28:32.859 [2024-12-11 10:06:42.372809] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.859 [2024-12-11 10:06:42.372866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.859 [2024-12-11 10:06:42.372879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.859 [2024-12-11 10:06:42.372886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.859 [2024-12-11 10:06:42.372892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:32.859 [2024-12-11 10:06:42.372906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.859 qpair failed and we were unable to recover it. 00:28:32.859 [2024-12-11 10:06:42.382845] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.859 [2024-12-11 10:06:42.382903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.859 [2024-12-11 10:06:42.382916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.859 [2024-12-11 10:06:42.382923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.859 [2024-12-11 10:06:42.382929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:32.859 [2024-12-11 10:06:42.382943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.859 qpair failed and we were unable to recover it. 
00:28:32.859 [2024-12-11 10:06:42.392872] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.859 [2024-12-11 10:06:42.392922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.859 [2024-12-11 10:06:42.392935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.859 [2024-12-11 10:06:42.392941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.859 [2024-12-11 10:06:42.392947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:32.859 [2024-12-11 10:06:42.392961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.859 qpair failed and we were unable to recover it. 00:28:32.859 [2024-12-11 10:06:42.402886] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.859 [2024-12-11 10:06:42.402942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.859 [2024-12-11 10:06:42.402956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.859 [2024-12-11 10:06:42.402963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.859 [2024-12-11 10:06:42.402969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:32.859 [2024-12-11 10:06:42.402982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.859 qpair failed and we were unable to recover it. 00:28:32.859 [2024-12-11 10:06:42.412922] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.859 [2024-12-11 10:06:42.412979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.859 [2024-12-11 10:06:42.412992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.859 [2024-12-11 10:06:42.412999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.859 [2024-12-11 10:06:42.413005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:32.859 [2024-12-11 10:06:42.413019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.859 qpair failed and we were unable to recover it. 
00:28:32.859 [2024-12-11 10:06:42.422964] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.859 [2024-12-11 10:06:42.423018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.859 [2024-12-11 10:06:42.423032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.859 [2024-12-11 10:06:42.423038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.859 [2024-12-11 10:06:42.423044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:32.859 [2024-12-11 10:06:42.423059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:32.859 qpair failed and we were unable to recover it.
00:28:33.119 [2024-12-11 10:06:42.433074] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.119 [2024-12-11 10:06:42.433154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.119 [2024-12-11 10:06:42.433171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.119 [2024-12-11 10:06:42.433178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.119 [2024-12-11 10:06:42.433184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:33.119 [2024-12-11 10:06:42.433200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:33.119 qpair failed and we were unable to recover it.
00:28:33.119 [2024-12-11 10:06:42.443047] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.119 [2024-12-11 10:06:42.443102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.119 [2024-12-11 10:06:42.443118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.119 [2024-12-11 10:06:42.443126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.119 [2024-12-11 10:06:42.443132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:33.119 [2024-12-11 10:06:42.443147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:33.119 qpair failed and we were unable to recover it.
00:28:33.119 [2024-12-11 10:06:42.453023] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.119 [2024-12-11 10:06:42.453099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.119 [2024-12-11 10:06:42.453114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.119 [2024-12-11 10:06:42.453126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.119 [2024-12-11 10:06:42.453132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:33.119 [2024-12-11 10:06:42.453148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:33.119 qpair failed and we were unable to recover it.
00:28:33.119 [2024-12-11 10:06:42.463160] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.119 [2024-12-11 10:06:42.463215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.119 [2024-12-11 10:06:42.463235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.119 [2024-12-11 10:06:42.463242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.119 [2024-12-11 10:06:42.463248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:33.120 [2024-12-11 10:06:42.463264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:33.120 qpair failed and we were unable to recover it.
00:28:33.120 [2024-12-11 10:06:42.473083] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.120 [2024-12-11 10:06:42.473134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.120 [2024-12-11 10:06:42.473148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.120 [2024-12-11 10:06:42.473154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.120 [2024-12-11 10:06:42.473160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:33.120 [2024-12-11 10:06:42.473174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:33.120 qpair failed and we were unable to recover it.
00:28:33.120 [2024-12-11 10:06:42.483160] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.120 [2024-12-11 10:06:42.483225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.120 [2024-12-11 10:06:42.483239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.120 [2024-12-11 10:06:42.483246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.120 [2024-12-11 10:06:42.483252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:33.120 [2024-12-11 10:06:42.483267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:33.120 qpair failed and we were unable to recover it.
00:28:33.120 [2024-12-11 10:06:42.493188] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.120 [2024-12-11 10:06:42.493249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.120 [2024-12-11 10:06:42.493262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.120 [2024-12-11 10:06:42.493268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.120 [2024-12-11 10:06:42.493274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:33.120 [2024-12-11 10:06:42.493291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:33.120 qpair failed and we were unable to recover it.
00:28:33.120 [2024-12-11 10:06:42.503190] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.120 [2024-12-11 10:06:42.503245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.120 [2024-12-11 10:06:42.503258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.120 [2024-12-11 10:06:42.503265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.120 [2024-12-11 10:06:42.503271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:33.120 [2024-12-11 10:06:42.503285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:33.120 qpair failed and we were unable to recover it.
00:28:33.120 [2024-12-11 10:06:42.513172] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.120 [2024-12-11 10:06:42.513267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.120 [2024-12-11 10:06:42.513279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.120 [2024-12-11 10:06:42.513286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.120 [2024-12-11 10:06:42.513292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:33.120 [2024-12-11 10:06:42.513306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:33.120 qpair failed and we were unable to recover it.
00:28:33.120 [2024-12-11 10:06:42.523229] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.120 [2024-12-11 10:06:42.523282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.120 [2024-12-11 10:06:42.523295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.120 [2024-12-11 10:06:42.523302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.120 [2024-12-11 10:06:42.523308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:33.120 [2024-12-11 10:06:42.523322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:33.120 qpair failed and we were unable to recover it.
00:28:33.120 [2024-12-11 10:06:42.533285] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.120 [2024-12-11 10:06:42.533340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.120 [2024-12-11 10:06:42.533353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.120 [2024-12-11 10:06:42.533360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.120 [2024-12-11 10:06:42.533366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:33.120 [2024-12-11 10:06:42.533380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:33.120 qpair failed and we were unable to recover it.
00:28:33.120 [2024-12-11 10:06:42.543302] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.120 [2024-12-11 10:06:42.543408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.120 [2024-12-11 10:06:42.543421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.120 [2024-12-11 10:06:42.543429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.120 [2024-12-11 10:06:42.543435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:33.120 [2024-12-11 10:06:42.543449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:33.120 qpair failed and we were unable to recover it.
00:28:33.120 [2024-12-11 10:06:42.553363] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.120 [2024-12-11 10:06:42.553415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.120 [2024-12-11 10:06:42.553428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.120 [2024-12-11 10:06:42.553434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.120 [2024-12-11 10:06:42.553440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:33.120 [2024-12-11 10:06:42.553454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:33.120 qpair failed and we were unable to recover it.
00:28:33.120 [2024-12-11 10:06:42.563347] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.120 [2024-12-11 10:06:42.563411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.120 [2024-12-11 10:06:42.563425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.120 [2024-12-11 10:06:42.563431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.120 [2024-12-11 10:06:42.563437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:33.120 [2024-12-11 10:06:42.563450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:33.120 qpair failed and we were unable to recover it.
00:28:33.120 [2024-12-11 10:06:42.573373] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.120 [2024-12-11 10:06:42.573473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.120 [2024-12-11 10:06:42.573486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.120 [2024-12-11 10:06:42.573493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.120 [2024-12-11 10:06:42.573499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:33.120 [2024-12-11 10:06:42.573513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:33.120 qpair failed and we were unable to recover it.
00:28:33.120 [2024-12-11 10:06:42.583400] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.120 [2024-12-11 10:06:42.583455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.120 [2024-12-11 10:06:42.583468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.120 [2024-12-11 10:06:42.583478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.120 [2024-12-11 10:06:42.583484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:33.120 [2024-12-11 10:06:42.583498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:33.120 qpair failed and we were unable to recover it.
00:28:33.120 [2024-12-11 10:06:42.593473] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.120 [2024-12-11 10:06:42.593524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.120 [2024-12-11 10:06:42.593537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.120 [2024-12-11 10:06:42.593544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.120 [2024-12-11 10:06:42.593550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:33.120 [2024-12-11 10:06:42.593564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:33.120 qpair failed and we were unable to recover it.
00:28:33.120 [2024-12-11 10:06:42.603457] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.120 [2024-12-11 10:06:42.603526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.121 [2024-12-11 10:06:42.603539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.121 [2024-12-11 10:06:42.603546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.121 [2024-12-11 10:06:42.603552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:33.121 [2024-12-11 10:06:42.603565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:33.121 qpair failed and we were unable to recover it.
00:28:33.121 [2024-12-11 10:06:42.613496] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.121 [2024-12-11 10:06:42.613552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.121 [2024-12-11 10:06:42.613565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.121 [2024-12-11 10:06:42.613572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.121 [2024-12-11 10:06:42.613577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:33.121 [2024-12-11 10:06:42.613591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:33.121 qpair failed and we were unable to recover it.
00:28:33.121 [2024-12-11 10:06:42.623523] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.121 [2024-12-11 10:06:42.623579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.121 [2024-12-11 10:06:42.623592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.121 [2024-12-11 10:06:42.623599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.121 [2024-12-11 10:06:42.623604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:33.121 [2024-12-11 10:06:42.623621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:33.121 qpair failed and we were unable to recover it.
00:28:33.121 [2024-12-11 10:06:42.633545] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.121 [2024-12-11 10:06:42.633592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.121 [2024-12-11 10:06:42.633605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.121 [2024-12-11 10:06:42.633612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.121 [2024-12-11 10:06:42.633617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:33.121 [2024-12-11 10:06:42.633631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:33.121 qpair failed and we were unable to recover it.
00:28:33.121 [2024-12-11 10:06:42.643580] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.121 [2024-12-11 10:06:42.643631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.121 [2024-12-11 10:06:42.643644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.121 [2024-12-11 10:06:42.643651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.121 [2024-12-11 10:06:42.643657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:33.121 [2024-12-11 10:06:42.643670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:33.121 qpair failed and we were unable to recover it.
00:28:33.121 [2024-12-11 10:06:42.653691] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.121 [2024-12-11 10:06:42.653771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.121 [2024-12-11 10:06:42.653784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.121 [2024-12-11 10:06:42.653790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.121 [2024-12-11 10:06:42.653796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:33.121 [2024-12-11 10:06:42.653809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:33.121 qpair failed and we were unable to recover it.
00:28:33.121 [2024-12-11 10:06:42.663617] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.121 [2024-12-11 10:06:42.663687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.121 [2024-12-11 10:06:42.663701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.121 [2024-12-11 10:06:42.663707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.121 [2024-12-11 10:06:42.663713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:33.121 [2024-12-11 10:06:42.663727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:33.121 qpair failed and we were unable to recover it.
00:28:33.121 [2024-12-11 10:06:42.673658] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.121 [2024-12-11 10:06:42.673716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.121 [2024-12-11 10:06:42.673729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.121 [2024-12-11 10:06:42.673735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.121 [2024-12-11 10:06:42.673741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:33.121 [2024-12-11 10:06:42.673755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:33.121 qpair failed and we were unable to recover it.
00:28:33.121 [2024-12-11 10:06:42.683681] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.121 [2024-12-11 10:06:42.683766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.121 [2024-12-11 10:06:42.683779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.121 [2024-12-11 10:06:42.683785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.121 [2024-12-11 10:06:42.683791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:33.121 [2024-12-11 10:06:42.683805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:33.121 qpair failed and we were unable to recover it.
00:28:33.381 [2024-12-11 10:06:42.693808] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.381 [2024-12-11 10:06:42.693891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.381 [2024-12-11 10:06:42.693908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.381 [2024-12-11 10:06:42.693915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.381 [2024-12-11 10:06:42.693921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:33.381 [2024-12-11 10:06:42.693936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:33.381 qpair failed and we were unable to recover it.
00:28:33.381 [2024-12-11 10:06:42.703755] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.381 [2024-12-11 10:06:42.703806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.381 [2024-12-11 10:06:42.703821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.381 [2024-12-11 10:06:42.703828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.381 [2024-12-11 10:06:42.703834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:33.381 [2024-12-11 10:06:42.703850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:33.381 qpair failed and we were unable to recover it.
00:28:33.381 [2024-12-11 10:06:42.713820] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.381 [2024-12-11 10:06:42.713873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.381 [2024-12-11 10:06:42.713886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.381 [2024-12-11 10:06:42.713897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.381 [2024-12-11 10:06:42.713902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:33.381 [2024-12-11 10:06:42.713917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:33.381 qpair failed and we were unable to recover it.
00:28:33.381 [2024-12-11 10:06:42.723807] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.381 [2024-12-11 10:06:42.723859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.381 [2024-12-11 10:06:42.723873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.381 [2024-12-11 10:06:42.723879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.381 [2024-12-11 10:06:42.723885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:33.381 [2024-12-11 10:06:42.723899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:33.381 qpair failed and we were unable to recover it.
00:28:33.381 [2024-12-11 10:06:42.733856] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.381 [2024-12-11 10:06:42.733909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.381 [2024-12-11 10:06:42.733923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.381 [2024-12-11 10:06:42.733930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.381 [2024-12-11 10:06:42.733935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:33.381 [2024-12-11 10:06:42.733949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:33.382 qpair failed and we were unable to recover it.
00:28:33.382 [2024-12-11 10:06:42.743917] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.382 [2024-12-11 10:06:42.743981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.382 [2024-12-11 10:06:42.743994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.382 [2024-12-11 10:06:42.744001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.382 [2024-12-11 10:06:42.744007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:33.382 [2024-12-11 10:06:42.744020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:33.382 qpair failed and we were unable to recover it.
00:28:33.382 [2024-12-11 10:06:42.753893] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.382 [2024-12-11 10:06:42.753947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.382 [2024-12-11 10:06:42.753960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.382 [2024-12-11 10:06:42.753967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.382 [2024-12-11 10:06:42.753973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:33.382 [2024-12-11 10:06:42.753991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:33.382 qpair failed and we were unable to recover it.
00:28:33.382 [2024-12-11 10:06:42.763981] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.382 [2024-12-11 10:06:42.764039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.382 [2024-12-11 10:06:42.764052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.382 [2024-12-11 10:06:42.764059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.382 [2024-12-11 10:06:42.764065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:33.382 [2024-12-11 10:06:42.764079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:33.382 qpair failed and we were unable to recover it.
00:28:33.382 [2024-12-11 10:06:42.773952] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.382 [2024-12-11 10:06:42.774007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.382 [2024-12-11 10:06:42.774020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.382 [2024-12-11 10:06:42.774027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.382 [2024-12-11 10:06:42.774033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:33.382 [2024-12-11 10:06:42.774047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:33.382 qpair failed and we were unable to recover it.
00:28:33.382 [2024-12-11 10:06:42.783988] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.382 [2024-12-11 10:06:42.784042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.382 [2024-12-11 10:06:42.784056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.382 [2024-12-11 10:06:42.784063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.382 [2024-12-11 10:06:42.784069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:33.382 [2024-12-11 10:06:42.784083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:33.382 qpair failed and we were unable to recover it.
00:28:33.382 [2024-12-11 10:06:42.794018] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.382 [2024-12-11 10:06:42.794074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.382 [2024-12-11 10:06:42.794088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.382 [2024-12-11 10:06:42.794095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.382 [2024-12-11 10:06:42.794101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:33.382 [2024-12-11 10:06:42.794115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:33.382 qpair failed and we were unable to recover it.
00:28:33.382 [2024-12-11 10:06:42.803998] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.382 [2024-12-11 10:06:42.804055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.382 [2024-12-11 10:06:42.804068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.382 [2024-12-11 10:06:42.804076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.382 [2024-12-11 10:06:42.804082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:33.382 [2024-12-11 10:06:42.804095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:33.382 qpair failed and we were unable to recover it.
00:28:33.382 [2024-12-11 10:06:42.814081] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.382 [2024-12-11 10:06:42.814166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.382 [2024-12-11 10:06:42.814179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.382 [2024-12-11 10:06:42.814186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.382 [2024-12-11 10:06:42.814192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:33.382 [2024-12-11 10:06:42.814205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:33.382 qpair failed and we were unable to recover it.
00:28:33.382 [2024-12-11 10:06:42.824054] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.382 [2024-12-11 10:06:42.824141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.382 [2024-12-11 10:06:42.824155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.382 [2024-12-11 10:06:42.824161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.382 [2024-12-11 10:06:42.824167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:33.382 [2024-12-11 10:06:42.824180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:33.382 qpair failed and we were unable to recover it.
00:28:33.382 [2024-12-11 10:06:42.834077] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.382 [2024-12-11 10:06:42.834154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.382 [2024-12-11 10:06:42.834167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.382 [2024-12-11 10:06:42.834174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.382 [2024-12-11 10:06:42.834179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:33.382 [2024-12-11 10:06:42.834193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:33.382 qpair failed and we were unable to recover it.
00:28:33.382 [2024-12-11 10:06:42.844134] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.382 [2024-12-11 10:06:42.844228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.382 [2024-12-11 10:06:42.844242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.382 [2024-12-11 10:06:42.844251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.382 [2024-12-11 10:06:42.844257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:33.382 [2024-12-11 10:06:42.844271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:33.382 qpair failed and we were unable to recover it.
00:28:33.382 [2024-12-11 10:06:42.854207] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.382 [2024-12-11 10:06:42.854268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.382 [2024-12-11 10:06:42.854281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.382 [2024-12-11 10:06:42.854288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.382 [2024-12-11 10:06:42.854294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:33.382 [2024-12-11 10:06:42.854308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:33.382 qpair failed and we were unable to recover it.
00:28:33.382 [2024-12-11 10:06:42.864156] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.382 [2024-12-11 10:06:42.864261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.382 [2024-12-11 10:06:42.864274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.382 [2024-12-11 10:06:42.864281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.382 [2024-12-11 10:06:42.864287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:33.382 [2024-12-11 10:06:42.864301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:33.382 qpair failed and we were unable to recover it.
00:28:33.382 [2024-12-11 10:06:42.874190] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.382 [2024-12-11 10:06:42.874253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.383 [2024-12-11 10:06:42.874266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.383 [2024-12-11 10:06:42.874272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.383 [2024-12-11 10:06:42.874278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:33.383 [2024-12-11 10:06:42.874292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:33.383 qpair failed and we were unable to recover it.
00:28:33.383 [2024-12-11 10:06:42.884215] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.383 [2024-12-11 10:06:42.884276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.383 [2024-12-11 10:06:42.884289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.383 [2024-12-11 10:06:42.884296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.383 [2024-12-11 10:06:42.884302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:33.383 [2024-12-11 10:06:42.884319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:33.383 qpair failed and we were unable to recover it.
00:28:33.383 [2024-12-11 10:06:42.894332] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.383 [2024-12-11 10:06:42.894399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.383 [2024-12-11 10:06:42.894412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.383 [2024-12-11 10:06:42.894419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.383 [2024-12-11 10:06:42.894425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:33.383 [2024-12-11 10:06:42.894438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:33.383 qpair failed and we were unable to recover it.
00:28:33.383 [2024-12-11 10:06:42.904267] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.383 [2024-12-11 10:06:42.904329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.383 [2024-12-11 10:06:42.904341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.383 [2024-12-11 10:06:42.904348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.383 [2024-12-11 10:06:42.904354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:33.383 [2024-12-11 10:06:42.904367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:33.383 qpair failed and we were unable to recover it.
00:28:33.383 [2024-12-11 10:06:42.914337] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.383 [2024-12-11 10:06:42.914401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.383 [2024-12-11 10:06:42.914414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.383 [2024-12-11 10:06:42.914420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.383 [2024-12-11 10:06:42.914426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:33.383 [2024-12-11 10:06:42.914439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:33.383 qpair failed and we were unable to recover it.
00:28:33.383 [2024-12-11 10:06:42.924331] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.383 [2024-12-11 10:06:42.924384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.383 [2024-12-11 10:06:42.924397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.383 [2024-12-11 10:06:42.924404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.383 [2024-12-11 10:06:42.924410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:33.383 [2024-12-11 10:06:42.924423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:33.383 qpair failed and we were unable to recover it.
00:28:33.383 [2024-12-11 10:06:42.934369] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.383 [2024-12-11 10:06:42.934427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.383 [2024-12-11 10:06:42.934439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.383 [2024-12-11 10:06:42.934446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.383 [2024-12-11 10:06:42.934451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:33.383 [2024-12-11 10:06:42.934465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:33.383 qpair failed and we were unable to recover it.
00:28:33.383 [2024-12-11 10:06:42.944492] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.383 [2024-12-11 10:06:42.944550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.383 [2024-12-11 10:06:42.944563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.383 [2024-12-11 10:06:42.944570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.383 [2024-12-11 10:06:42.944576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:33.383 [2024-12-11 10:06:42.944590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:33.383 qpair failed and we were unable to recover it.
00:28:33.643 [2024-12-11 10:06:42.954578] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.643 [2024-12-11 10:06:42.954638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.643 [2024-12-11 10:06:42.954659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.643 [2024-12-11 10:06:42.954670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.643 [2024-12-11 10:06:42.954679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:33.643 [2024-12-11 10:06:42.954700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:33.643 qpair failed and we were unable to recover it.
00:28:33.643 [2024-12-11 10:06:42.964520] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.643 [2024-12-11 10:06:42.964574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.643 [2024-12-11 10:06:42.964590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.643 [2024-12-11 10:06:42.964598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.643 [2024-12-11 10:06:42.964604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:33.643 [2024-12-11 10:06:42.964619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:33.643 qpair failed and we were unable to recover it.
00:28:33.643 [2024-12-11 10:06:42.974475] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.643 [2024-12-11 10:06:42.974530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.643 [2024-12-11 10:06:42.974543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.643 [2024-12-11 10:06:42.974553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.643 [2024-12-11 10:06:42.974559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:33.643 [2024-12-11 10:06:42.974573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:33.643 qpair failed and we were unable to recover it.
00:28:33.643 [2024-12-11 10:06:42.984497] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.643 [2024-12-11 10:06:42.984554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.643 [2024-12-11 10:06:42.984567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.643 [2024-12-11 10:06:42.984574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.643 [2024-12-11 10:06:42.984580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500
00:28:33.643 [2024-12-11 10:06:42.984594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:33.643 qpair failed and we were unable to recover it.
00:28:33.643 [2024-12-11 10:06:42.994528] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.643 [2024-12-11 10:06:42.994626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.643 [2024-12-11 10:06:42.994640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.643 [2024-12-11 10:06:42.994648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.643 [2024-12-11 10:06:42.994656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:33.643 [2024-12-11 10:06:42.994670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.643 qpair failed and we were unable to recover it. 00:28:33.643 [2024-12-11 10:06:43.004550] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.643 [2024-12-11 10:06:43.004618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.643 [2024-12-11 10:06:43.004632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.643 [2024-12-11 10:06:43.004639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.643 [2024-12-11 10:06:43.004645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:33.643 [2024-12-11 10:06:43.004658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.643 qpair failed and we were unable to recover it. 00:28:33.643 [2024-12-11 10:06:43.014685] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.643 [2024-12-11 10:06:43.014756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.643 [2024-12-11 10:06:43.014769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.644 [2024-12-11 10:06:43.014775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.644 [2024-12-11 10:06:43.014781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:33.644 [2024-12-11 10:06:43.014798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.644 qpair failed and we were unable to recover it. 
00:28:33.644 [2024-12-11 10:06:43.024752] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.644 [2024-12-11 10:06:43.024809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.644 [2024-12-11 10:06:43.024822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.644 [2024-12-11 10:06:43.024828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.644 [2024-12-11 10:06:43.024834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:33.644 [2024-12-11 10:06:43.024847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.644 qpair failed and we were unable to recover it. 00:28:33.644 [2024-12-11 10:06:43.034694] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.644 [2024-12-11 10:06:43.034747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.644 [2024-12-11 10:06:43.034759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.644 [2024-12-11 10:06:43.034766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.644 [2024-12-11 10:06:43.034772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:33.644 [2024-12-11 10:06:43.034786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.644 qpair failed and we were unable to recover it. 00:28:33.644 [2024-12-11 10:06:43.044738] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.644 [2024-12-11 10:06:43.044791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.644 [2024-12-11 10:06:43.044803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.644 [2024-12-11 10:06:43.044810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.644 [2024-12-11 10:06:43.044816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:33.644 [2024-12-11 10:06:43.044830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.644 qpair failed and we were unable to recover it. 
00:28:33.644 [2024-12-11 10:06:43.054701] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.644 [2024-12-11 10:06:43.054757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.644 [2024-12-11 10:06:43.054771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.644 [2024-12-11 10:06:43.054777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.644 [2024-12-11 10:06:43.054783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:33.644 [2024-12-11 10:06:43.054797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.644 qpair failed and we were unable to recover it. 00:28:33.644 [2024-12-11 10:06:43.064723] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.644 [2024-12-11 10:06:43.064783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.644 [2024-12-11 10:06:43.064795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.644 [2024-12-11 10:06:43.064802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.644 [2024-12-11 10:06:43.064808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:33.644 [2024-12-11 10:06:43.064821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.644 qpair failed and we were unable to recover it. 00:28:33.644 [2024-12-11 10:06:43.074821] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.644 [2024-12-11 10:06:43.074918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.644 [2024-12-11 10:06:43.074931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.644 [2024-12-11 10:06:43.074938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.644 [2024-12-11 10:06:43.074944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:33.644 [2024-12-11 10:06:43.074958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.644 qpair failed and we were unable to recover it. 
00:28:33.644 [2024-12-11 10:06:43.084768] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.644 [2024-12-11 10:06:43.084825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.644 [2024-12-11 10:06:43.084839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.644 [2024-12-11 10:06:43.084846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.644 [2024-12-11 10:06:43.084852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:33.644 [2024-12-11 10:06:43.084866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.644 qpair failed and we were unable to recover it. 00:28:33.644 [2024-12-11 10:06:43.094806] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.644 [2024-12-11 10:06:43.094863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.644 [2024-12-11 10:06:43.094876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.644 [2024-12-11 10:06:43.094883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.644 [2024-12-11 10:06:43.094889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:33.644 [2024-12-11 10:06:43.094902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.644 qpair failed and we were unable to recover it. 00:28:33.644 [2024-12-11 10:06:43.104919] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.644 [2024-12-11 10:06:43.104978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.644 [2024-12-11 10:06:43.104993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.644 [2024-12-11 10:06:43.105003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.644 [2024-12-11 10:06:43.105010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:33.644 [2024-12-11 10:06:43.105024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.644 qpair failed and we were unable to recover it. 
00:28:33.644 [2024-12-11 10:06:43.114859] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.644 [2024-12-11 10:06:43.114913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.644 [2024-12-11 10:06:43.114926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.644 [2024-12-11 10:06:43.114932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.644 [2024-12-11 10:06:43.114938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:33.644 [2024-12-11 10:06:43.114952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.644 qpair failed and we were unable to recover it. 00:28:33.644 [2024-12-11 10:06:43.125011] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.644 [2024-12-11 10:06:43.125061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.644 [2024-12-11 10:06:43.125074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.644 [2024-12-11 10:06:43.125081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.644 [2024-12-11 10:06:43.125087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:33.644 [2024-12-11 10:06:43.125101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.644 qpair failed and we were unable to recover it. 00:28:33.644 [2024-12-11 10:06:43.134932] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.644 [2024-12-11 10:06:43.135037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.644 [2024-12-11 10:06:43.135050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.644 [2024-12-11 10:06:43.135057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.644 [2024-12-11 10:06:43.135063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:33.644 [2024-12-11 10:06:43.135077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.644 qpair failed and we were unable to recover it. 
00:28:33.644 [2024-12-11 10:06:43.145015] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.644 [2024-12-11 10:06:43.145074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.644 [2024-12-11 10:06:43.145087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.644 [2024-12-11 10:06:43.145093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.644 [2024-12-11 10:06:43.145099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:33.644 [2024-12-11 10:06:43.145116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.644 qpair failed and we were unable to recover it. 00:28:33.645 [2024-12-11 10:06:43.155044] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.645 [2024-12-11 10:06:43.155093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.645 [2024-12-11 10:06:43.155106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.645 [2024-12-11 10:06:43.155113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.645 [2024-12-11 10:06:43.155119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:33.645 [2024-12-11 10:06:43.155132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.645 qpair failed and we were unable to recover it. 00:28:33.645 [2024-12-11 10:06:43.164996] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.645 [2024-12-11 10:06:43.165050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.645 [2024-12-11 10:06:43.165064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.645 [2024-12-11 10:06:43.165071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.645 [2024-12-11 10:06:43.165076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:33.645 [2024-12-11 10:06:43.165090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.645 qpair failed and we were unable to recover it. 
00:28:33.645 [2024-12-11 10:06:43.175143] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.645 [2024-12-11 10:06:43.175193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.645 [2024-12-11 10:06:43.175206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.645 [2024-12-11 10:06:43.175213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.645 [2024-12-11 10:06:43.175224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:33.645 [2024-12-11 10:06:43.175238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.645 qpair failed and we were unable to recover it. 00:28:33.645 [2024-12-11 10:06:43.185101] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.645 [2024-12-11 10:06:43.185154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.645 [2024-12-11 10:06:43.185166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.645 [2024-12-11 10:06:43.185173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.645 [2024-12-11 10:06:43.185178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:33.645 [2024-12-11 10:06:43.185193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.645 qpair failed and we were unable to recover it. 00:28:33.645 [2024-12-11 10:06:43.195082] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.645 [2024-12-11 10:06:43.195140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.645 [2024-12-11 10:06:43.195153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.645 [2024-12-11 10:06:43.195160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.645 [2024-12-11 10:06:43.195166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:33.645 [2024-12-11 10:06:43.195179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.645 qpair failed and we were unable to recover it. 
00:28:33.645 [2024-12-11 10:06:43.205176] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.645 [2024-12-11 10:06:43.205268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.645 [2024-12-11 10:06:43.205282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.645 [2024-12-11 10:06:43.205288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.645 [2024-12-11 10:06:43.205294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:33.645 [2024-12-11 10:06:43.205308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.645 qpair failed and we were unable to recover it. 00:28:33.645 [2024-12-11 10:06:43.215227] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.645 [2024-12-11 10:06:43.215284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.645 [2024-12-11 10:06:43.215300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.645 [2024-12-11 10:06:43.215308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.645 [2024-12-11 10:06:43.215314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:33.645 [2024-12-11 10:06:43.215329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.645 qpair failed and we were unable to recover it. 00:28:33.905 [2024-12-11 10:06:43.225255] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.905 [2024-12-11 10:06:43.225308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.905 [2024-12-11 10:06:43.225324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.905 [2024-12-11 10:06:43.225332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.905 [2024-12-11 10:06:43.225338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:33.905 [2024-12-11 10:06:43.225354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.905 qpair failed and we were unable to recover it. 
00:28:33.905 [2024-12-11 10:06:43.235273] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.905 [2024-12-11 10:06:43.235325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.905 [2024-12-11 10:06:43.235340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.905 [2024-12-11 10:06:43.235350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.905 [2024-12-11 10:06:43.235357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:33.905 [2024-12-11 10:06:43.235371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.905 qpair failed and we were unable to recover it. 00:28:33.905 [2024-12-11 10:06:43.245302] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.905 [2024-12-11 10:06:43.245358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.905 [2024-12-11 10:06:43.245371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.905 [2024-12-11 10:06:43.245378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.905 [2024-12-11 10:06:43.245384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:33.905 [2024-12-11 10:06:43.245398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.905 qpair failed and we were unable to recover it. 00:28:33.905 [2024-12-11 10:06:43.255337] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.905 [2024-12-11 10:06:43.255394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.905 [2024-12-11 10:06:43.255407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.905 [2024-12-11 10:06:43.255414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.905 [2024-12-11 10:06:43.255420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:33.905 [2024-12-11 10:06:43.255434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.905 qpair failed and we were unable to recover it. 
00:28:33.905 [2024-12-11 10:06:43.265360] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.905 [2024-12-11 10:06:43.265410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.905 [2024-12-11 10:06:43.265424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.905 [2024-12-11 10:06:43.265430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.905 [2024-12-11 10:06:43.265436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:33.905 [2024-12-11 10:06:43.265450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.905 qpair failed and we were unable to recover it. 00:28:33.905 [2024-12-11 10:06:43.275388] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.905 [2024-12-11 10:06:43.275439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.905 [2024-12-11 10:06:43.275452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.905 [2024-12-11 10:06:43.275459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.905 [2024-12-11 10:06:43.275464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:33.905 [2024-12-11 10:06:43.275477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.905 qpair failed and we were unable to recover it. 00:28:33.905 [2024-12-11 10:06:43.285443] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.905 [2024-12-11 10:06:43.285495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.905 [2024-12-11 10:06:43.285508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.905 [2024-12-11 10:06:43.285515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.905 [2024-12-11 10:06:43.285520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:33.905 [2024-12-11 10:06:43.285534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.905 qpair failed and we were unable to recover it. 
00:28:33.905 [2024-12-11 10:06:43.295462] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.905 [2024-12-11 10:06:43.295517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.905 [2024-12-11 10:06:43.295531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.905 [2024-12-11 10:06:43.295537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.905 [2024-12-11 10:06:43.295543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:33.905 [2024-12-11 10:06:43.295557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.905 qpair failed and we were unable to recover it. 00:28:33.905 [2024-12-11 10:06:43.305483] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.905 [2024-12-11 10:06:43.305536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.905 [2024-12-11 10:06:43.305549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.905 [2024-12-11 10:06:43.305556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.905 [2024-12-11 10:06:43.305563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:33.906 [2024-12-11 10:06:43.305577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.906 qpair failed and we were unable to recover it. 00:28:33.906 [2024-12-11 10:06:43.315493] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.906 [2024-12-11 10:06:43.315550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.906 [2024-12-11 10:06:43.315563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.906 [2024-12-11 10:06:43.315570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.906 [2024-12-11 10:06:43.315575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:33.906 [2024-12-11 10:06:43.315589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.906 qpair failed and we were unable to recover it. 
00:28:33.906 [2024-12-11 10:06:43.325531] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.906 [2024-12-11 10:06:43.325588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.906 [2024-12-11 10:06:43.325601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.906 [2024-12-11 10:06:43.325608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.906 [2024-12-11 10:06:43.325614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:33.906 [2024-12-11 10:06:43.325628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.906 qpair failed and we were unable to recover it. 00:28:33.906 [2024-12-11 10:06:43.335574] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.906 [2024-12-11 10:06:43.335626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.906 [2024-12-11 10:06:43.335642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.906 [2024-12-11 10:06:43.335649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.906 [2024-12-11 10:06:43.335655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:33.906 [2024-12-11 10:06:43.335671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.906 qpair failed and we were unable to recover it. 00:28:33.906 [2024-12-11 10:06:43.345589] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.906 [2024-12-11 10:06:43.345644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.906 [2024-12-11 10:06:43.345657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.906 [2024-12-11 10:06:43.345664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.906 [2024-12-11 10:06:43.345670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:33.906 [2024-12-11 10:06:43.345684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.906 qpair failed and we were unable to recover it. 
00:28:33.906 [2024-12-11 10:06:43.355625] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.906 [2024-12-11 10:06:43.355680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.906 [2024-12-11 10:06:43.355693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.906 [2024-12-11 10:06:43.355700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.906 [2024-12-11 10:06:43.355706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:33.906 [2024-12-11 10:06:43.355721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.906 qpair failed and we were unable to recover it. 00:28:33.906 [2024-12-11 10:06:43.365645] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.906 [2024-12-11 10:06:43.365702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.906 [2024-12-11 10:06:43.365715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.906 [2024-12-11 10:06:43.365725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.906 [2024-12-11 10:06:43.365731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:33.906 [2024-12-11 10:06:43.365745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.906 qpair failed and we were unable to recover it. 00:28:33.906 [2024-12-11 10:06:43.375698] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.906 [2024-12-11 10:06:43.375750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.906 [2024-12-11 10:06:43.375763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.906 [2024-12-11 10:06:43.375769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.906 [2024-12-11 10:06:43.375775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:33.906 [2024-12-11 10:06:43.375789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.906 qpair failed and we were unable to recover it. 
00:28:33.906 [2024-12-11 10:06:43.385724] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.906 [2024-12-11 10:06:43.385781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.906 [2024-12-11 10:06:43.385794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.906 [2024-12-11 10:06:43.385801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.906 [2024-12-11 10:06:43.385807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:33.906 [2024-12-11 10:06:43.385821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.906 qpair failed and we were unable to recover it. 00:28:33.906 [2024-12-11 10:06:43.395740] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.906 [2024-12-11 10:06:43.395800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.906 [2024-12-11 10:06:43.395814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.906 [2024-12-11 10:06:43.395821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.906 [2024-12-11 10:06:43.395826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:33.906 [2024-12-11 10:06:43.395841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.906 qpair failed and we were unable to recover it. 00:28:33.906 [2024-12-11 10:06:43.405773] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.906 [2024-12-11 10:06:43.405821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.906 [2024-12-11 10:06:43.405834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.906 [2024-12-11 10:06:43.405841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.906 [2024-12-11 10:06:43.405847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:33.906 [2024-12-11 10:06:43.405861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.906 qpair failed and we were unable to recover it. 
00:28:33.906 [2024-12-11 10:06:43.415808] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.906 [2024-12-11 10:06:43.415863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.906 [2024-12-11 10:06:43.415876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.906 [2024-12-11 10:06:43.415883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.906 [2024-12-11 10:06:43.415889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:33.906 [2024-12-11 10:06:43.415902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.906 qpair failed and we were unable to recover it. 00:28:33.906 [2024-12-11 10:06:43.425773] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.906 [2024-12-11 10:06:43.425858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.906 [2024-12-11 10:06:43.425873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.906 [2024-12-11 10:06:43.425880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.906 [2024-12-11 10:06:43.425885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:33.906 [2024-12-11 10:06:43.425900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.906 qpair failed and we were unable to recover it. 00:28:33.906 [2024-12-11 10:06:43.435917] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.906 [2024-12-11 10:06:43.435969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.906 [2024-12-11 10:06:43.435981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.906 [2024-12-11 10:06:43.435988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.906 [2024-12-11 10:06:43.435994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:33.906 [2024-12-11 10:06:43.436008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.906 qpair failed and we were unable to recover it. 
00:28:33.906 [2024-12-11 10:06:43.445884] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.906 [2024-12-11 10:06:43.445939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.907 [2024-12-11 10:06:43.445952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.907 [2024-12-11 10:06:43.445959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.907 [2024-12-11 10:06:43.445965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:33.907 [2024-12-11 10:06:43.445979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.907 qpair failed and we were unable to recover it. 00:28:33.907 [2024-12-11 10:06:43.455911] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.907 [2024-12-11 10:06:43.455966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.907 [2024-12-11 10:06:43.455979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.907 [2024-12-11 10:06:43.455986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.907 [2024-12-11 10:06:43.455992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:33.907 [2024-12-11 10:06:43.456006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.907 qpair failed and we were unable to recover it. 00:28:33.907 [2024-12-11 10:06:43.465993] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.907 [2024-12-11 10:06:43.466056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.907 [2024-12-11 10:06:43.466070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.907 [2024-12-11 10:06:43.466077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.907 [2024-12-11 10:06:43.466083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:33.907 [2024-12-11 10:06:43.466097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.907 qpair failed and we were unable to recover it. 
00:28:33.907 [2024-12-11 10:06:43.476026] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.907 [2024-12-11 10:06:43.476080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.907 [2024-12-11 10:06:43.476097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.907 [2024-12-11 10:06:43.476104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.907 [2024-12-11 10:06:43.476110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:33.907 [2024-12-11 10:06:43.476124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.907 qpair failed and we were unable to recover it. 00:28:34.167 [2024-12-11 10:06:43.485988] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.167 [2024-12-11 10:06:43.486035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.167 [2024-12-11 10:06:43.486052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.167 [2024-12-11 10:06:43.486059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.167 [2024-12-11 10:06:43.486066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.167 [2024-12-11 10:06:43.486082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.167 qpair failed and we were unable to recover it. 00:28:34.167 [2024-12-11 10:06:43.496026] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.167 [2024-12-11 10:06:43.496083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.167 [2024-12-11 10:06:43.496097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.167 [2024-12-11 10:06:43.496107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.167 [2024-12-11 10:06:43.496113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.167 [2024-12-11 10:06:43.496128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.167 qpair failed and we were unable to recover it. 
00:28:34.167 [2024-12-11 10:06:43.506057] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.167 [2024-12-11 10:06:43.506107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.167 [2024-12-11 10:06:43.506121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.167 [2024-12-11 10:06:43.506128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.167 [2024-12-11 10:06:43.506134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.167 [2024-12-11 10:06:43.506149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.167 qpair failed and we were unable to recover it. 00:28:34.167 [2024-12-11 10:06:43.516046] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.167 [2024-12-11 10:06:43.516131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.167 [2024-12-11 10:06:43.516145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.167 [2024-12-11 10:06:43.516152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.167 [2024-12-11 10:06:43.516159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.167 [2024-12-11 10:06:43.516172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.167 qpair failed and we were unable to recover it. 00:28:34.167 [2024-12-11 10:06:43.526053] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.167 [2024-12-11 10:06:43.526109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.167 [2024-12-11 10:06:43.526123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.167 [2024-12-11 10:06:43.526130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.167 [2024-12-11 10:06:43.526137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.167 [2024-12-11 10:06:43.526151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.167 qpair failed and we were unable to recover it. 
00:28:34.167 [2024-12-11 10:06:43.536147] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.167 [2024-12-11 10:06:43.536203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.167 [2024-12-11 10:06:43.536221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.167 [2024-12-11 10:06:43.536228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.167 [2024-12-11 10:06:43.536234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.167 [2024-12-11 10:06:43.536248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.167 qpair failed and we were unable to recover it. 00:28:34.167 [2024-12-11 10:06:43.546170] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.167 [2024-12-11 10:06:43.546230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.167 [2024-12-11 10:06:43.546244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.167 [2024-12-11 10:06:43.546251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.167 [2024-12-11 10:06:43.546257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.167 [2024-12-11 10:06:43.546271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.167 qpair failed and we were unable to recover it. 00:28:34.167 [2024-12-11 10:06:43.556207] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.167 [2024-12-11 10:06:43.556260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.167 [2024-12-11 10:06:43.556273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.167 [2024-12-11 10:06:43.556280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.167 [2024-12-11 10:06:43.556286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.167 [2024-12-11 10:06:43.556300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.167 qpair failed and we were unable to recover it. 
00:28:34.167 [2024-12-11 10:06:43.566216] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.167 [2024-12-11 10:06:43.566268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.167 [2024-12-11 10:06:43.566281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.167 [2024-12-11 10:06:43.566288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.167 [2024-12-11 10:06:43.566294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.167 [2024-12-11 10:06:43.566308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.167 qpair failed and we were unable to recover it. 00:28:34.167 [2024-12-11 10:06:43.576254] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.167 [2024-12-11 10:06:43.576309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.167 [2024-12-11 10:06:43.576323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.167 [2024-12-11 10:06:43.576329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.167 [2024-12-11 10:06:43.576336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.167 [2024-12-11 10:06:43.576350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.167 qpair failed and we were unable to recover it. 00:28:34.167 [2024-12-11 10:06:43.586281] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.168 [2024-12-11 10:06:43.586341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.168 [2024-12-11 10:06:43.586354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.168 [2024-12-11 10:06:43.586361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.168 [2024-12-11 10:06:43.586367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.168 [2024-12-11 10:06:43.586380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.168 qpair failed and we were unable to recover it. 
00:28:34.168 [2024-12-11 10:06:43.596303] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.168 [2024-12-11 10:06:43.596358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.168 [2024-12-11 10:06:43.596372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.168 [2024-12-11 10:06:43.596379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.168 [2024-12-11 10:06:43.596385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.168 [2024-12-11 10:06:43.596398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.168 qpair failed and we were unable to recover it. 00:28:34.168 [2024-12-11 10:06:43.606256] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.168 [2024-12-11 10:06:43.606315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.168 [2024-12-11 10:06:43.606328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.168 [2024-12-11 10:06:43.606335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.168 [2024-12-11 10:06:43.606342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.168 [2024-12-11 10:06:43.606355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.168 qpair failed and we were unable to recover it. 00:28:34.168 [2024-12-11 10:06:43.616317] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.168 [2024-12-11 10:06:43.616373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.168 [2024-12-11 10:06:43.616386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.168 [2024-12-11 10:06:43.616393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.168 [2024-12-11 10:06:43.616398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.168 [2024-12-11 10:06:43.616412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.168 qpair failed and we were unable to recover it. 
00:28:34.168 [2024-12-11 10:06:43.626402] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.168 [2024-12-11 10:06:43.626453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.168 [2024-12-11 10:06:43.626466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.168 [2024-12-11 10:06:43.626476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.168 [2024-12-11 10:06:43.626482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.168 [2024-12-11 10:06:43.626496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.168 qpair failed and we were unable to recover it. 00:28:34.168 [2024-12-11 10:06:43.636466] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.168 [2024-12-11 10:06:43.636569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.168 [2024-12-11 10:06:43.636585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.168 [2024-12-11 10:06:43.636591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.168 [2024-12-11 10:06:43.636598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.168 [2024-12-11 10:06:43.636611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.168 qpair failed and we were unable to recover it. 00:28:34.168 [2024-12-11 10:06:43.646463] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.168 [2024-12-11 10:06:43.646527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.168 [2024-12-11 10:06:43.646540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.168 [2024-12-11 10:06:43.646547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.168 [2024-12-11 10:06:43.646553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.168 [2024-12-11 10:06:43.646567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.168 qpair failed and we were unable to recover it. 
00:28:34.168 [2024-12-11 10:06:43.656531] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.168 [2024-12-11 10:06:43.656599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.168 [2024-12-11 10:06:43.656612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.168 [2024-12-11 10:06:43.656619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.168 [2024-12-11 10:06:43.656625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.168 [2024-12-11 10:06:43.656638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.168 qpair failed and we were unable to recover it. 00:28:34.168 [2024-12-11 10:06:43.666495] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.168 [2024-12-11 10:06:43.666550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.168 [2024-12-11 10:06:43.666563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.168 [2024-12-11 10:06:43.666570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.168 [2024-12-11 10:06:43.666576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.168 [2024-12-11 10:06:43.666590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.168 qpair failed and we were unable to recover it. 00:28:34.168 [2024-12-11 10:06:43.676572] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.168 [2024-12-11 10:06:43.676634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.168 [2024-12-11 10:06:43.676647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.168 [2024-12-11 10:06:43.676654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.168 [2024-12-11 10:06:43.676659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.168 [2024-12-11 10:06:43.676673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.168 qpair failed and we were unable to recover it. 
00:28:34.168 [2024-12-11 10:06:43.686568] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.168 [2024-12-11 10:06:43.686654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.168 [2024-12-11 10:06:43.686668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.168 [2024-12-11 10:06:43.686675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.168 [2024-12-11 10:06:43.686680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.168 [2024-12-11 10:06:43.686694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.168 qpair failed and we were unable to recover it. 00:28:34.168 [2024-12-11 10:06:43.696542] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.168 [2024-12-11 10:06:43.696596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.168 [2024-12-11 10:06:43.696609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.168 [2024-12-11 10:06:43.696616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.168 [2024-12-11 10:06:43.696622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.168 [2024-12-11 10:06:43.696636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.168 qpair failed and we were unable to recover it. 00:28:34.168 [2024-12-11 10:06:43.706587] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.168 [2024-12-11 10:06:43.706674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.168 [2024-12-11 10:06:43.706687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.168 [2024-12-11 10:06:43.706693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.168 [2024-12-11 10:06:43.706699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.168 [2024-12-11 10:06:43.706713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.168 qpair failed and we were unable to recover it. 
00:28:34.168 [2024-12-11 10:06:43.716648] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.168 [2024-12-11 10:06:43.716702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.168 [2024-12-11 10:06:43.716716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.168 [2024-12-11 10:06:43.716722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.168 [2024-12-11 10:06:43.716728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.168 [2024-12-11 10:06:43.716741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.168 qpair failed and we were unable to recover it. 00:28:34.169 [2024-12-11 10:06:43.726723] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.169 [2024-12-11 10:06:43.726774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.169 [2024-12-11 10:06:43.726787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.169 [2024-12-11 10:06:43.726794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.169 [2024-12-11 10:06:43.726800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.169 [2024-12-11 10:06:43.726814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.169 qpair failed and we were unable to recover it. 00:28:34.169 [2024-12-11 10:06:43.736773] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.169 [2024-12-11 10:06:43.736836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.169 [2024-12-11 10:06:43.736852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.169 [2024-12-11 10:06:43.736859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.169 [2024-12-11 10:06:43.736865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.169 [2024-12-11 10:06:43.736880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.169 qpair failed and we were unable to recover it. 
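[Editor's note] Every CONNECT failure above carries the same completion status: sct 1 (command-specific status type) with sc 130, i.e. 0x82, which the NVMe-oF specification defines for the Fabrics CONNECT command as Connect Invalid Parameters. Together with the target-side "Unknown controller ID 0x1" message, this is consistent with the host trying to (re)attach I/O qpair id 3 to an admin controller the target no longer tracks. A hypothetical one-liner (not part of the test scripts; it assumes this console output was saved to build.log) shows the run contains only this one status signature:

# Hypothetical helper, assuming the console log was saved as build.log.
# sct 1 = command-specific status type; for a Fabrics CONNECT, sc 130
# (0x82) is Connect Invalid Parameters.
grep -o 'sct [0-9]*, sc [0-9]*' build.log | sort | uniq -c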
00:28:34.429 [2024-12-11 10:06:43.746777] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.429 [2024-12-11 10:06:43.746843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.429 [2024-12-11 10:06:43.746860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.429 [2024-12-11 10:06:43.746867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.429 [2024-12-11 10:06:43.746873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.429 [2024-12-11 10:06:43.746888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.429 qpair failed and we were unable to recover it. 00:28:34.429 [2024-12-11 10:06:43.756795] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.429 [2024-12-11 10:06:43.756854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.429 [2024-12-11 10:06:43.756868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.429 [2024-12-11 10:06:43.756878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.429 [2024-12-11 10:06:43.756884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.429 [2024-12-11 10:06:43.756899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.429 qpair failed and we were unable to recover it. 00:28:34.429 [2024-12-11 10:06:43.766792] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.429 [2024-12-11 10:06:43.766845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.429 [2024-12-11 10:06:43.766858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.429 [2024-12-11 10:06:43.766865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.429 [2024-12-11 10:06:43.766871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.429 [2024-12-11 10:06:43.766885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.429 qpair failed and we were unable to recover it. 
00:28:34.429 [2024-12-11 10:06:43.776836] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.429 [2024-12-11 10:06:43.776892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.429 [2024-12-11 10:06:43.776905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.429 [2024-12-11 10:06:43.776912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.429 [2024-12-11 10:06:43.776918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.429 [2024-12-11 10:06:43.776932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.429 qpair failed and we were unable to recover it. 00:28:34.429 [2024-12-11 10:06:43.786890] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.429 [2024-12-11 10:06:43.786944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.429 [2024-12-11 10:06:43.786957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.429 [2024-12-11 10:06:43.786964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.429 [2024-12-11 10:06:43.786970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.429 [2024-12-11 10:06:43.786984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.429 qpair failed and we were unable to recover it. 00:28:34.429 [2024-12-11 10:06:43.796885] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.429 [2024-12-11 10:06:43.796936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.429 [2024-12-11 10:06:43.796949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.429 [2024-12-11 10:06:43.796956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.429 [2024-12-11 10:06:43.796962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.429 [2024-12-11 10:06:43.796976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.429 qpair failed and we were unable to recover it. 
00:28:34.429 [2024-12-11 10:06:43.806936] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.429 [2024-12-11 10:06:43.807009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.429 [2024-12-11 10:06:43.807023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.429 [2024-12-11 10:06:43.807030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.429 [2024-12-11 10:06:43.807036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.429 [2024-12-11 10:06:43.807049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.429 qpair failed and we were unable to recover it. 00:28:34.430 [2024-12-11 10:06:43.816947] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.430 [2024-12-11 10:06:43.817003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.430 [2024-12-11 10:06:43.817016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.430 [2024-12-11 10:06:43.817023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.430 [2024-12-11 10:06:43.817029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.430 [2024-12-11 10:06:43.817043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.430 qpair failed and we were unable to recover it. 00:28:34.430 [2024-12-11 10:06:43.826972] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.430 [2024-12-11 10:06:43.827023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.430 [2024-12-11 10:06:43.827036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.430 [2024-12-11 10:06:43.827042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.430 [2024-12-11 10:06:43.827049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.430 [2024-12-11 10:06:43.827062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.430 qpair failed and we were unable to recover it. 
00:28:34.430 [2024-12-11 10:06:43.837012] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.430 [2024-12-11 10:06:43.837063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.430 [2024-12-11 10:06:43.837077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.430 [2024-12-11 10:06:43.837083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.430 [2024-12-11 10:06:43.837089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.430 [2024-12-11 10:06:43.837103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.430 qpair failed and we were unable to recover it. 00:28:34.430 [2024-12-11 10:06:43.846993] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.430 [2024-12-11 10:06:43.847079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.430 [2024-12-11 10:06:43.847092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.430 [2024-12-11 10:06:43.847099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.430 [2024-12-11 10:06:43.847105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.430 [2024-12-11 10:06:43.847118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.430 qpair failed and we were unable to recover it. 00:28:34.430 [2024-12-11 10:06:43.857100] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.430 [2024-12-11 10:06:43.857160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.430 [2024-12-11 10:06:43.857174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.430 [2024-12-11 10:06:43.857181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.430 [2024-12-11 10:06:43.857187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.430 [2024-12-11 10:06:43.857201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.430 qpair failed and we were unable to recover it. 
00:28:34.430 [2024-12-11 10:06:43.867075] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.430 [2024-12-11 10:06:43.867160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.430 [2024-12-11 10:06:43.867174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.430 [2024-12-11 10:06:43.867180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.430 [2024-12-11 10:06:43.867187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.430 [2024-12-11 10:06:43.867200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.430 qpair failed and we were unable to recover it. 00:28:34.430 [2024-12-11 10:06:43.877148] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.430 [2024-12-11 10:06:43.877246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.430 [2024-12-11 10:06:43.877260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.430 [2024-12-11 10:06:43.877267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.430 [2024-12-11 10:06:43.877272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.430 [2024-12-11 10:06:43.877286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.430 qpair failed and we were unable to recover it. 00:28:34.430 [2024-12-11 10:06:43.887138] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.430 [2024-12-11 10:06:43.887240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.430 [2024-12-11 10:06:43.887253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.430 [2024-12-11 10:06:43.887263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.430 [2024-12-11 10:06:43.887269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.430 [2024-12-11 10:06:43.887282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.430 qpair failed and we were unable to recover it. 
00:28:34.430 [2024-12-11 10:06:43.897234] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.430 [2024-12-11 10:06:43.897305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.430 [2024-12-11 10:06:43.897319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.430 [2024-12-11 10:06:43.897326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.430 [2024-12-11 10:06:43.897332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.430 [2024-12-11 10:06:43.897345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.430 qpair failed and we were unable to recover it. 00:28:34.430 [2024-12-11 10:06:43.907194] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.430 [2024-12-11 10:06:43.907254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.430 [2024-12-11 10:06:43.907267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.430 [2024-12-11 10:06:43.907274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.430 [2024-12-11 10:06:43.907280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.430 [2024-12-11 10:06:43.907294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.430 qpair failed and we were unable to recover it. 00:28:34.430 [2024-12-11 10:06:43.917241] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.430 [2024-12-11 10:06:43.917293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.430 [2024-12-11 10:06:43.917306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.430 [2024-12-11 10:06:43.917313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.430 [2024-12-11 10:06:43.917319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.430 [2024-12-11 10:06:43.917332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.430 qpair failed and we were unable to recover it. 
00:28:34.430 [2024-12-11 10:06:43.927225] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.430 [2024-12-11 10:06:43.927276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.430 [2024-12-11 10:06:43.927290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.430 [2024-12-11 10:06:43.927296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.430 [2024-12-11 10:06:43.927302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.430 [2024-12-11 10:06:43.927316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.430 qpair failed and we were unable to recover it. 00:28:34.430 [2024-12-11 10:06:43.937289] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.430 [2024-12-11 10:06:43.937366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.430 [2024-12-11 10:06:43.937379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.430 [2024-12-11 10:06:43.937386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.430 [2024-12-11 10:06:43.937392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.430 [2024-12-11 10:06:43.937406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.430 qpair failed and we were unable to recover it. 00:28:34.430 [2024-12-11 10:06:43.947332] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.430 [2024-12-11 10:06:43.947385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.431 [2024-12-11 10:06:43.947397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.431 [2024-12-11 10:06:43.947403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.431 [2024-12-11 10:06:43.947409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.431 [2024-12-11 10:06:43.947423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.431 qpair failed and we were unable to recover it. 
00:28:34.431 [2024-12-11 10:06:43.957354] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.431 [2024-12-11 10:06:43.957403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.431 [2024-12-11 10:06:43.957417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.431 [2024-12-11 10:06:43.957423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.431 [2024-12-11 10:06:43.957429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.431 [2024-12-11 10:06:43.957443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.431 qpair failed and we were unable to recover it. 00:28:34.431 [2024-12-11 10:06:43.967392] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.431 [2024-12-11 10:06:43.967476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.431 [2024-12-11 10:06:43.967489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.431 [2024-12-11 10:06:43.967496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.431 [2024-12-11 10:06:43.967502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.431 [2024-12-11 10:06:43.967516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.431 qpair failed and we were unable to recover it. 00:28:34.431 [2024-12-11 10:06:43.977405] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.431 [2024-12-11 10:06:43.977462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.431 [2024-12-11 10:06:43.977475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.431 [2024-12-11 10:06:43.977481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.431 [2024-12-11 10:06:43.977487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.431 [2024-12-11 10:06:43.977500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.431 qpair failed and we were unable to recover it. 
00:28:34.431 [2024-12-11 10:06:43.987427] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.431 [2024-12-11 10:06:43.987482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.431 [2024-12-11 10:06:43.987495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.431 [2024-12-11 10:06:43.987502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.431 [2024-12-11 10:06:43.987508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.431 [2024-12-11 10:06:43.987522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.431 qpair failed and we were unable to recover it. 00:28:34.431 [2024-12-11 10:06:43.997460] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.431 [2024-12-11 10:06:43.997518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.431 [2024-12-11 10:06:43.997533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.431 [2024-12-11 10:06:43.997541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.431 [2024-12-11 10:06:43.997547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.431 [2024-12-11 10:06:43.997563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.431 qpair failed and we were unable to recover it. 00:28:34.691 [2024-12-11 10:06:44.007503] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.691 [2024-12-11 10:06:44.007561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.691 [2024-12-11 10:06:44.007577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.691 [2024-12-11 10:06:44.007584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.691 [2024-12-11 10:06:44.007590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.691 [2024-12-11 10:06:44.007606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.691 qpair failed and we were unable to recover it. 
00:28:34.691 [2024-12-11 10:06:44.017497] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.691 [2024-12-11 10:06:44.017550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.691 [2024-12-11 10:06:44.017565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.691 [2024-12-11 10:06:44.017575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.691 [2024-12-11 10:06:44.017581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.691 [2024-12-11 10:06:44.017596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.691 qpair failed and we were unable to recover it. 00:28:34.691 [2024-12-11 10:06:44.027572] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.691 [2024-12-11 10:06:44.027631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.691 [2024-12-11 10:06:44.027645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.691 [2024-12-11 10:06:44.027652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.691 [2024-12-11 10:06:44.027657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.691 [2024-12-11 10:06:44.027672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.691 qpair failed and we were unable to recover it. 00:28:34.691 [2024-12-11 10:06:44.037568] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.691 [2024-12-11 10:06:44.037625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.691 [2024-12-11 10:06:44.037639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.691 [2024-12-11 10:06:44.037646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.691 [2024-12-11 10:06:44.037652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.691 [2024-12-11 10:06:44.037667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.691 qpair failed and we were unable to recover it. 
00:28:34.691 [2024-12-11 10:06:44.047601] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.691 [2024-12-11 10:06:44.047674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.691 [2024-12-11 10:06:44.047687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.691 [2024-12-11 10:06:44.047694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.691 [2024-12-11 10:06:44.047700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.691 [2024-12-11 10:06:44.047714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.691 qpair failed and we were unable to recover it. 00:28:34.691 [2024-12-11 10:06:44.057658] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.691 [2024-12-11 10:06:44.057712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.691 [2024-12-11 10:06:44.057725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.691 [2024-12-11 10:06:44.057732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.691 [2024-12-11 10:06:44.057738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.691 [2024-12-11 10:06:44.057752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.691 qpair failed and we were unable to recover it. 00:28:34.691 [2024-12-11 10:06:44.067652] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.691 [2024-12-11 10:06:44.067741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.691 [2024-12-11 10:06:44.067754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.691 [2024-12-11 10:06:44.067761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.691 [2024-12-11 10:06:44.067767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.691 [2024-12-11 10:06:44.067780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.691 qpair failed and we were unable to recover it. 
00:28:34.691 [2024-12-11 10:06:44.077690] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.691 [2024-12-11 10:06:44.077745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.691 [2024-12-11 10:06:44.077758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.691 [2024-12-11 10:06:44.077764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.691 [2024-12-11 10:06:44.077770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.691 [2024-12-11 10:06:44.077783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.691 qpair failed and we were unable to recover it. 00:28:34.691 [2024-12-11 10:06:44.087628] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.691 [2024-12-11 10:06:44.087681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.691 [2024-12-11 10:06:44.087694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.691 [2024-12-11 10:06:44.087701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.691 [2024-12-11 10:06:44.087707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.691 [2024-12-11 10:06:44.087720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.691 qpair failed and we were unable to recover it. 00:28:34.691 [2024-12-11 10:06:44.097682] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.691 [2024-12-11 10:06:44.097734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.691 [2024-12-11 10:06:44.097747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.691 [2024-12-11 10:06:44.097753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.691 [2024-12-11 10:06:44.097760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.691 [2024-12-11 10:06:44.097773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.691 qpair failed and we were unable to recover it. 
00:28:34.691 [2024-12-11 10:06:44.107739] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.691 [2024-12-11 10:06:44.107797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.691 [2024-12-11 10:06:44.107810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.691 [2024-12-11 10:06:44.107816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.691 [2024-12-11 10:06:44.107822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.692 [2024-12-11 10:06:44.107836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.692 qpair failed and we were unable to recover it. 00:28:34.692 [2024-12-11 10:06:44.117826] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.692 [2024-12-11 10:06:44.117890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.692 [2024-12-11 10:06:44.117903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.692 [2024-12-11 10:06:44.117910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.692 [2024-12-11 10:06:44.117915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.692 [2024-12-11 10:06:44.117929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.692 qpair failed and we were unable to recover it. 00:28:34.692 [2024-12-11 10:06:44.127816] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.692 [2024-12-11 10:06:44.127904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.692 [2024-12-11 10:06:44.127918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.692 [2024-12-11 10:06:44.127925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.692 [2024-12-11 10:06:44.127930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.692 [2024-12-11 10:06:44.127945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.692 qpair failed and we were unable to recover it. 
00:28:34.692 [2024-12-11 10:06:44.137844] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.692 [2024-12-11 10:06:44.137918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.692 [2024-12-11 10:06:44.137931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.692 [2024-12-11 10:06:44.137938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.692 [2024-12-11 10:06:44.137944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.692 [2024-12-11 10:06:44.137958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.692 qpair failed and we were unable to recover it. 00:28:34.692 [2024-12-11 10:06:44.147878] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.692 [2024-12-11 10:06:44.147930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.692 [2024-12-11 10:06:44.147943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.692 [2024-12-11 10:06:44.147953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.692 [2024-12-11 10:06:44.147959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.692 [2024-12-11 10:06:44.147973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.692 qpair failed and we were unable to recover it. 00:28:34.692 [2024-12-11 10:06:44.157950] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.692 [2024-12-11 10:06:44.158002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.692 [2024-12-11 10:06:44.158016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.692 [2024-12-11 10:06:44.158023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.692 [2024-12-11 10:06:44.158028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.692 [2024-12-11 10:06:44.158042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.692 qpair failed and we were unable to recover it. 
00:28:34.692 [2024-12-11 10:06:44.167916] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.692 [2024-12-11 10:06:44.167970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.692 [2024-12-11 10:06:44.167984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.692 [2024-12-11 10:06:44.167990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.692 [2024-12-11 10:06:44.167996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.692 [2024-12-11 10:06:44.168011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.692 qpair failed and we were unable to recover it. 00:28:34.692 [2024-12-11 10:06:44.177959] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.692 [2024-12-11 10:06:44.178014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.692 [2024-12-11 10:06:44.178026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.692 [2024-12-11 10:06:44.178033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.692 [2024-12-11 10:06:44.178039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.692 [2024-12-11 10:06:44.178053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.692 qpair failed and we were unable to recover it. 00:28:34.692 [2024-12-11 10:06:44.188019] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.692 [2024-12-11 10:06:44.188071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.692 [2024-12-11 10:06:44.188084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.692 [2024-12-11 10:06:44.188091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.692 [2024-12-11 10:06:44.188097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.692 [2024-12-11 10:06:44.188111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.692 qpair failed and we were unable to recover it. 
00:28:34.692 [2024-12-11 10:06:44.198008] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.692 [2024-12-11 10:06:44.198064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.692 [2024-12-11 10:06:44.198078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.692 [2024-12-11 10:06:44.198085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.692 [2024-12-11 10:06:44.198091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.692 [2024-12-11 10:06:44.198104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.692 qpair failed and we were unable to recover it. 00:28:34.692 [2024-12-11 10:06:44.208032] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.692 [2024-12-11 10:06:44.208092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.692 [2024-12-11 10:06:44.208105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.692 [2024-12-11 10:06:44.208112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.692 [2024-12-11 10:06:44.208117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.692 [2024-12-11 10:06:44.208131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.692 qpair failed and we were unable to recover it. 00:28:34.692 [2024-12-11 10:06:44.218094] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.692 [2024-12-11 10:06:44.218162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.692 [2024-12-11 10:06:44.218176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.692 [2024-12-11 10:06:44.218182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.692 [2024-12-11 10:06:44.218188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.692 [2024-12-11 10:06:44.218202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.692 qpair failed and we were unable to recover it. 
00:28:34.692 [2024-12-11 10:06:44.228101] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.692 [2024-12-11 10:06:44.228158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.692 [2024-12-11 10:06:44.228171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.692 [2024-12-11 10:06:44.228178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.692 [2024-12-11 10:06:44.228185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.692 [2024-12-11 10:06:44.228199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.692 qpair failed and we were unable to recover it. 00:28:34.692 [2024-12-11 10:06:44.238126] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.692 [2024-12-11 10:06:44.238184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.692 [2024-12-11 10:06:44.238198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.692 [2024-12-11 10:06:44.238205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.692 [2024-12-11 10:06:44.238211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.692 [2024-12-11 10:06:44.238228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.692 qpair failed and we were unable to recover it. 00:28:34.692 [2024-12-11 10:06:44.248180] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.693 [2024-12-11 10:06:44.248237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.693 [2024-12-11 10:06:44.248251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.693 [2024-12-11 10:06:44.248258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.693 [2024-12-11 10:06:44.248264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.693 [2024-12-11 10:06:44.248278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.693 qpair failed and we were unable to recover it. 
00:28:34.693 [2024-12-11 10:06:44.258118] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.693 [2024-12-11 10:06:44.258171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.693 [2024-12-11 10:06:44.258186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.693 [2024-12-11 10:06:44.258192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.693 [2024-12-11 10:06:44.258199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.693 [2024-12-11 10:06:44.258213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.693 qpair failed and we were unable to recover it. 00:28:34.952 [2024-12-11 10:06:44.268215] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.952 [2024-12-11 10:06:44.268315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.952 [2024-12-11 10:06:44.268332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.952 [2024-12-11 10:06:44.268339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.952 [2024-12-11 10:06:44.268345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.952 [2024-12-11 10:06:44.268362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.952 qpair failed and we were unable to recover it. 00:28:34.953 [2024-12-11 10:06:44.278239] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.953 [2024-12-11 10:06:44.278295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.953 [2024-12-11 10:06:44.278311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.953 [2024-12-11 10:06:44.278322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.953 [2024-12-11 10:06:44.278327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.953 [2024-12-11 10:06:44.278343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.953 qpair failed and we were unable to recover it. 
00:28:34.953 [2024-12-11 10:06:44.288289] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.953 [2024-12-11 10:06:44.288356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.953 [2024-12-11 10:06:44.288370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.953 [2024-12-11 10:06:44.288377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.953 [2024-12-11 10:06:44.288382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.953 [2024-12-11 10:06:44.288396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.953 qpair failed and we were unable to recover it. 00:28:34.953 [2024-12-11 10:06:44.298357] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.953 [2024-12-11 10:06:44.298441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.953 [2024-12-11 10:06:44.298456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.953 [2024-12-11 10:06:44.298463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.953 [2024-12-11 10:06:44.298469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.953 [2024-12-11 10:06:44.298484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.953 qpair failed and we were unable to recover it. 00:28:34.953 [2024-12-11 10:06:44.308333] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.953 [2024-12-11 10:06:44.308381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.953 [2024-12-11 10:06:44.308394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.953 [2024-12-11 10:06:44.308401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.953 [2024-12-11 10:06:44.308406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.953 [2024-12-11 10:06:44.308420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.953 qpair failed and we were unable to recover it. 
00:28:34.953 [2024-12-11 10:06:44.318370] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.953 [2024-12-11 10:06:44.318439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.953 [2024-12-11 10:06:44.318453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.953 [2024-12-11 10:06:44.318460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.953 [2024-12-11 10:06:44.318466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.953 [2024-12-11 10:06:44.318481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.953 qpair failed and we were unable to recover it. 00:28:34.953 [2024-12-11 10:06:44.328432] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.953 [2024-12-11 10:06:44.328520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.953 [2024-12-11 10:06:44.328533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.953 [2024-12-11 10:06:44.328540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.953 [2024-12-11 10:06:44.328546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.953 [2024-12-11 10:06:44.328560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.953 qpair failed and we were unable to recover it. 00:28:34.953 [2024-12-11 10:06:44.338368] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.953 [2024-12-11 10:06:44.338433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.953 [2024-12-11 10:06:44.338448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.953 [2024-12-11 10:06:44.338455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.953 [2024-12-11 10:06:44.338461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.953 [2024-12-11 10:06:44.338476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.953 qpair failed and we were unable to recover it. 
00:28:34.953 [2024-12-11 10:06:44.348382] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.953 [2024-12-11 10:06:44.348441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.953 [2024-12-11 10:06:44.348455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.953 [2024-12-11 10:06:44.348462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.953 [2024-12-11 10:06:44.348468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.953 [2024-12-11 10:06:44.348482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.953 qpair failed and we were unable to recover it. 00:28:34.953 [2024-12-11 10:06:44.358429] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.953 [2024-12-11 10:06:44.358489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.953 [2024-12-11 10:06:44.358503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.953 [2024-12-11 10:06:44.358510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.953 [2024-12-11 10:06:44.358515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.953 [2024-12-11 10:06:44.358529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.953 qpair failed and we were unable to recover it. 00:28:34.953 [2024-12-11 10:06:44.368429] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.953 [2024-12-11 10:06:44.368490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.953 [2024-12-11 10:06:44.368509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.953 [2024-12-11 10:06:44.368516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.953 [2024-12-11 10:06:44.368522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.953 [2024-12-11 10:06:44.368536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.953 qpair failed and we were unable to recover it. 
00:28:34.953 [2024-12-11 10:06:44.378512] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.953 [2024-12-11 10:06:44.378585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.953 [2024-12-11 10:06:44.378598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.953 [2024-12-11 10:06:44.378605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.953 [2024-12-11 10:06:44.378611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.953 [2024-12-11 10:06:44.378625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.953 qpair failed and we were unable to recover it. 00:28:34.953 [2024-12-11 10:06:44.388539] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.953 [2024-12-11 10:06:44.388602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.953 [2024-12-11 10:06:44.388614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.953 [2024-12-11 10:06:44.388621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.953 [2024-12-11 10:06:44.388627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.953 [2024-12-11 10:06:44.388641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.953 qpair failed and we were unable to recover it. 00:28:34.953 [2024-12-11 10:06:44.398554] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.953 [2024-12-11 10:06:44.398608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.953 [2024-12-11 10:06:44.398621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.953 [2024-12-11 10:06:44.398628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.953 [2024-12-11 10:06:44.398634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.953 [2024-12-11 10:06:44.398648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.953 qpair failed and we were unable to recover it. 
00:28:34.953 [2024-12-11 10:06:44.408538] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.954 [2024-12-11 10:06:44.408632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.954 [2024-12-11 10:06:44.408646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.954 [2024-12-11 10:06:44.408656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.954 [2024-12-11 10:06:44.408662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.954 [2024-12-11 10:06:44.408675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.954 qpair failed and we were unable to recover it. 00:28:34.954 [2024-12-11 10:06:44.418647] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.954 [2024-12-11 10:06:44.418703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.954 [2024-12-11 10:06:44.418716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.954 [2024-12-11 10:06:44.418722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.954 [2024-12-11 10:06:44.418728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.954 [2024-12-11 10:06:44.418742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.954 qpair failed and we were unable to recover it. 00:28:34.954 [2024-12-11 10:06:44.428716] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.954 [2024-12-11 10:06:44.428768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.954 [2024-12-11 10:06:44.428781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.954 [2024-12-11 10:06:44.428788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.954 [2024-12-11 10:06:44.428794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.954 [2024-12-11 10:06:44.428808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.954 qpair failed and we were unable to recover it. 
00:28:34.954 [2024-12-11 10:06:44.438690] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.954 [2024-12-11 10:06:44.438740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.954 [2024-12-11 10:06:44.438753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.954 [2024-12-11 10:06:44.438760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.954 [2024-12-11 10:06:44.438766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.954 [2024-12-11 10:06:44.438779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.954 qpair failed and we were unable to recover it. 00:28:34.954 [2024-12-11 10:06:44.448673] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.954 [2024-12-11 10:06:44.448724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.954 [2024-12-11 10:06:44.448738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.954 [2024-12-11 10:06:44.448745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.954 [2024-12-11 10:06:44.448751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.954 [2024-12-11 10:06:44.448765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.954 qpair failed and we were unable to recover it. 00:28:34.954 [2024-12-11 10:06:44.458774] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.954 [2024-12-11 10:06:44.458838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.954 [2024-12-11 10:06:44.458852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.954 [2024-12-11 10:06:44.458859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.954 [2024-12-11 10:06:44.458865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.954 [2024-12-11 10:06:44.458879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.954 qpair failed and we were unable to recover it. 
00:28:34.954 [2024-12-11 10:06:44.468799] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.954 [2024-12-11 10:06:44.468853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.954 [2024-12-11 10:06:44.468865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.954 [2024-12-11 10:06:44.468872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.954 [2024-12-11 10:06:44.468878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.954 [2024-12-11 10:06:44.468891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.954 qpair failed and we were unable to recover it. 00:28:34.954 [2024-12-11 10:06:44.478846] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.954 [2024-12-11 10:06:44.478895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.954 [2024-12-11 10:06:44.478908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.954 [2024-12-11 10:06:44.478914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.954 [2024-12-11 10:06:44.478920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.954 [2024-12-11 10:06:44.478935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.954 qpair failed and we were unable to recover it. 00:28:34.954 [2024-12-11 10:06:44.488861] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.954 [2024-12-11 10:06:44.488912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.954 [2024-12-11 10:06:44.488925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.954 [2024-12-11 10:06:44.488932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.954 [2024-12-11 10:06:44.488937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.954 [2024-12-11 10:06:44.488951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.954 qpair failed and we were unable to recover it. 
00:28:34.954 [2024-12-11 10:06:44.498902] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.954 [2024-12-11 10:06:44.498959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.954 [2024-12-11 10:06:44.498975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.954 [2024-12-11 10:06:44.498982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.954 [2024-12-11 10:06:44.498988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.954 [2024-12-11 10:06:44.499002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.954 qpair failed and we were unable to recover it. 00:28:34.954 [2024-12-11 10:06:44.508950] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.954 [2024-12-11 10:06:44.509006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.954 [2024-12-11 10:06:44.509019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.954 [2024-12-11 10:06:44.509026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.954 [2024-12-11 10:06:44.509032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.954 [2024-12-11 10:06:44.509046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.954 qpair failed and we were unable to recover it. 00:28:34.954 [2024-12-11 10:06:44.518919] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.954 [2024-12-11 10:06:44.518974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.954 [2024-12-11 10:06:44.518987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.954 [2024-12-11 10:06:44.518994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.954 [2024-12-11 10:06:44.519000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:34.954 [2024-12-11 10:06:44.519014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.954 qpair failed and we were unable to recover it. 
00:28:35.217 [2024-12-11 10:06:44.528980] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.217 [2024-12-11 10:06:44.529059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.217 [2024-12-11 10:06:44.529076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.217 [2024-12-11 10:06:44.529083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.217 [2024-12-11 10:06:44.529089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:35.217 [2024-12-11 10:06:44.529105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:35.217 qpair failed and we were unable to recover it. 00:28:35.217 [2024-12-11 10:06:44.538992] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.217 [2024-12-11 10:06:44.539047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.217 [2024-12-11 10:06:44.539063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.217 [2024-12-11 10:06:44.539073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.217 [2024-12-11 10:06:44.539079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfe4500 00:28:35.217 [2024-12-11 10:06:44.539095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:35.217 qpair failed and we were unable to recover it. 00:28:35.217 [2024-12-11 10:06:44.539250] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:28:35.217 A controller has encountered a failure and is being reset. 00:28:35.217 Controller properly reset. 00:28:35.217 Initializing NVMe Controllers 00:28:35.217 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:35.217 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:35.217 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:28:35.217 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:28:35.217 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:28:35.217 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:28:35.217 Initialization complete. Launching workers. 
00:28:35.217 Starting thread on core 1 00:28:35.217 Starting thread on core 2 00:28:35.217 Starting thread on core 3 00:28:35.217 Starting thread on core 0 00:28:35.217 10:06:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:28:35.217 00:28:35.217 real 0m10.792s 00:28:35.217 user 0m19.215s 00:28:35.217 sys 0m4.694s 00:28:35.217 10:06:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:35.217 10:06:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:35.217 ************************************ 00:28:35.217 END TEST nvmf_target_disconnect_tc2 00:28:35.217 ************************************ 00:28:35.217 10:06:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:28:35.217 10:06:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:28:35.217 10:06:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:28:35.217 10:06:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:35.217 10:06:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:28:35.217 10:06:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:35.217 10:06:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:28:35.217 10:06:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:35.217 10:06:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:35.217 rmmod nvme_tcp 00:28:35.217 rmmod nvme_fabrics 00:28:35.217 rmmod nvme_keyring 00:28:35.217 10:06:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:35.217 10:06:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:28:35.217 10:06:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:28:35.217 10:06:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 250994 ']' 00:28:35.217 10:06:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 250994 00:28:35.217 10:06:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 250994 ']' 00:28:35.217 10:06:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 250994 00:28:35.217 10:06:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:28:35.217 10:06:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:35.217 10:06:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 250994 00:28:35.217 10:06:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:28:35.217 10:06:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:28:35.217 10:06:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 250994' 00:28:35.217 killing process with pid 250994 00:28:35.217 10:06:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@973 -- # kill 250994 00:28:35.217 10:06:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 250994 00:28:35.528 10:06:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:35.528 10:06:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:35.528 10:06:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:35.528 10:06:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:28:35.528 10:06:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:28:35.528 10:06:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:35.528 10:06:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:28:35.528 10:06:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:35.528 10:06:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:35.528 10:06:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:35.528 10:06:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:35.528 10:06:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:37.433 10:06:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:37.692 00:28:37.692 real 0m20.416s 00:28:37.692 user 0m47.064s 00:28:37.692 sys 0m10.298s 00:28:37.692 10:06:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:37.692 10:06:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:37.692 ************************************ 00:28:37.692 END TEST nvmf_target_disconnect 00:28:37.692 ************************************ 00:28:37.692 10:06:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:28:37.692 00:28:37.692 real 6m8.310s 00:28:37.692 user 10m43.023s 00:28:37.692 sys 2m8.997s 00:28:37.692 10:06:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:37.692 10:06:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.692 ************************************ 00:28:37.692 END TEST nvmf_host 00:28:37.692 ************************************ 00:28:37.692 10:06:47 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:28:37.692 10:06:47 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:28:37.692 10:06:47 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:28:37.692 10:06:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:37.692 10:06:47 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:37.692 10:06:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:37.692 ************************************ 00:28:37.692 START TEST nvmf_target_core_interrupt_mode 00:28:37.692 ************************************ 00:28:37.692 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:28:37.692 * Looking for test storage... 00:28:37.692 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:28:37.692 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:37.692 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:28:37.692 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:37.952 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:37.952 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:37.952 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:37.952 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:37.952 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:28:37.952 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:28:37.952 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:28:37.952 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:28:37.952 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:28:37.952 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:28:37.952 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:28:37.952 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:37.952 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:28:37.952 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:28:37.952 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:37.952 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:37.952 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:28:37.952 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:28:37.952 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:37.952 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:28:37.952 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:28:37.952 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:28:37.952 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:28:37.952 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:37.952 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:28:37.952 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:28:37.952 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:37.952 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:37.952 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:28:37.952 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:37.952 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:37.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.952 --rc genhtml_branch_coverage=1 00:28:37.952 --rc genhtml_function_coverage=1 00:28:37.952 --rc genhtml_legend=1 00:28:37.952 --rc geninfo_all_blocks=1 00:28:37.952 --rc geninfo_unexecuted_blocks=1 00:28:37.952 00:28:37.952 ' 00:28:37.952 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:37.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.952 --rc genhtml_branch_coverage=1 00:28:37.952 --rc genhtml_function_coverage=1 00:28:37.952 --rc genhtml_legend=1 00:28:37.952 --rc geninfo_all_blocks=1 00:28:37.952 --rc geninfo_unexecuted_blocks=1 00:28:37.952 00:28:37.952 ' 00:28:37.952 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:37.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.952 --rc genhtml_branch_coverage=1 00:28:37.952 --rc genhtml_function_coverage=1 00:28:37.952 --rc genhtml_legend=1 00:28:37.952 --rc geninfo_all_blocks=1 00:28:37.952 --rc geninfo_unexecuted_blocks=1 00:28:37.952 00:28:37.952 ' 00:28:37.952 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:37.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.952 --rc genhtml_branch_coverage=1 00:28:37.952 --rc genhtml_function_coverage=1 00:28:37.952 --rc genhtml_legend=1 00:28:37.952 --rc geninfo_all_blocks=1 00:28:37.952 --rc geninfo_unexecuted_blocks=1 00:28:37.952 00:28:37.952 ' 00:28:37.952 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:28:37.952 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:28:37.952 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:37.952 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:28:37.952 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:37.952 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:37.952 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:37.952 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:37.952 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:37.952 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:37.952 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:37.952 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:37.952 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:37.952 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:37.952 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:28:37.952 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:28:37.952 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:37.953 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:37.953 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:37.953 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:37.953 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:37.953 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:28:37.953 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:37.953 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:37.953 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:37.953 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.953 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.953 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.953 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:28:37.953 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.953 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:28:37.953 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:37.953 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:37.953 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:37.953 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:37.953 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:37.953 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:37.953 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:37.953 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:37.953 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:37.953 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:37.953 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:28:37.953 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:28:37.953 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:28:37.953 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:28:37.953 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:37.953 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:37.953 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:37.953 ************************************ 00:28:37.953 START TEST nvmf_abort 00:28:37.953 ************************************ 00:28:37.953 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:28:37.953 * Looking for test storage... 00:28:37.953 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:37.953 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:37.953 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:28:37.953 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:37.953 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:37.953 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:37.953 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:37.953 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:37.953 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:28:37.953 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:28:37.953 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:28:37.953 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:28:37.953 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:28:37.953 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:28:37.953 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:28:37.953 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:37.953 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:28:37.953 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:28:37.953 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:37.953 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:38.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:38.213 --rc genhtml_branch_coverage=1 00:28:38.213 --rc genhtml_function_coverage=1 00:28:38.213 --rc genhtml_legend=1 00:28:38.213 --rc geninfo_all_blocks=1 00:28:38.213 --rc geninfo_unexecuted_blocks=1 00:28:38.213 00:28:38.213 ' 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:38.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:38.213 --rc genhtml_branch_coverage=1 00:28:38.213 --rc genhtml_function_coverage=1 00:28:38.213 --rc genhtml_legend=1 00:28:38.213 --rc geninfo_all_blocks=1 00:28:38.213 --rc geninfo_unexecuted_blocks=1 00:28:38.213 00:28:38.213 ' 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:38.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:38.213 --rc genhtml_branch_coverage=1 00:28:38.213 --rc genhtml_function_coverage=1 00:28:38.213 --rc genhtml_legend=1 00:28:38.213 --rc geninfo_all_blocks=1 00:28:38.213 --rc geninfo_unexecuted_blocks=1 00:28:38.213 00:28:38.213 ' 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:38.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:38.213 --rc genhtml_branch_coverage=1 00:28:38.213 --rc genhtml_function_coverage=1 00:28:38.213 --rc genhtml_legend=1 00:28:38.213 --rc geninfo_all_blocks=1 00:28:38.213 --rc geninfo_unexecuted_blocks=1 00:28:38.213 00:28:38.213 ' 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:38.213 10:06:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:28:38.213 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:44.784 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:44.784 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:28:44.784 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:44.784 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:44.784 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:44.784 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:44.784 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:44.784 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:28:44.784 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:44.784 10:06:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:28:44.784 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:28:44.784 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:28:44.784 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:28:44.784 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:28:44.784 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:28:44.784 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:44.784 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:44.784 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:44.784 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:44.784 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:44.784 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:44.784 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:44.784 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:44.784 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:44.784 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:44.784 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:44.784 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:44.784 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:44.784 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:44.784 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:44.784 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:44.784 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:44.784 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:44.784 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:44.784 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:44.784 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:44.784 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
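The trace above is nvmf/common.sh's gather_supported_nvmf_pci_devs: it seeds per-family arrays (e810, x722, mlx) with PCI vendor:device pairs (0x8086:0x159b is an Intel E810 port), keeps only the e810 list on this rig, then walks each matched device and checks its kernel driver (ice here). A standalone sketch of that matching step using plain sysfs; the E810_IDS list and variable names are illustrative assumptions, not the script's own:

  #!/usr/bin/env bash
  # Minimal sketch: match NICs by PCI vendor/device ID the way the trace does.
  E810_IDS=("0x8086:0x1592" "0x8086:0x159b")   # assumed subset of common.sh's ID list
  for dev in /sys/bus/pci/devices/*; do
      id="$(cat "$dev/vendor"):$(cat "$dev/device")"
      for want in "${E810_IDS[@]}"; do
          [[ "$id" == "$want" ]] || continue
          if [[ -e "$dev/driver" ]]; then
              driver=$(basename "$(readlink -f "$dev/driver")")
          else
              driver=unbound
          fi
          echo "Found ${dev##*/} ($id), driver: $driver"
          ls "$dev/net" 2>/dev/null   # net device name(s), e.g. cvl_0_0
      done
  done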
00:28:44.784 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:44.784 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:44.784 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:44.784 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:44.784 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:44.784 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:44.784 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:44.785 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:44.785 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:44.785 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:44.785 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:44.785 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:44.785 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:44.785 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:44.785 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:44.785 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:44.785 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:44.785 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:44.785 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:44.785 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:44.785 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:44.785 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:44.785 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:44.785 Found net devices under 0000:af:00.0: cvl_0_0 00:28:44.785 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:44.785 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:44.785 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:44.785 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:44.785 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:28:44.785 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:44.785 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:44.785 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:44.785 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:44.785 Found net devices under 0000:af:00.1: cvl_0_1 00:28:44.785 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:44.785 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:44.785 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:28:44.785 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:44.785 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:44.785 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:44.785 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:44.785 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:44.785 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:44.785 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:44.785 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:44.785 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:44.785 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:44.785 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:44.785 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:44.785 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:44.785 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:44.785 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:44.785 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:44.785 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:44.785 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:44.785 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:44.785 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:44.785 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:44.785 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:44.785 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:44.785 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:44.785 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:44.785 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:44.785 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:44.785 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:28:44.785 00:28:44.785 --- 10.0.0.2 ping statistics --- 00:28:44.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:44.785 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:28:44.785 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:44.785 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:44.785 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:28:44.785 00:28:44.785 --- 10.0.0.1 ping statistics --- 00:28:44.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:44.785 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:28:44.785 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:44.785 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:28:44.785 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:44.785 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:44.785 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:44.785 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:44.785 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:44.785 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:44.785 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:44.785 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:28:44.785 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:44.785 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:44.785 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:44.785 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=255990 
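What nvmftestinit just did, condensed: with two E810 ports on one host, it moves cvl_0_0 into a fresh namespace cvl_0_0_ns_spdk as the target side (10.0.0.2/24), leaves cvl_0_1 in the root namespace as the initiator side (10.0.0.1/24), punches a tagged iptables hole for TCP/4420, and pings in both directions to prove the link. A hedged replay of that sequence, names and addresses copied from the trace (run as root; it really moves the device):

  # Replay of nvmf_tcp_init from nvmf/common.sh, as traced above.
  NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"            # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator side, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  # Open NVMe/TCP port 4420, tagged so teardown's iptr can strip it again:
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                         # root ns -> target ns
  ip netns exec "$NS" ping -c 1 10.0.0.1     # target ns -> root ns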
00:28:44.785 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:28:44.785 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 255990 00:28:44.785 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 255990 ']' 00:28:44.785 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:44.785 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:44.785 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:44.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:44.785 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:44.785 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:44.785 [2024-12-11 10:06:54.339148] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:44.785 [2024-12-11 10:06:54.340034] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:28:44.785 [2024-12-11 10:06:54.340066] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:45.045 [2024-12-11 10:06:54.423200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:45.045 [2024-12-11 10:06:54.464315] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:45.045 [2024-12-11 10:06:54.464350] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:45.045 [2024-12-11 10:06:54.464356] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:45.045 [2024-12-11 10:06:54.464365] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:45.045 [2024-12-11 10:06:54.464370] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:45.045 [2024-12-11 10:06:54.465689] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:45.045 [2024-12-11 10:06:54.465718] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:45.045 [2024-12-11 10:06:54.465719] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:45.045 [2024-12-11 10:06:54.532545] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:45.045 [2024-12-11 10:06:54.533356] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:45.045 [2024-12-11 10:06:54.533549] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
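The nvmf_tgt launch above decodes as: run inside the target namespace, shared-memory id 0 (-i 0), all tracepoint groups enabled (-e 0xFFFF), reactors sleeping on event fds instead of busy-polling (--interrupt-mode), and core mask 0xE = 0b1110, i.e. the three reactors on cores 1-3 reported by the NOTICEs here and on the next lines. A sketch of that launch plus a simplified stand-in for waitforlisten, which in the real script polls the RPC socket until the target answers; SPDK_ROOT is an assumed shorthand for the workspace path used throughout this log:

  SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk "$SPDK_ROOT/build/bin/nvmf_tgt" \
      -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
  nvmfpid=$!
  # Simplified waitforlisten: block until the RPC socket accepts a request.
  until "$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods \
          >/dev/null 2>&1; do
      sleep 0.1
  done
  echo "nvmf_tgt up as pid $nvmfpid"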
00:28:45.045 [2024-12-11 10:06:54.533712] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:45.045 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:45.045 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:28:45.045 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:45.045 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:45.045 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:45.045 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:45.045 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:28:45.045 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.045 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:45.045 [2024-12-11 10:06:54.594641] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:45.045 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.045 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:28:45.045 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.045 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:45.304 Malloc0 00:28:45.304 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.304 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:45.304 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.304 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:45.304 Delay0 00:28:45.304 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.304 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:45.304 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.304 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:45.304 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.304 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:28:45.304 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
00:28:45.304 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:45.304 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.304 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:45.304 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.304 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:45.304 [2024-12-11 10:06:54.690504] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:45.304 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.304 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:45.304 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.304 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:45.304 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.304 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:28:45.304 [2024-12-11 10:06:54.859380] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:28:47.836 Initializing NVMe Controllers 00:28:47.836 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:28:47.836 controller IO queue size 128 less than required 00:28:47.836 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:28:47.836 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:28:47.836 Initialization complete. Launching workers. 
00:28:47.836 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37864 00:28:47.836 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37921, failed to submit 66 00:28:47.836 success 37864, unsuccessful 57, failed 0 00:28:47.836 10:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:47.836 10:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.836 10:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:47.837 10:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.837 10:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:28:47.837 10:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:28:47.837 10:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:47.837 10:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:28:47.837 10:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:47.837 10:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:28:47.837 10:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:47.837 10:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:47.837 rmmod nvme_tcp 00:28:47.837 rmmod nvme_fabrics 00:28:47.837 rmmod nvme_keyring 00:28:47.837 10:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:47.837 10:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:28:47.837 10:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:28:47.837 10:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 255990 ']' 00:28:47.837 10:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 255990 00:28:47.837 10:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 255990 ']' 00:28:47.837 10:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 255990 00:28:47.837 10:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:28:47.837 10:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:47.837 10:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 255990 00:28:47.837 10:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:47.837 10:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:47.837 10:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 255990' 00:28:47.837 killing process with pid 255990 00:28:47.837 
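Everything abort.sh just drove through rpc_cmd, gathered into the equivalent direct rpc.py calls, followed by the abort workload itself. The delay bdev's 1,000,000 us (1 s) read/write latencies are what keep 128 I/Os in flight long enough to abort; in the tallies above, an I/O counted as failed (37864) is one the racing abort command cancelled, matching "success 37864" on the abort side. A hedged reconstruction (RPC is an illustrative shorthand; rpc_cmd resolves to the same script):

  RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC nvmf_create_transport -t tcp -o -u 8192 -a 256      # transport opts exactly as traced
  $RPC bdev_malloc_create 64 4096 -b Malloc0               # 64 MiB bdev, 4 KiB blocks
  $RPC bdev_delay_create -b Malloc0 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000          # 1 s avg/p99 latency, in usec
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # The workload: one core, queue depth 128, 1 s runtime, racing aborts.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -c 0x1 -t 1 -l warning -q 128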
10:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 255990 00:28:47.837 10:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 255990 00:28:47.837 10:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:47.837 10:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:47.837 10:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:47.837 10:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:28:47.837 10:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:28:47.837 10:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:47.837 10:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:28:47.837 10:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:47.837 10:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:47.837 10:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:47.837 10:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:47.837 10:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:49.743 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:49.743 00:28:49.743 real 0m11.945s 00:28:49.743 user 0m10.675s 00:28:49.743 sys 0m6.294s 00:28:49.743 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:49.743 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:49.743 ************************************ 00:28:49.743 END TEST nvmf_abort 00:28:49.743 ************************************ 00:28:50.002 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:50.002 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:50.002 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:50.002 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:50.002 ************************************ 00:28:50.002 START TEST nvmf_ns_hotplug_stress 00:28:50.002 ************************************ 00:28:50.002 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:50.002 * Looking for test storage... 
00:28:50.002 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:50.002 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:50.002 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:28:50.002 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:50.002 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:50.002 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:50.002 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:50.002 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:50.002 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:28:50.002 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:28:50.002 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:28:50.002 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:28:50.002 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:28:50.002 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:28:50.002 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:28:50.002 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:50.002 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:28:50.002 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:28:50.002 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:50.002 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:50.002 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:28:50.002 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:28:50.002 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:50.002 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:28:50.002 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:28:50.002 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:28:50.002 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:28:50.002 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:50.002 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:28:50.002 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:28:50.002 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:50.002 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:50.002 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:28:50.002 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:50.002 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:50.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:50.002 --rc genhtml_branch_coverage=1 00:28:50.002 --rc genhtml_function_coverage=1 00:28:50.002 --rc genhtml_legend=1 00:28:50.002 --rc geninfo_all_blocks=1 00:28:50.002 --rc geninfo_unexecuted_blocks=1 00:28:50.002 00:28:50.002 ' 00:28:50.002 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:50.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:50.002 --rc genhtml_branch_coverage=1 00:28:50.002 --rc genhtml_function_coverage=1 00:28:50.002 --rc genhtml_legend=1 00:28:50.002 --rc geninfo_all_blocks=1 00:28:50.002 --rc geninfo_unexecuted_blocks=1 00:28:50.002 00:28:50.002 ' 00:28:50.002 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:50.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:50.002 --rc genhtml_branch_coverage=1 00:28:50.002 --rc genhtml_function_coverage=1 00:28:50.002 --rc genhtml_legend=1 00:28:50.003 --rc geninfo_all_blocks=1 00:28:50.003 --rc geninfo_unexecuted_blocks=1 00:28:50.003 00:28:50.003 ' 00:28:50.003 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:50.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:50.003 --rc genhtml_branch_coverage=1 00:28:50.003 --rc genhtml_function_coverage=1 
00:28:50.003 --rc genhtml_legend=1 00:28:50.003 --rc geninfo_all_blocks=1 00:28:50.003 --rc geninfo_unexecuted_blocks=1 00:28:50.003 00:28:50.003 ' 00:28:50.003 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:50.003 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:28:50.003 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:50.003 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:50.003 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:50.003 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:50.003 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:50.003 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:50.003 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:50.003 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:50.003 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:50.003 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:50.003 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:28:50.003 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:28:50.003 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:50.003 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:50.003 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:50.003 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:50.003 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:50.003 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:28:50.262 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:50.262 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:50.262 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:28:50.262 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.262 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.262 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.262 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:28:50.263 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.263 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:28:50.263 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:50.263 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:50.263 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:50.263 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:50.263 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:50.263 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:50.263 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:50.263 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:50.263 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:50.263 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:50.263 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:50.263 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:28:50.263 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:50.263 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:50.263 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:50.263 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:50.263 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:50.263 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:50.263 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:50.263 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:50.263 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:50.263 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:50.263 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:28:50.263 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:56.832 10:07:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:56.832 10:07:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:56.832 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:56.832 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:56.832 
10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:56.832 Found net devices under 0000:af:00.0: cvl_0_0 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:56.832 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:56.833 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:56.833 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:56.833 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:56.833 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:56.833 Found net devices under 0000:af:00.1: cvl_0_1 00:28:56.833 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:56.833 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:56.833 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:28:56.833 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:56.833 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:56.833 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:56.833 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:56.833 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:56.833 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:56.833 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:56.833 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:56.833 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:56.833 10:07:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:56.833 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:56.833 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:56.833 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:56.833 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:56.833 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:56.833 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:56.833 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:56.833 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:56.833 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:56.833 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:56.833 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:56.833 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:56.833 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:56.833 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:56.833 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:56.833 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:56.833 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:56.833 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:28:56.833 00:28:56.833 --- 10.0.0.2 ping statistics --- 00:28:56.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:56.833 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:28:56.833 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:56.833 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:56.833 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:28:56.833 00:28:56.833 --- 10.0.0.1 ping statistics --- 00:28:56.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:56.833 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:28:56.833 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:56.833 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:28:56.833 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:56.833 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:56.833 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:56.833 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:56.833 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:56.833 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:56.833 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:56.833 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:28:56.833 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:56.833 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:56.833 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:56.833 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=260442 00:28:56.833 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:28:56.833 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 260442 00:28:56.833 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 260442 ']' 00:28:56.833 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:56.833 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:56.833 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:56.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:56.833 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:56.833 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:56.833 [2024-12-11 10:07:06.343887] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:56.833 [2024-12-11 10:07:06.344819] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:28:56.833 [2024-12-11 10:07:06.344855] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:57.093 [2024-12-11 10:07:06.427582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:57.093 [2024-12-11 10:07:06.467199] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:57.093 [2024-12-11 10:07:06.467237] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:57.093 [2024-12-11 10:07:06.467245] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:57.093 [2024-12-11 10:07:06.467250] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:57.093 [2024-12-11 10:07:06.467255] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:57.093 [2024-12-11 10:07:06.468557] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:57.093 [2024-12-11 10:07:06.468661] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:57.093 [2024-12-11 10:07:06.468662] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:57.093 [2024-12-11 10:07:06.536509] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:57.093 [2024-12-11 10:07:06.537268] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:57.093 [2024-12-11 10:07:06.537649] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:57.093 [2024-12-11 10:07:06.537760] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:28:57.093 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:57.093 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:28:57.093 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:57.093 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:57.093 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:57.093 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:57.093 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:28:57.093 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:57.351 [2024-12-11 10:07:06.769370] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:57.352 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:57.611 10:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:57.611 [2024-12-11 10:07:07.165737] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:57.869 10:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:57.869 10:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:28:58.128 Malloc0 00:28:58.128 10:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:58.387 Delay0 00:28:58.387 10:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:58.645 10:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:28:58.645 NULL1 00:28:58.645 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
00:28:58.904 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=260720 00:28:58.904 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:28:58.904 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 260720 00:28:58.904 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:59.162 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:59.162 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:28:59.162 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:28:59.420 true 00:28:59.420 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 260720 00:28:59.420 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:59.678 10:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:59.936 10:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:28:59.936 10:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:29:00.193 true 00:29:00.193 10:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 260720 00:29:00.193 10:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:01.127 Read completed with error (sct=0, sc=11) 00:29:01.127 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:01.127 10:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:01.127 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:01.127 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:01.127 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:01.127 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:29:01.385 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:01.385 10:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:29:01.385 10:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:29:01.644 true 00:29:01.644 10:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 260720 00:29:01.644 10:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:02.578 10:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:02.579 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:02.579 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:29:02.579 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:29:02.837 true 00:29:02.837 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 260720 00:29:02.837 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:03.095 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:03.095 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:29:03.095 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:29:03.353 true 00:29:03.354 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 260720 00:29:03.354 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:04.729 10:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:04.729 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:04.729 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:29:04.729 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:29:04.729 true 00:29:04.729 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 260720 00:29:04.729 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:04.987 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:05.246 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:29:05.246 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:29:05.504 true 00:29:05.504 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 260720 00:29:05.504 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:06.878 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:06.878 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:06.878 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:06.878 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:29:06.878 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:29:07.137 true 00:29:07.137 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 260720 00:29:07.137 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:07.137 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:07.396 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:29:07.396 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:29:07.655 true 00:29:07.655 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 260720 00:29:07.655 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:08.591 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:08.591 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:08.849 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:08.849 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:08.849 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:08.849 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:08.849 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:08.849 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:29:08.849 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:29:09.108 true 00:29:09.108 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 260720 00:29:09.108 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:10.043 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:10.043 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:10.043 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:10.043 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:29:10.043 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:29:10.301 true 00:29:10.301 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 260720 00:29:10.301 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:10.559 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:10.817 10:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:29:10.817 10:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:29:11.076 true 00:29:11.076 10:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 260720 00:29:11.076 10:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:12.010 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:12.010 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:12.010 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:12.010 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:12.268 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:12.268 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:12.268 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:12.268 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:12.268 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:29:12.268 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:29:12.525 true 00:29:12.525 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 260720 00:29:12.525 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:13.459 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:13.459 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:29:13.459 10:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:29:13.718 true 00:29:13.718 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 260720 00:29:13.718 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:13.976 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:14.235 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:29:14.235 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:29:14.235 true 00:29:14.235 10:07:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 260720
00:29:14.235 10:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:15.611 10:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:15.611 10:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:29:15.611 10:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:29:15.611 true
00:29:15.611 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 260720
00:29:15.611 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:15.868 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:16.126 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:29:16.126 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:29:16.384 true
00:29:16.384 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 260720
00:29:16.384 10:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:17.319 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:29:17.319 10:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:17.319 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:29:17.319 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:29:17.319 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:29:17.577 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:29:17.577 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:29:17.577 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:29:17.577 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018
00:29:17.577 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:29:17.835 true
00:29:17.835 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 260720
00:29:17.835 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:18.770 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:18.770 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:29:18.770 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:29:19.028 true
00:29:19.028 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 260720
00:29:19.028 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:19.287 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:19.287 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020
00:29:19.287 10:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:29:19.545 true
00:29:19.545 10:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 260720
00:29:19.545 10:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:20.921 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:29:20.921 10:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:20.921 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:29:20.921 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:29:20.921 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:29:20.921 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:29:20.921 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:29:20.921 10:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:29:20.921 10:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:29:21.216 true
00:29:21.216 10:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 260720
00:29:21.216 10:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:21.805 10:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:22.063 10:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022
00:29:22.063 10:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
00:29:22.321 true
00:29:22.321 10:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 260720
00:29:22.321 10:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:22.580 10:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:22.838 10:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:29:22.838 10:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:29:22.838 true
00:29:22.838 10:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 260720
00:29:22.838 10:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:24.213 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:29:24.213 10:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:24.213 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:29:24.213 10:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:29:24.213 10:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:29:24.213 true
00:29:24.471 10:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 260720
00:29:24.471 10:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:24.471 10:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:24.730 10:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:29:24.730 10:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:29:24.988 true
00:29:24.988 10:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 260720
00:29:24.988 10:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:26.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:29:26.362 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:26.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:29:26.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:29:26.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:29:26.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:29:26.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:29:26.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:29:26.362 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:29:26.362 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:29:26.621 true
00:29:26.621 10:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 260720
00:29:26.621 10:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:27.555 10:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:27.555 10:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:29:27.555 10:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:29:27.814 true
00:29:27.814 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 260720
00:29:27.814 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:28.073 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:28.073 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:29:28.073 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:29:28.331 true
00:29:28.331 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 260720
00:29:28.331 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:29.266 Initializing NVMe Controllers
00:29:29.266 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:29.266 Controller IO queue size 128, less than required.
00:29:29.266 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:29.266 Controller IO queue size 128, less than required.
00:29:29.266 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:29.266 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:29:29.266 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:29:29.266 Initialization complete. Launching workers.
00:29:29.266 ========================================================
00:29:29.266                                                              Latency(us)
00:29:29.266 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:29:29.266 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1688.93       0.82   48645.01    2724.79 1017370.91
00:29:29.266 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   17103.93       8.35    7465.25    1567.95  444444.04
00:29:29.266 ========================================================
00:29:29.266 Total                                                                  :   18792.87       9.18   11166.11    1567.95 1017370.91
00:29:29.266
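The Total row in the latency table above is the IOPS-weighted mean of the two namespace rows, not a plain average. Checking it from the printed figures alone:

    \bar{L}_{total} = \frac{1688.93 \cdot 48645.01 + 17103.93 \cdot 7465.25}{1688.93 + 17103.93} \approx 11166.1\ \mu s

which reproduces the printed 11166.11 (and the IOPS sum 1688.93 + 17103.93 = 18792.86 matches the Total's 18792.87 up to rounding). Dividing the MiB/s column by IOPS gives roughly 509-512 bytes per I/O on both rows, consistent with, though never stated in, a 512-byte I/O workload.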
00:29:29.266 10:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:29.524 10:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:29:29.524 10:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:29:29.782 true
00:29:29.782 10:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 260720
00:29:29.782 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (260720) - No such process
00:29:29.782 10:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 260720
00:29:30.041 10:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:30.041 10:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:29:30.041 10:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:29:30.041 10:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:29:30.041 10:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:29:30.041 10:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:29:30.041 10:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:29:30.299 null0
00:29:30.299 10:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:29:30.299 10:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:29:30.299 10:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:29:30.557 null1
00:29:30.557 10:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:29:30.557 10:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:29:30.557 10:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:29:30.815 null2
00:29:30.815 10:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:29:30.815 10:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:29:30.815 10:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:29:30.815 null3
00:29:31.073 10:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:29:31.073 10:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:29:31.073 10:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:29:31.073 null4
00:29:31.073 10:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:29:31.073 10:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:29:31.073 10:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:29:31.331 null5
00:29:31.331 10:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:29:31.331 10:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:29:31.331 10:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:29:31.588 null6
00:29:31.588 10:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:29:31.588 10:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:29:31.588 10:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:29:31.588 null7
00:29:31.846 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:29:31.846 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:29:31.846 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:29:31.846 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:29:31.846 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:29:31.846 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:29:31.846 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:29:31.846 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:29:31.846 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:29:31.846 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:31.846 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:29:31.846 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:29:31.846 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:29:31.846 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:29:31.846 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:29:31.846 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:29:31.846 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:29:31.846 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:29:31.846 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:31.846 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:29:31.846 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
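The @58-@66 and @14-@17 markers in the xtrace above all point into ns_hotplug_stress.sh: lines 58-60 create eight null bdevs, lines 62-64 launch eight add_remove workers in the background, and line 66 waits on them (the wait 266008 266010 ... entry a little further on lists the worker PIDs). A minimal sketch of that section, reconstructed only from the xtrace visible here; the real script's surrounding setup may differ:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    add_remove() {
        # lines 14-18: repeatedly attach bdev $2 as namespace $1, then detach it
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            $rpc_py nvmf_subsystem_add_ns -n $nsid nqn.2016-06.io.spdk:cnode1 $bdev
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 $nsid
        done
    }

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        $rpc_py bdev_null_create null$i 100 4096   # 100 MB null bdev, 4096-byte blocks
    done
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) null$i &             # one worker per namespace ID 1-8
        pids+=($!)
    done
    wait ${pids[@]}

Because all eight workers target the same subsystem, their add/remove RPCs interleave arbitrarily, which is exactly the interleaving visible in the trace that follows.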
00:29:31.846 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:31.846 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:29:31.846 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:31.846 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:29:31.846 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:31.846 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:31.846 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:31.846 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:31.846 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:31.846 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:29:31.847 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:31.847 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:29:31.847 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:31.847 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:31.847 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:31.847 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
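The first half of this excerpt came from an earlier phase of the same script: the @44-@50 markers show a loop that, while the background I/O generator (PID 260720, whose final latency table appears above) stayed alive, cycled namespace 1 and stepped the NULL1 bdev's size up by one unit per pass, ending with the kill: (260720) - No such process line and the wait at line 53. A sketch reconstructed from that trace alone; $io_pid is a stand-in, since how the PID is captured and the starting null_size lie outside this excerpt:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    while kill -0 $io_pid; do                                            # line 44
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # line 45
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # line 46
        null_size=$((null_size + 1))                                     # line 49: xtrace prints this post-expansion,
                                                                         # hence the literal 1016, 1017, ... in the log
        $rpc_py bdev_null_resize NULL1 $null_size                        # line 50
    done
    wait $io_pid                                                         # line 53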
00:29:31.847 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:29:31.847 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:31.847 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:31.847 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:29:31.847 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:31.847 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:31.847 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:31.847 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:31.847 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:31.847 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:29:31.847 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:31.847 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:29:31.847 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:31.847 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:31.847 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:31.847 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:31.847 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:29:31.847 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:31.847 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:31.847 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:29:31.847 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:31.847 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:31.847 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:31.847 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:31.847 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:29:31.847 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:31.847 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:31.847 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 266008 266010 266011 266013 266015 266017 266019 266021 00:29:31.847 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:29:31.847 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:31.847 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:31.847 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:31.847 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:31.847 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:31.847 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:31.847 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:31.847 10:07:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:31.847 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:31.847 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:31.847 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:32.105 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.105 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.105 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:32.105 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.105 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.105 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:32.105 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.105 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.105 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:32.105 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.105 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.105 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:32.105 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.105 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.105 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:29:32.105 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.105 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.105 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.105 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:32.105 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.105 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:32.105 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.105 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.105 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:32.364 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:32.364 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:32.364 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:32.364 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:32.364 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:32.364 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:32.364 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:32.364 10:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 
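With eight workers adding and removing namespaces concurrently, the namespace map of cnode1 churns many times per second. One way to watch it from the side, assuming the standard nvmf_get_subsystems RPC and jq (neither is used by the test itself):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    while sleep 1; do
        # print the set of namespace IDs attached at this instant, as CSV
        $rpc_py nvmf_get_subsystems |
            jq -r '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1") | [.namespaces[].nsid] | @csv'
    done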
00:29:32.622 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.622 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.622 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.622 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:32.622 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.622 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:32.622 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.622 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.622 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:32.622 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.622 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.622 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:32.622 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.622 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.622 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:32.622 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.622 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.622 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:32.622 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.622 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.622 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:32.622 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.622 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.622 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:32.881 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:32.881 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:32.881 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:32.881 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:32.881 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:32.881 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:32.881 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:32.881 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:32.881 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.881 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.881 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:32.881 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.881 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.881 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.881 10:07:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.881 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:32.881 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:32.881 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.881 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.881 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:32.881 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.881 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.881 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:32.881 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.881 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.881 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.881 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.881 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:32.881 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:33.140 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.140 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.140 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:33.140 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:33.140 10:07:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:33.140 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:33.140 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:33.140 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:33.140 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:33.140 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:33.140 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:33.398 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.398 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.398 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:33.398 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.398 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.398 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:33.398 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.398 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.398 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:33.398 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.398 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.398 10:07:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:33.398 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.398 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.398 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:33.398 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.398 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.398 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:33.398 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.398 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.398 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:33.398 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.398 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.398 10:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:33.657 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:33.657 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:33.657 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:33.657 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:33.657 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:33.657 
10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:33.657 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:33.657 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:33.916 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.916 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.916 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:33.916 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.916 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.916 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:33.916 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.916 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.916 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:33.916 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.916 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.916 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:33.916 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.916 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.916 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.916 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:33.916 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.916 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:33.916 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.916 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.916 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:33.916 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.916 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.916 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:33.916 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:33.916 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:33.916 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:33.916 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:33.916 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:33.916 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:33.916 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:33.916 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:34.175 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.175 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:29:34.175 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:34.175 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.175 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.175 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:34.175 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.175 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.175 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:34.175 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.175 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.175 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:34.175 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.175 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.175 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.175 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.175 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:34.175 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:34.175 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.175 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.175 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:34.175 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.175 
10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.175 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:34.434 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:34.434 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:34.434 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:34.434 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:34.434 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:34.434 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:34.434 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:34.434 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:34.693 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.693 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.693 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:34.693 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.693 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.693 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:34.693 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.693 10:07:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.693 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:34.693 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.693 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.693 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:34.693 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.693 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.693 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:34.693 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.693 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.693 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:34.693 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.693 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.693 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:34.693 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.693 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.693 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:34.952 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:34.952 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:34.952 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:34.952 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:34.952 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:34.952 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:34.952 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:34.952 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:34.952 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.952 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.952 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:34.952 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.952 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.952 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:34.952 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.952 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.952 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:34.952 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.952 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.952 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:34.952 10:07:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.952 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.952 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:34.952 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.952 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.952 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:34.952 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.952 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.952 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:34.952 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.952 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.952 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:35.211 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:35.211 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:35.211 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:35.211 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:35.211 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:35.211 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:35.211 
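The interleaved nvmf_subsystem_add_ns / nvmf_subsystem_remove_ns calls above are the hotplug churn from target/ns_hotplug_stress.sh, traced with xtrace at script lines @16-@18. A minimal sketch of that loop, assuming the bound of 10 visible in the (( i < 10 )) tests; the background dispatch (& / wait) is an assumption inferred from the shuffled completion order in the log, not a copy of the real script:

    # Sketch only: loop bound, RPC names, NQN and null bdev names are taken
    # from the trace above; the parallel fan-out is an assumption.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    for (( i = 0; i < 10; ++i )); do            # ns_hotplug_stress.sh@16
        for n in {1..8}; do
            # nsid n is backed by bdev null$((n - 1)) (null0..null7)
            $rpc nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))" &   # @17
        done
        wait
        for n in {1..8}; do
            $rpc nvmf_subsystem_remove_ns "$nqn" "$n" &                    # @18
        done
        wait
    done

Dispatching the adds and removes concurrently is what exercises the hotplug paths: the target sees namespace attach and detach races rather than an orderly sequence.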
10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:35.211 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:35.470 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.470 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.470 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:35.470 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.470 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.470 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:35.470 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.470 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.470 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:35.470 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.470 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.470 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:35.470 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.470 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.470 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:35.470 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.470 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.470 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:35.470 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.470 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.470 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:35.470 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.470 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.470 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:35.728 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:35.728 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:35.728 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:35.728 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:35.728 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:35.728 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:35.728 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:35.728 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:35.987 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.987 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.987 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.987 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.987 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.987 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.987 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.987 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.987 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.987 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.987 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.987 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.987 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.987 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.987 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.987 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.987 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:29:35.987 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:29:35.987 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:35.987 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:29:35.987 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:35.987 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:29:35.987 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:35.987 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:35.987 rmmod nvme_tcp 00:29:35.987 rmmod nvme_fabrics 00:29:35.987 rmmod nvme_keyring 00:29:35.987 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:35.987 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:29:35.987 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:29:35.987 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 260442 ']' 00:29:35.987 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 260442 00:29:35.987 10:07:45 
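Once the loop counter reaches 10 the trap is cleared and nvmftestfini tears the transport down. The nvmf/common.sh trace (@121-@129) shows the kernel modules being unloaded under set +e with a bounded retry; a sketch reconstructed from those line references, where the per-iteration break/sleep handling is an assumption (the trace only shows the loop header and the modprobe calls):

    # Sketch of the nvmfcleanup unload pattern; the trace shows
    # TEST_TRANSPORT already expanded to tcp in the '[' tcp == tcp ']' test.
    sync                                          # common.sh@121
    if [[ $TEST_TRANSPORT == tcp ]]; then         # @123
        set +e                                    # @124: unload may race with users
        for i in {1..20}; do                      # @125
            modprobe -v -r nvme-tcp &&            # @126: prints the rmmod lines seen above
                modprobe -v -r nvme-fabrics &&    # @127
                break
            sleep 1                               # assumed back-off between retries
        done
        set -e                                    # @128
    fi

The retry loop exists because nvme-tcp cannot be removed while a controller is still tearing down; looping up to 20 times absorbs that shutdown latency.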
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 260442 ']' 00:29:35.987 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 260442 00:29:35.987 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:29:35.987 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:35.987 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 260442 00:29:35.987 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:35.987 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:35.987 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 260442' 00:29:35.987 killing process with pid 260442 00:29:35.987 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 260442 00:29:35.987 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 260442 00:29:36.246 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:36.246 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:36.246 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:36.246 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:29:36.246 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:29:36.246 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:36.246 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:29:36.246 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:36.246 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:36.246 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:36.246 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:36.246 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:38.150 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:38.150 00:29:38.150 real 0m48.324s 00:29:38.150 user 2m58.172s 00:29:38.150 sys 0m19.904s 00:29:38.150 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:38.150 10:07:47 
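killprocess 260442 above walks the guard sequence traced from autotest_common.sh: confirm the PID is set and alive, refuse to signal a bare sudo wrapper, then kill and reap. A sketch following the @954-@978 line references; the default TERM signal is an assumption since the trace shows no explicit signal argument:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1          # @954: empty-arg guard
        kill -0 "$pid" || return 1         # @958: still alive?
        if [ "$(uname)" = Linux ]; then    # @959
            # @960/@964: never kill a 'sudo' wrapper by mistake; here the
            # comm is reactor_1, the SPDK target's reactor thread
            [ "$(ps --no-headers -o comm= "$pid")" != sudo ] || return 1
        fi
        echo "killing process with pid $pid"   # @972
        kill "$pid"                            # @973
        wait "$pid" || true                    # @978: reap our child, ignore status
    }

The wait works because the target was launched from the same shell; it turns the kill into a synchronous shutdown before the iptables and netns cleanup that follows.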
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:38.150 ************************************ 00:29:38.150 END TEST nvmf_ns_hotplug_stress 00:29:38.150 ************************************ 00:29:38.409 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:29:38.409 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:38.409 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:38.409 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:38.409 ************************************ 00:29:38.409 START TEST nvmf_delete_subsystem 00:29:38.409 ************************************ 00:29:38.409 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:29:38.409 * Looking for test storage... 00:29:38.409 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:38.409 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:38.409 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:29:38.409 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:38.409 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:38.409 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:38.409 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:38.409 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:38.409 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:29:38.409 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:29:38.409 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:29:38.409 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:29:38.409 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:29:38.409 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:29:38.409 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:29:38.409 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:38.409 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:29:38.409 10:07:47 
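The END TEST / START TEST banners above come from the run_test harness wrapper, which names, times, and delimits each test script; the version comparison traced around this point continues below. A sketch of the wrapper's shape inferred from the @1105/@1129 trace, where the banner width and the use of the time keyword are assumptions:

    run_test() {
        local name=$1; shift
        (( $# >= 1 )) || return 1           # @1105: need a command to run
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                           # @1129: run the test script, e.g.
                                            # delete_subsystem.sh --transport=tcp --interrupt-mode
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }

The real 0m48.324s / user 2m58.172s / sys 0m19.904s block earlier is exactly this per-test timing being emitted for nvmf_ns_hotplug_stress.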
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:29:38.409 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:38.409 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:38.409 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:29:38.409 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:29:38.409 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:38.409 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:29:38.409 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:29:38.409 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:29:38.409 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:29:38.409 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:38.409 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:29:38.409 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:29:38.409 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:38.409 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:38.409 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:29:38.409 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:38.409 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:38.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:38.409 --rc genhtml_branch_coverage=1 00:29:38.409 --rc genhtml_function_coverage=1 00:29:38.409 --rc genhtml_legend=1 00:29:38.409 --rc geninfo_all_blocks=1 00:29:38.409 --rc geninfo_unexecuted_blocks=1 00:29:38.409 00:29:38.409 ' 00:29:38.409 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:38.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:38.409 --rc genhtml_branch_coverage=1 00:29:38.409 --rc genhtml_function_coverage=1 00:29:38.409 --rc genhtml_legend=1 00:29:38.409 --rc geninfo_all_blocks=1 00:29:38.409 --rc geninfo_unexecuted_blocks=1 00:29:38.409 00:29:38.409 ' 00:29:38.409 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:38.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:38.409 --rc genhtml_branch_coverage=1 00:29:38.409 --rc genhtml_function_coverage=1 00:29:38.409 --rc genhtml_legend=1 00:29:38.409 --rc geninfo_all_blocks=1 00:29:38.409 --rc 
geninfo_unexecuted_blocks=1 00:29:38.409 00:29:38.409 ' 00:29:38.409 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:38.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:38.409 --rc genhtml_branch_coverage=1 00:29:38.409 --rc genhtml_function_coverage=1 00:29:38.409 --rc genhtml_legend=1 00:29:38.409 --rc geninfo_all_blocks=1 00:29:38.409 --rc geninfo_unexecuted_blocks=1 00:29:38.409 00:29:38.409 ' 00:29:38.409 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:38.409 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:29:38.409 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:38.409 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:38.409 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:38.409 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:38.409 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:38.409 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:38.410 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:38.410 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:38.410 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:38.410 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:38.410 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:29:38.410 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:29:38.410 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:38.410 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:38.410 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:38.410 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:38.410 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:38.410 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:29:38.410 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:29:38.410 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:38.410 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:38.410 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.410 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.410 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.410 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:29:38.410 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.410 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:29:38.668 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:38.668 10:07:47 
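The lt 1.15 2 check traced above (used to pick lcov options) is scripts/common.sh's field-wise version comparison. A simplified sketch reconstructed from the @336-@368 references; it handles the strict < and > used here, while the real helper also tracks lt/gt/eq counters for the compound operators:

    decimal() {
        local d=$1
        [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0     # @353-@355
    }
    cmp_versions() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"                  # @336: split on . - :
        IFS=.-: read -ra ver2 <<< "$3"                  # @337
        local op=$2 v ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            ver1[v]=$(decimal "${ver1[v]:-0}")          # missing fields compare as 0
            ver2[v]=$(decimal "${ver2[v]:-0}")
            (( ver1[v] > ver2[v] )) && { [[ $op == '>' ]]; return; }
            (( ver1[v] < ver2[v] )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]] # all fields equal
    }
    lt() { cmp_versions "$1" '<' "$2"; }                # @373

For lcov 1.15 against 2, the first field already decides it (1 < 2), which is why the trace returns 0 after one pass.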
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:38.668 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:38.668 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:38.668 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:38.668 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:38.668 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:38.668 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:38.668 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:38.668 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:38.668 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:29:38.668 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:38.668 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:38.668 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:38.668 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:38.668 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:38.668 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:38.668 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:38.668 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:38.668 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:38.668 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:38.668 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:29:38.668 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:45.237 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:45.237 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:29:45.237 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:45.237 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:45.237 10:07:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:45.237 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:45.237 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:45.237 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:29:45.237 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:45.237 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:29:45.237 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:29:45.237 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:29:45.237 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:29:45.237 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:29:45.237 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:29:45.237 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:45.237 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:45.237 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:45.237 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:45.237 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:45.237 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:45.237 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:45.237 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:45.237 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:45.237 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:45.237 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:45.237 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:45.237 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:45.237 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:45.237 10:07:54 
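NIC discovery here is table-driven: known Intel (e810, x722) and Mellanox device IDs are collected into arrays, then each matching PCI function's net interface is read back from sysfs, which is what produces the "Found 0000:af:00.0 (0x8086 - 0x159b)" and "Found net devices under ..." lines below. A sketch of the walk; pci_bus_cache is an associative array built earlier in common.sh (assumed here) mapping "vendor:device" to PCI addresses:

    intel=0x8086
    # IDs from the @320-@344 trace; expansion is unquoted on purpose so
    # a cache entry with several addresses splits into several elements
    e810+=(${pci_bus_cache["$intel:0x1592"]})
    e810+=(${pci_bus_cache["$intel:0x159b"]})
    pci_devs=("${e810[@]}")
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        # @411: every netdev registered by this PCI function lives here
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # @427: keep interface names only
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")          # @429
    done

With SPDK_TEST_NVMF_NICS=e810, only the two E810 ports (cvl_0_0 and cvl_0_1) survive the filter, and is_hw=yes routes setup to the physical-NIC path rather than veth pairs.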
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:45.237 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:45.237 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:45.237 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:45.237 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:45.237 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:45.238 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:45.238 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:45.238 10:07:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:45.238 Found net devices under 0000:af:00.0: cvl_0_0 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:45.238 Found net devices under 0000:af:00.1: cvl_0_1 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:45.238 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:45.238 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:29:45.238 00:29:45.238 --- 10.0.0.2 ping statistics --- 00:29:45.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:45.238 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:45.238 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:45.238 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:29:45.238 00:29:45.238 --- 10.0.0.1 ping statistics --- 00:29:45.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:45.238 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=270823 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 270823 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 270823 ']' 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:45.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
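For reference, the nvmf_tcp_init sequence traced above (nvmf/common.sh@250-291) condenses to the standalone sketch below. Interface names, addresses, and commands are taken from this run's trace; the surrounding variable plumbing in common.sh is omitted, so treat it as an illustration rather than the script itself.

    #!/usr/bin/env bash
    # Sketch: the target NIC (cvl_0_0) is isolated in its own network namespace,
    # while the initiator NIC (cvl_0_1) stays in the root namespace, giving a
    # real TCP path between target and initiator on a single host.
    set -euo pipefail
    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator IP
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # Admit NVMe/TCP traffic on port 4420; the comment tags the rule so the
    # iptr cleanup helper can strip it again at teardown.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                      # root namespace -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1  # target namespace -> initiator
    modprobe nvme-tcp                       # kernel NVMe/TCP initiator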
00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:45.238 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:45.239 [2024-12-11 10:07:54.766108] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:45.239 [2024-12-11 10:07:54.767015] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:29:45.239 [2024-12-11 10:07:54.767046] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:45.498 [2024-12-11 10:07:54.848492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:45.498 [2024-12-11 10:07:54.885957] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:45.498 [2024-12-11 10:07:54.885994] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:45.498 [2024-12-11 10:07:54.886002] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:45.498 [2024-12-11 10:07:54.886009] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:45.498 [2024-12-11 10:07:54.886013] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:45.498 [2024-12-11 10:07:54.887082] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:45.498 [2024-12-11 10:07:54.887084] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:45.498 [2024-12-11 10:07:54.953596] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:45.498 [2024-12-11 10:07:54.954130] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:45.498 [2024-12-11 10:07:54.954408] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
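The nvmfappstart/waitforlisten step that produced the notices above boils down to launching nvmf_tgt inside the namespace and polling its RPC socket until it answers. A minimal sketch follows; the launch command is verbatim from the trace, while using rpc_get_methods as the liveness probe is an assumption about what waitforlisten does, not quoted from it.

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # -m 0x3: run on cores 0-1; --interrupt-mode: reactors sleep until an event
    # arrives instead of busy-polling (hence the "intr mode" thread notices above).
    ip netns exec cvl_0_0_ns_spdk \
      "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
    nvmfpid=$!
    # Poll /var/tmp/spdk.sock; bail out early if the target died during startup.
    until "$SPDK/scripts/rpc.py" -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" || exit 1
      sleep 0.1
    done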
00:29:45.498 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:45.498 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:29:45.498 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:45.498 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:45.498 10:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:45.498 10:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:45.498 10:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:45.498 10:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.498 10:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:45.498 [2024-12-11 10:07:55.031871] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:45.498 10:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.498 10:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:45.498 10:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.498 10:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:45.498 10:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.498 10:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:45.498 10:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.498 10:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:45.498 [2024-12-11 10:07:55.060205] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:45.498 10:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.498 10:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:29:45.498 10:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.498 10:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:45.757 NULL1 00:29:45.757 10:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.757 10:07:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:45.757 10:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.757 10:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:45.757 Delay0 00:29:45.757 10:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.757 10:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:45.757 10:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.757 10:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:45.757 10:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.757 10:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=270872 00:29:45.757 10:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:29:45.757 10:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:45.757 [2024-12-11 10:07:55.172430] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
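Unwrapped from the rpc_cmd helper, the provisioning just traced (delete_subsystem.sh@15-28) is equivalent to the rpc.py sequence below; only the $RPC shorthand is added here. The delay bdev is the point of the test: with roughly one second of injected latency per operation, the queue is still full of outstanding I/O when the subsystem is deleted out from under it.

    RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_null_create NULL1 1000 512          # 1000 MB null bdev, 512-byte blocks
    # Wrap NULL1 in a delay bdev: ~1 s average and p99 latency for reads and writes.
    $RPC bdev_delay_create -b NULL1 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # Drive 5 s of queued random I/O (70/30 R/W, qd 128) from the root namespace.
    "$SPDK/build/bin/spdk_nvme_perf" -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!

The nvmf_delete_subsystem that follows therefore races a full queue, which is exactly what produces the storm of "completed with error (sct=0, sc=8)" completions traced below.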
00:29:47.657 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:47.657 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.657 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:47.916 Read completed with error (sct=0, sc=8) 00:29:47.916 Write completed with error (sct=0, sc=8) 00:29:47.916 Read completed with error (sct=0, sc=8) 00:29:47.916 starting I/O failed: -6 00:29:47.916 Write completed with error (sct=0, sc=8) 00:29:47.916 Read completed with error (sct=0, sc=8) 00:29:47.916 Read completed with error (sct=0, sc=8) 00:29:47.916 Read completed with error (sct=0, sc=8) 00:29:47.916 starting I/O failed: -6 00:29:47.916 Read completed with error (sct=0, sc=8) 00:29:47.916 Read completed with error (sct=0, sc=8) 00:29:47.916 Read completed with error (sct=0, sc=8) 00:29:47.916 Write completed with error (sct=0, sc=8) 00:29:47.916 starting I/O failed: -6 00:29:47.916 Write completed with error (sct=0, sc=8) 00:29:47.916 Write completed with error (sct=0, sc=8) 00:29:47.916 Read completed with error (sct=0, sc=8) 00:29:47.916 Read completed with error (sct=0, sc=8) 00:29:47.916 starting I/O failed: -6 00:29:47.916 Read completed with error (sct=0, sc=8) 00:29:47.916 Read completed with error (sct=0, sc=8) 00:29:47.916 Read completed with error (sct=0, sc=8) 00:29:47.916 Write completed with error (sct=0, sc=8) 00:29:47.916 starting I/O failed: -6 00:29:47.916 Read completed with error (sct=0, sc=8) 00:29:47.916 Write completed with error (sct=0, sc=8) 00:29:47.916 Read completed with error (sct=0, sc=8) 00:29:47.916 Write completed with error (sct=0, sc=8) 00:29:47.916 starting I/O failed: -6 00:29:47.916 Write completed with error (sct=0, sc=8) 00:29:47.916 Write completed with error (sct=0, sc=8) 00:29:47.916 Write completed with error (sct=0, sc=8) 00:29:47.916 Write completed with error (sct=0, sc=8) 00:29:47.916 starting I/O failed: -6 00:29:47.916 Read completed with error (sct=0, sc=8) 00:29:47.916 Read completed with error (sct=0, sc=8) 00:29:47.916 Read completed with error (sct=0, sc=8) 00:29:47.916 Read completed with error (sct=0, sc=8) 00:29:47.916 starting I/O failed: -6 00:29:47.916 Read completed with error (sct=0, sc=8) 00:29:47.916 Write completed with error (sct=0, sc=8) 00:29:47.916 Read completed with error (sct=0, sc=8) 00:29:47.916 Read completed with error (sct=0, sc=8) 00:29:47.916 starting I/O failed: -6 00:29:47.916 Write completed with error (sct=0, sc=8) 00:29:47.916 Read completed with error (sct=0, sc=8) 00:29:47.916 Write completed with error (sct=0, sc=8) 00:29:47.916 Read completed with error (sct=0, sc=8) 00:29:47.916 starting I/O failed: -6 00:29:47.916 Read completed with error (sct=0, sc=8) 00:29:47.916 Write completed with error (sct=0, sc=8) 00:29:47.916 Write completed with error (sct=0, sc=8) 00:29:47.916 Write completed with error (sct=0, sc=8) 00:29:47.916 Write completed with error (sct=0, sc=8) 00:29:47.916 Read completed with error (sct=0, sc=8) 00:29:47.916 Read completed with error (sct=0, sc=8) 00:29:47.916 Read completed with error (sct=0, sc=8) 00:29:47.916 Read completed with error (sct=0, sc=8) 00:29:47.916 Read completed with error (sct=0, sc=8) 00:29:47.916 Read completed with error (sct=0, sc=8) 00:29:47.916 Read completed with error (sct=0, sc=8) 00:29:47.916 
Read completed with error (sct=0, sc=8) 00:29:47.916 Read completed with error (sct=0, sc=8) 00:29:47.916 Read completed with error (sct=0, sc=8) 00:29:47.916 Read completed with error (sct=0, sc=8) 00:29:47.916 Read completed with error (sct=0, sc=8) 00:29:47.916 Read completed with error (sct=0, sc=8) 00:29:47.916 Read completed with error (sct=0, sc=8) 00:29:47.916 Write completed with error (sct=0, sc=8) 00:29:47.916 Read completed with error (sct=0, sc=8) 00:29:47.916 Read completed with error (sct=0, sc=8) 00:29:47.916 Read completed with error (sct=0, sc=8) 00:29:47.916 Read completed with error (sct=0, sc=8) 00:29:47.916 Write completed with error (sct=0, sc=8) 00:29:47.916 Read completed with error (sct=0, sc=8) 00:29:47.916 Write completed with error (sct=0, sc=8) 00:29:47.916 Read completed with error (sct=0, sc=8) 00:29:47.916 Read completed with error (sct=0, sc=8) 00:29:47.916 Read completed with error (sct=0, sc=8) 00:29:47.916 Write completed with error (sct=0, sc=8) 00:29:47.916 Read completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Write completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Write completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Write completed with error (sct=0, sc=8) 00:29:47.917 Write completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Write completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Write completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Write completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 [2024-12-11 10:07:57.266764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f95780 is same with the state(6) to be set 00:29:47.917 Write completed with error (sct=0, sc=8) 00:29:47.917 starting I/O failed: -6 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 starting I/O failed: -6 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Write completed with error (sct=0, sc=8) 00:29:47.917 Write completed with error (sct=0, sc=8) 00:29:47.917 Write completed with error (sct=0, sc=8) 00:29:47.917 starting I/O failed: -6 00:29:47.917 Write completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 starting I/O failed: -6 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 
00:29:47.917 starting I/O failed: -6 00:29:47.917 Write completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 starting I/O failed: -6 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 starting I/O failed: -6 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Write completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 starting I/O failed: -6 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Write completed with error (sct=0, sc=8) 00:29:47.917 starting I/O failed: -6 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 starting I/O failed: -6 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Write completed with error (sct=0, sc=8) 00:29:47.917 Write completed with error (sct=0, sc=8) 00:29:47.917 starting I/O failed: -6 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Write completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 starting I/O failed: -6 00:29:47.917 [2024-12-11 10:07:57.267135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4410000c80 is same with the state(6) to be set 00:29:47.917 Write completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Write completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Write completed with error (sct=0, sc=8) 00:29:47.917 Write completed with error (sct=0, sc=8) 00:29:47.917 Write completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Write completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Write completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Write completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Write completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Write completed with error (sct=0, sc=8) 00:29:47.917 Write completed with error (sct=0, 
sc=8) 00:29:47.917 Write completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Write completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Write completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Write completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Write completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Write completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Write completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Write completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:47.917 Write completed with error (sct=0, sc=8) 00:29:47.917 Read completed with error (sct=0, sc=8) 00:29:48.853 [2024-12-11 10:07:58.226615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f969b0 is same with the state(6) to be set 00:29:48.853 Write completed with error (sct=0, sc=8) 00:29:48.853 Write completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Write completed with error (sct=0, sc=8) 00:29:48.853 Write completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 [2024-12-11 10:07:58.270990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f95960 is same with the state(6) to be set 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Write completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Write completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Write completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 
Read completed with error (sct=0, sc=8) 00:29:48.853 Write completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Write completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Write completed with error (sct=0, sc=8) 00:29:48.853 Write completed with error (sct=0, sc=8) 00:29:48.853 Write completed with error (sct=0, sc=8) 00:29:48.853 [2024-12-11 10:07:58.271097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f952c0 is same with the state(6) to be set 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Write completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Write completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Write completed with error (sct=0, sc=8) 00:29:48.853 [2024-12-11 10:07:58.271339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f441000d060 is same with the state(6) to be set 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Write completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Write completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Write completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Write completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Write completed with error (sct=0, sc=8) 00:29:48.853 Write completed with error (sct=0, sc=8) 00:29:48.853 Write completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.853 Write completed with error (sct=0, sc=8) 00:29:48.853 Read completed with error (sct=0, sc=8) 00:29:48.854 [2024-12-11 10:07:58.271807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f441000d6c0 is same with the state(6) to be set 00:29:48.854 Initializing NVMe Controllers 00:29:48.854 Attached 
to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:48.854 Controller IO queue size 128, less than required.
00:29:48.854 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:48.854 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:29:48.854 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:29:48.854 Initialization complete. Launching workers.
00:29:48.854 ========================================================
00:29:48.854 Latency(us)
00:29:48.854 Device Information : IOPS MiB/s Average min max
00:29:48.854 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 163.25 0.08 911393.33 312.50 1013196.74
00:29:48.854 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 169.21 0.08 897143.89 276.51 1013325.94
00:29:48.854 ========================================================
00:29:48.854 Total : 332.46 0.16 904141.00 276.51 1013325.94
00:29:48.854
00:29:48.854 [2024-12-11 10:07:58.272369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f969b0 (9): Bad file descriptor
00:29:48.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:29:48.854 10:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:48.854 10:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:29:48.854 10:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 270872
00:29:48.854 10:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:29:49.421 10:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:29:49.421 10:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 270872
00:29:49.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (270872) - No such process
00:29:49.421 10:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 270872
00:29:49.421 10:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:29:49.421 10:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 270872
00:29:49.421 10:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:29:49.421 10:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:29:49.421 10:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:29:49.422 10:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:29:49.422 10:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 270872
00:29:49.422 10:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- #
es=1 00:29:49.422 10:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:49.422 10:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:49.422 10:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:49.422 10:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:49.422 10:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.422 10:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:49.422 10:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.422 10:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:49.422 10:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.422 10:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:49.422 [2024-12-11 10:07:58.804052] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:49.422 10:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.422 10:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:49.422 10:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.422 10:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:49.422 10:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.422 10:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=271433 00:29:49.422 10:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:29:49.422 10:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:49.422 10:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 271433 00:29:49.422 10:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:49.422 [2024-12-11 10:07:58.885404] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
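The sleep 0.5 iterations that follow are the script's bounded wait for this second perf run (pid 271433). Reconstructed from the xtrace (delete_subsystem.sh@56-60) — the exact control flow in the script may differ — the loop is essentially:

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do   # perf still running?
      sleep 0.5
      (( delay++ > 20 )) && exit 1              # >20 half-second polls: declare a hang
    done

This time the subsystem is left in place, so perf finishes its 3-second run cleanly and the loop ends when kill -0 stops succeeding.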
00:29:49.989 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:49.989 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 271433 00:29:49.989 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:50.555 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:50.555 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 271433 00:29:50.555 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:50.813 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:50.813 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 271433 00:29:50.813 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:51.380 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:51.380 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 271433 00:29:51.380 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:51.947 10:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:51.947 10:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 271433 00:29:51.947 10:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:52.532 10:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:52.532 10:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 271433 00:29:52.532 10:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:52.532 Initializing NVMe Controllers 00:29:52.532 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:52.532 Controller IO queue size 128, less than required. 00:29:52.532 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:52.532 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:29:52.532 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:29:52.532 Initialization complete. Launching workers. 
00:29:52.532 ========================================================
00:29:52.532 Latency(us)
00:29:52.532 Device Information : IOPS MiB/s Average min max
00:29:52.532 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003195.51 1000177.16 1042883.23
00:29:52.532 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003670.84 1000231.32 1042461.57
00:29:52.532 ========================================================
00:29:52.532 Total : 256.00 0.12 1003433.18 1000177.16 1042883.23
00:29:52.532
00:29:52.894 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:29:52.894 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 271433
00:29:52.894 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (271433) - No such process
00:29:52.894 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 271433
00:29:52.894 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:29:52.894 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:29:52.894 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:29:52.894 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:29:52.894 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:52.894 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:29:52.894 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:52.894 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:29:52.894 rmmod nvme_tcp
00:29:52.894 rmmod nvme_fabrics
00:29:52.894 rmmod nvme_keyring
00:29:52.894 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:52.894 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:29:52.894 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:29:52.894 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 270823 ']'
00:29:52.894 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 270823
00:29:52.894 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 270823 ']'
00:29:52.894 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 270823
00:29:52.894 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:29:53.155 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:53.155 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 270823 00:29:53.155 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:53.155 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:53.155 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 270823' 00:29:53.155 killing process with pid 270823 00:29:53.155 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 270823 00:29:53.155 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 270823 00:29:53.155 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:53.155 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:53.155 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:53.155 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:29:53.155 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:29:53.155 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:53.155 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:29:53.155 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:53.155 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:53.155 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:53.155 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:53.155 10:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:55.689 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:55.689 00:29:55.689 real 0m16.940s 00:29:55.689 user 0m26.297s 00:29:55.689 sys 0m6.670s 00:29:55.689 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:55.689 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:55.689 ************************************ 00:29:55.689 END TEST nvmf_delete_subsystem 00:29:55.689 ************************************ 00:29:55.689 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:29:55.689 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:55.689 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:29:55.689 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:55.689 ************************************ 00:29:55.689 START TEST nvmf_host_management 00:29:55.689 ************************************ 00:29:55.689 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:29:55.689 * Looking for test storage... 00:29:55.689 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:55.689 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:55.689 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:29:55.689 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:55.689 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:55.689 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:55.689 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:55.689 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:55.689 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:29:55.689 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:29:55.689 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:29:55.689 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:29:55.689 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:29:55.689 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:29:55.689 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:29:55.689 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:55.689 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:29:55.689 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:29:55.689 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:55.689 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:55.689 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:29:55.689 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:29:55.689 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:55.689 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:29:55.689 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:29:55.689 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:29:55.689 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:29:55.689 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:55.689 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:29:55.689 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:29:55.689 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:55.689 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:55.689 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:29:55.689 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:55.689 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:55.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:55.689 --rc genhtml_branch_coverage=1 00:29:55.689 --rc genhtml_function_coverage=1 00:29:55.689 --rc genhtml_legend=1 00:29:55.690 --rc geninfo_all_blocks=1 00:29:55.690 --rc geninfo_unexecuted_blocks=1 00:29:55.690 00:29:55.690 ' 00:29:55.690 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:55.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:55.690 --rc genhtml_branch_coverage=1 00:29:55.690 --rc genhtml_function_coverage=1 00:29:55.690 --rc genhtml_legend=1 00:29:55.690 --rc geninfo_all_blocks=1 00:29:55.690 --rc geninfo_unexecuted_blocks=1 00:29:55.690 00:29:55.690 ' 00:29:55.690 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:55.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:55.690 --rc genhtml_branch_coverage=1 00:29:55.690 --rc genhtml_function_coverage=1 00:29:55.690 --rc genhtml_legend=1 00:29:55.690 --rc geninfo_all_blocks=1 00:29:55.690 --rc geninfo_unexecuted_blocks=1 00:29:55.690 00:29:55.690 ' 00:29:55.690 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:55.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:55.690 --rc genhtml_branch_coverage=1 00:29:55.690 --rc genhtml_function_coverage=1 00:29:55.690 --rc genhtml_legend=1 
00:29:55.690 --rc geninfo_all_blocks=1 00:29:55.690 --rc geninfo_unexecuted_blocks=1 00:29:55.690 00:29:55.690 ' 00:29:55.690 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:55.690 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:29:55.690 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:55.690 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:55.690 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:55.690 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:55.690 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:55.690 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:55.690 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:55.690 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:55.690 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:55.690 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:55.690 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:29:55.690 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:29:55.690 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:55.690 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:55.690 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:55.690 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:55.690 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:55.690 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:29:55.690 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:55.690 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:55.690 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:55.690 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.690 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.690 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.690 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:29:55.690 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.690 10:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:29:55.690 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:55.690 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:55.690 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:55.690 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:55.690 10:08:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:55.690 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:55.690 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:55.690 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:55.690 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:55.690 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:55.690 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:55.690 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:55.690 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:29:55.690 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:55.690 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:55.690 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:55.690 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:55.690 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:55.690 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:55.690 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:55.690 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:55.690 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:55.690 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:55.690 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:29:55.690 10:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:02.258 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:02.258 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:30:02.258 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:02.258 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:02.258 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:02.258 10:08:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:02.258 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:02.258 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:30:02.258 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:02.258 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:30:02.258 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:30:02.258 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:30:02.258 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:30:02.258 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:30:02.258 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:30:02.258 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:02.258 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:02.258 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:02.258 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:02.258 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:02.258 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:02.258 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:02.258 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:02.258 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:02.258 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:02.258 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:02.258 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:02.258 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:02.258 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:02.258 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:02.258 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:02.258 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:02.258 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:02.258 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:02.258 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:02.258 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:02.258 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:02.258 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:02.258 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:02.258 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:02.258 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:02.258 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:02.258 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:02.258 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:02.258 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:02.258 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:02.258 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:02.258 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:02.258 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:02.258 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:02.258 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:02.258 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:02.258 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
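[Editor's note] The device matching traced above classifies each PCI function by vendor/device ID (0x8086:0x159b is an Intel E810-class NIC) and then resolves it to a kernel interface through sysfs, which is where the "Found net devices under ..." lines below come from. A minimal standalone sketch of that resolution step, using one PCI address from this run as an example:

    pci=0000:af:00.0   # example address from this log; substitute your own
    for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
        # each entry under .../net/ is a kernel netdev backed by this PCI function
        [ -e "$netdev" ] && echo "Found net device under $pci: ${netdev##*/}"
    done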
00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:02.259 Found net devices under 0000:af:00.0: cvl_0_0 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:02.259 Found net devices under 0000:af:00.1: cvl_0_1 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:02.259 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:02.259 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:30:02.259 00:30:02.259 --- 10.0.0.2 ping statistics --- 00:30:02.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:02.259 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:02.259 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:02.259 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:30:02.259 00:30:02.259 --- 10.0.0.1 ping statistics --- 00:30:02.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:02.259 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=275825 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 275825 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 275825 ']' 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:02.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:02.259 10:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:02.259 [2024-12-11 10:08:11.760858] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:02.259 [2024-12-11 10:08:11.761785] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:30:02.259 [2024-12-11 10:08:11.761821] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:02.519 [2024-12-11 10:08:11.848170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:02.519 [2024-12-11 10:08:11.891099] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:02.519 [2024-12-11 10:08:11.891133] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:02.519 [2024-12-11 10:08:11.891140] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:02.519 [2024-12-11 10:08:11.891146] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:02.519 [2024-12-11 10:08:11.891152] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:02.519 [2024-12-11 10:08:11.892536] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:30:02.519 [2024-12-11 10:08:11.892644] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:30:02.519 [2024-12-11 10:08:11.892749] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:02.519 [2024-12-11 10:08:11.892750] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:30:02.519 [2024-12-11 10:08:11.960201] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:02.519 [2024-12-11 10:08:11.961343] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:02.519 [2024-12-11 10:08:11.961521] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:02.519 [2024-12-11 10:08:11.961862] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:02.519 [2024-12-11 10:08:11.961898] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
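[Editor's note] Before the target comes up, nvmf_tcp_init (traced further up) splits the two E810 ports across a network namespace so initiator and target traffic cross a real link; the target is then launched inside that namespace with --interrupt-mode, which is why the reactors above report interrupt mode. A condensed replay of the namespace plumbing, using the interface names and addresses from this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                   # sanity-check reachability, as the log does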
00:30:03.087 10:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:03.087 10:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:30:03.087 10:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:03.087 10:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:03.087 10:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:03.087 10:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:03.087 10:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:03.087 10:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.087 10:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:03.087 [2024-12-11 10:08:12.621625] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:03.087 10:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.087 10:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:30:03.087 10:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:03.087 10:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:03.087 10:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:03.087 10:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:30:03.087 10:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:30:03.087 10:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.347 10:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:03.347 Malloc0 00:30:03.347 [2024-12-11 10:08:12.709729] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:03.347 10:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.347 10:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:30:03.347 10:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:03.347 10:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:03.347 10:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=276059 00:30:03.347 10:08:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 276059 /var/tmp/bdevperf.sock 00:30:03.347 10:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 276059 ']' 00:30:03.347 10:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:03.347 10:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:03.347 10:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:30:03.347 10:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:03.347 10:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:03.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:03.347 10:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:30:03.347 10:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:03.347 10:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:30:03.347 10:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:03.347 10:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:03.347 10:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:03.347 { 00:30:03.347 "params": { 00:30:03.347 "name": "Nvme$subsystem", 00:30:03.347 "trtype": "$TEST_TRANSPORT", 00:30:03.347 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:03.347 "adrfam": "ipv4", 00:30:03.347 "trsvcid": "$NVMF_PORT", 00:30:03.347 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:03.347 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:03.347 "hdgst": ${hdgst:-false}, 00:30:03.347 "ddgst": ${ddgst:-false} 00:30:03.347 }, 00:30:03.347 "method": "bdev_nvme_attach_controller" 00:30:03.347 } 00:30:03.347 EOF 00:30:03.347 )") 00:30:03.347 10:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:30:03.347 10:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
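[Editor's note] gen_nvmf_target_json (the heredoc above) renders one bdev_nvme_attach_controller entry per subsystem and hands it to bdevperf over /dev/fd/63. A minimal hand-written equivalent is sketched below; the "params" block is copied from the rendered JSON in this log, while the surrounding "subsystems" wrapper and the /tmp path are assumptions (the wrapper itself is not shown in this excerpt):

    cat > /tmp/nvmf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # same flags as the run above; path is relative to an SPDK checkout
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/nvmf.json \
        -q 64 -o 65536 -w verify -t 10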
00:30:03.347 10:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:30:03.347 10:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:03.347 "params": { 00:30:03.347 "name": "Nvme0", 00:30:03.347 "trtype": "tcp", 00:30:03.347 "traddr": "10.0.0.2", 00:30:03.347 "adrfam": "ipv4", 00:30:03.347 "trsvcid": "4420", 00:30:03.347 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:03.347 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:03.347 "hdgst": false, 00:30:03.347 "ddgst": false 00:30:03.347 }, 00:30:03.347 "method": "bdev_nvme_attach_controller" 00:30:03.347 }' 00:30:03.347 [2024-12-11 10:08:12.806994] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:30:03.347 [2024-12-11 10:08:12.807042] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid276059 ] 00:30:03.347 [2024-12-11 10:08:12.887025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:03.606 [2024-12-11 10:08:12.926861] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:03.865 Running I/O for 10 seconds... 00:30:04.124 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:04.124 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:30:04.124 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:04.124 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.124 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:04.124 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.124 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:04.124 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:30:04.124 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:04.124 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:30:04.124 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:30:04.124 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:30:04.124 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:30:04.124 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:30:04.124 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:30:04.124 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:30:04.124 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.124 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:04.124 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.385 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=835 00:30:04.385 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 835 -ge 100 ']' 00:30:04.385 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:30:04.385 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:30:04.385 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:30:04.385 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:30:04.385 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.385 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:04.385 [2024-12-11 10:08:13.721361] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1282af0 is same with the state(6) to be set 00:30:04.385 [2024-12-11 10:08:13.721402] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1282af0 is same with the state(6) to be set 00:30:04.385 [2024-12-11 10:08:13.721410] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1282af0 is same with the state(6) to be set 00:30:04.385 [2024-12-11 10:08:13.721416] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1282af0 is same with the state(6) to be set 00:30:04.385 [2024-12-11 10:08:13.721423] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1282af0 is same with the state(6) to be set 00:30:04.385 [2024-12-11 10:08:13.721429] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1282af0 is same with the state(6) to be set 00:30:04.385 [2024-12-11 10:08:13.721441] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1282af0 is same with the state(6) to be set 00:30:04.385 [2024-12-11 10:08:13.721447] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1282af0 is same with the state(6) to be set 00:30:04.385 [2024-12-11 10:08:13.721453] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1282af0 is same with the state(6) to be set 00:30:04.385 [2024-12-11 10:08:13.721459] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1282af0 is same with the state(6) to be set 00:30:04.385 [2024-12-11 10:08:13.721465] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1282af0 is same with the state(6) to be set 
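[Editor's note] The waitforio helper traced above polls bdevperf's RPC socket until the Nvme0n1 bdev has completed a minimum number of reads; here read_io_count=835 clears the floor of 100 on the first check, so the loop breaks immediately. A standalone sketch of the same loop, assuming SPDK's scripts/rpc.py is on PATH and a one-second poll interval (the harness's actual wait step is not shown in this excerpt):

    for i in $(seq 10 -1 1); do    # up to 10 attempts, mirroring (( i = 10 ))
        read_io_count=$(rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
            | jq -r '.bdevs[0].num_read_ops')
        [ "$read_io_count" -ge 100 ] && break
        sleep 1                    # assumed interval; adjust as needed
    done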
[2024-12-11 10:08:13.721471] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1282af0 is same with the state(6) to be set [same recv-state message repeated for ~50 consecutive timestamps through 10:08:13.721725] [2024-12-11 10:08:13.721731]
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1282af0 is same with the state(6) to be set 00:30:04.386 [2024-12-11 10:08:13.721737] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1282af0 is same with the state(6) to be set 00:30:04.386 [2024-12-11 10:08:13.721743] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1282af0 is same with the state(6) to be set 00:30:04.386 [2024-12-11 10:08:13.721749] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1282af0 is same with the state(6) to be set 00:30:04.386 [2024-12-11 10:08:13.721755] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1282af0 is same with the state(6) to be set 00:30:04.386 [2024-12-11 10:08:13.721761] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1282af0 is same with the state(6) to be set 00:30:04.386 [2024-12-11 10:08:13.721769] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1282af0 is same with the state(6) to be set 00:30:04.386 [2024-12-11 10:08:13.721775] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1282af0 is same with the state(6) to be set 00:30:04.386 [2024-12-11 10:08:13.721781] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1282af0 is same with the state(6) to be set 00:30:04.386 [2024-12-11 10:08:13.721996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.386 [2024-12-11 10:08:13.722028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.386 [2024-12-11 10:08:13.722045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:114816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.386 [2024-12-11 10:08:13.722053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.386 [2024-12-11 10:08:13.722062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:114944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.386 [2024-12-11 10:08:13.722068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.386 [2024-12-11 10:08:13.722076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:115072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.386 [2024-12-11 10:08:13.722083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.386 [2024-12-11 10:08:13.722091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:115200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.386 [2024-12-11 10:08:13.722097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.386 [2024-12-11 10:08:13.722105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:115328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.386 [2024-12-11 10:08:13.722112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
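[Editor's note] The repeated recv-state errors above and the ABORTED - SQ DELETION completions that follow are the expected fallout of host_management.sh@84: with bdevperf still holding 64 reads in flight, the test revokes the host's access, the target tears down the queue pairs, and every command outstanding on the deleted submission queues is aborted back to the initiator. The triggering call, reproducible against a live target with SPDK's rpc.py:

    rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0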
00:30:04.386 [2024-12-11 10:08:13.722120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:115456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.386 [2024-12-11 10:08:13.722126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.386 [2024-12-11 10:08:13.722134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:115584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.386 [2024-12-11 10:08:13.722141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.386 [2024-12-11 10:08:13.722149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:115712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.386 [2024-12-11 10:08:13.722156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.386 [2024-12-11 10:08:13.722164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:115840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.386 [2024-12-11 10:08:13.722171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.386 [2024-12-11 10:08:13.722179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:115968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.386 [2024-12-11 10:08:13.722185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.386 [2024-12-11 10:08:13.722193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:116096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.386 [2024-12-11 10:08:13.722204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.386 [2024-12-11 10:08:13.722212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:116224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.386 [2024-12-11 10:08:13.722227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.386 [2024-12-11 10:08:13.722235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:116352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.386 [2024-12-11 10:08:13.722241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.386 [2024-12-11 10:08:13.722249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:116480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.386 [2024-12-11 10:08:13.722255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.386 [2024-12-11 10:08:13.722263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:116608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.386 [2024-12-11 10:08:13.722270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.386 
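[Editor's note] In these completions, (00/08) decodes per the NVMe base specification as Status Code Type 0x0 (Generic Command Status) and Status Code 0x08, Command Aborted due to SQ Deletion; dnr:0 means the initiator is permitted to retry. To confirm every in-flight command was accounted for, one could count the aborted completions in a saved copy of this console output, e.g.:

    grep -c 'ABORTED - SQ DELETION' console.log   # console.log is a hypothetical saved copy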
[2024-12-11 10:08:13.722279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:116736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.387 [2024-12-11 10:08:13.722286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.387 [2024-12-11 10:08:13.722294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:116864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.387 [2024-12-11 10:08:13.722301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.387 [2024-12-11 10:08:13.722308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:116992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.387 [2024-12-11 10:08:13.722315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.387 [2024-12-11 10:08:13.722323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:117120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.387 [2024-12-11 10:08:13.722329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.387 [2024-12-11 10:08:13.722337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:117248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.387 [2024-12-11 10:08:13.722344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.387 [2024-12-11 10:08:13.722351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:117376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.387 [2024-12-11 10:08:13.722358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.387 [2024-12-11 10:08:13.722366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:117504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.387 [2024-12-11 10:08:13.722372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.387 [2024-12-11 10:08:13.722380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:117632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.387 [2024-12-11 10:08:13.722387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.387 [2024-12-11 10:08:13.722396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:117760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.387 [2024-12-11 10:08:13.722404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.387 [2024-12-11 10:08:13.722412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:117888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.387 [2024-12-11 10:08:13.722418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.387 [2024-12-11 
10:08:13.722426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:118016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.387 [2024-12-11 10:08:13.722432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.387 [2024-12-11 10:08:13.722440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:118144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.387 [2024-12-11 10:08:13.722446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.387 [2024-12-11 10:08:13.722454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:118272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.387 [2024-12-11 10:08:13.722461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.387 [2024-12-11 10:08:13.722469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:118400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.387 [2024-12-11 10:08:13.722475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.387 [2024-12-11 10:08:13.722483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:118528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.387 [2024-12-11 10:08:13.722489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.387 [2024-12-11 10:08:13.722497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:118656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.387 [2024-12-11 10:08:13.722503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.387 [2024-12-11 10:08:13.722511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:118784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.387 [2024-12-11 10:08:13.722518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.387 [2024-12-11 10:08:13.722526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:118912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.387 [2024-12-11 10:08:13.722533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.387 [2024-12-11 10:08:13.722540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:119040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.387 [2024-12-11 10:08:13.722546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.387 [2024-12-11 10:08:13.722554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:119168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.387 [2024-12-11 10:08:13.722560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.387 [2024-12-11 
10:08:13.722569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:119296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.387 [2024-12-11 10:08:13.722576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.387 [2024-12-11 10:08:13.722584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:119424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.387 [2024-12-11 10:08:13.722591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.387 [2024-12-11 10:08:13.722599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:119552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.387 [2024-12-11 10:08:13.722605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.387 [2024-12-11 10:08:13.722615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:119680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.387 [2024-12-11 10:08:13.722622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.387 [2024-12-11 10:08:13.722630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:119808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.387 [2024-12-11 10:08:13.722636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.387 [2024-12-11 10:08:13.722644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:119936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.387 [2024-12-11 10:08:13.722651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.387 [2024-12-11 10:08:13.722659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:120064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.387 [2024-12-11 10:08:13.722665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.387 [2024-12-11 10:08:13.722673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:120192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.387 [2024-12-11 10:08:13.722680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.387 [2024-12-11 10:08:13.722688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:120320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.387 [2024-12-11 10:08:13.722694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.387 [2024-12-11 10:08:13.722702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:120448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.387 [2024-12-11 10:08:13.722708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.387 [2024-12-11 
10:08:13.722716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:120576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.387 [2024-12-11 10:08:13.722723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.387 [2024-12-11 10:08:13.722731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:120704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.387 [2024-12-11 10:08:13.722738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.387 [2024-12-11 10:08:13.722746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:120832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.387 [2024-12-11 10:08:13.722753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.387 [2024-12-11 10:08:13.722762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:120960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.387 [2024-12-11 10:08:13.722768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.387 [2024-12-11 10:08:13.722776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:121088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.387 [2024-12-11 10:08:13.722784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.387 [2024-12-11 10:08:13.722792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:121216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.387 [2024-12-11 10:08:13.722800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.387 [2024-12-11 10:08:13.722808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:121344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.387 [2024-12-11 10:08:13.722814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.387 [2024-12-11 10:08:13.722822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:121472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.387 [2024-12-11 10:08:13.722828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.387 [2024-12-11 10:08:13.722837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:121600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.387 [2024-12-11 10:08:13.722843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.387 [2024-12-11 10:08:13.722851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:121728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.387 [2024-12-11 10:08:13.722858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.387 [2024-12-11 
10:08:13.722865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:121856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:04.387 [2024-12-11 10:08:13.722871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:04.388 [2024-12-11 10:08:13.722880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:121984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:04.388 [2024-12-11 10:08:13.722887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:04.388 [2024-12-11 10:08:13.722895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:122112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:04.388 [2024-12-11 10:08:13.722901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:04.388 [2024-12-11 10:08:13.722909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:122240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:04.388 [2024-12-11 10:08:13.722915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:04.388 [2024-12-11 10:08:13.722924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:122368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:04.388 [2024-12-11 10:08:13.722931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:04.388 [2024-12-11 10:08:13.722939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:122496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:04.388 [2024-12-11 10:08:13.722948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:04.388 [2024-12-11 10:08:13.722956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:122624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:04.388 [2024-12-11 10:08:13.722963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:04.388 [2024-12-11 10:08:13.722970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:122752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:04.388 [2024-12-11 10:08:13.722977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:04.388 [2024-12-11 10:08:13.722984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fb50 is same with the state(6) to be set
00:30:04.388 [2024-12-11 10:08:13.723946] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:30:04.388 task offset: 114688 on job bdev=Nvme0n1 fails
00:30:04.388
00:30:04.388 Latency(us)
00:30:04.388 [2024-12-11T09:08:13.963Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:04.388 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:04.388 Job: Nvme0n1 ended in about 0.48 seconds with error
00:30:04.388 Verification LBA range: start 0x0 length 0x400
Nvme0n1 : 0.48 1860.46 116.28 132.89 0.00 31362.38 3760.52 26963.38
00:30:04.388 [2024-12-11T09:08:13.963Z] ===================================================================================================================
00:30:04.388 [2024-12-11T09:08:13.963Z] Total : 1860.46 116.28 132.89 0.00 31362.38 3760.52 26963.38
00:30:04.388 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:04.388 [2024-12-11 10:08:13.726385] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:30:04.388 [2024-12-11 10:08:13.726408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1066b20 (9): Bad file descriptor
00:30:04.388 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:30:04.388 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:04.388 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:30:04.388 [2024-12-11 10:08:13.727422] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'
00:30:04.388 [2024-12-11 10:08:13.727492] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:30:04.388 [2024-12-11 10:08:13.727514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:04.388 [2024-12-11 10:08:13.727529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0
00:30:04.388 [2024-12-11 10:08:13.727537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132
00:30:04.388 [2024-12-11 10:08:13.727544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:04.388 [2024-12-11 10:08:13.727551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1066b20
00:30:04.388 [2024-12-11 10:08:13.727570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1066b20 (9): Bad file descriptor
00:30:04.388 [2024-12-11 10:08:13.727581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:30:04.388 [2024-12-11 10:08:13.727591] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:30:04.388 [2024-12-11 10:08:13.727599] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:30:04.388 [2024-12-11 10:08:13.727608] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:30:04.388 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:04.388 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:30:05.324 10:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 276059
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (276059) - No such process
00:30:05.325 10:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true
00:30:05.325 10:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:30:05.325 10:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:30:05.325 10:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:30:05.325 10:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:30:05.325 10:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:30:05.325 10:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:30:05.325 10:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:30:05.325 {
00:30:05.325 "params": {
00:30:05.325 "name": "Nvme$subsystem",
00:30:05.325 "trtype": "$TEST_TRANSPORT",
00:30:05.325 "traddr": "$NVMF_FIRST_TARGET_IP",
00:30:05.325 "adrfam": "ipv4",
00:30:05.325 "trsvcid": "$NVMF_PORT",
00:30:05.325 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:30:05.325 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:30:05.325 "hdgst": ${hdgst:-false},
00:30:05.325 "ddgst": ${ddgst:-false}
00:30:05.325 },
00:30:05.325 "method": "bdev_nvme_attach_controller"
00:30:05.325 }
00:30:05.325 EOF
00:30:05.325 )")
00:30:05.325 10:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:30:05.325 10:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:30:05.325 10:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:30:05.325 10:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:30:05.325 "params": {
00:30:05.325 "name": "Nvme0",
00:30:05.325 "trtype": "tcp",
00:30:05.325 "traddr": "10.0.0.2",
00:30:05.325 "adrfam": "ipv4",
00:30:05.325 "trsvcid": "4420",
00:30:05.325 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:30:05.325 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:30:05.325 "hdgst": false,
00:30:05.325 "ddgst": false
00:30:05.325 },
00:30:05.325 "method": "bdev_nvme_attach_controller"
00:30:05.325 }'
00:30:05.325 [2024-12-11 10:08:14.791039] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization...
00:30:05.325 [2024-12-11 10:08:14.791090] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid276436 ]
00:30:05.325 [2024-12-11 10:08:14.874356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:05.584 [2024-12-11 10:08:14.912938] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:30:05.584 Running I/O for 1 seconds...
1995.00 IOPS, 124.69 MiB/s
00:30:06.960
00:30:06.960 Latency(us)
00:30:06.960 [2024-12-11T09:08:16.535Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:06.961 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:06.961 Verification LBA range: start 0x0 length 0x400
00:30:06.961 Nvme0n1 : 1.01 2048.12 128.01 0.00 0.00 30674.65 1365.33 27213.04
00:30:06.961 [2024-12-11T09:08:16.536Z] ===================================================================================================================
00:30:06.961 [2024-12-11T09:08:16.536Z] Total : 2048.12 128.01 0.00 0.00 30674.65 1365.33 27213.04
00:30:06.961 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:30:06.961 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:30:06.961 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:30:06.961 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:30:06.961 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:30:06.961 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:30:06.961 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:30:06.961 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:30:06.961 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:30:06.961 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:30:06.961 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:30:06.961 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:30:06.961 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:30:06.961 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:30:06.961 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 275825 ']'
00:30:06.961 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 275825
00:30:06.961 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 275825 ']'
00:30:06.961 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 275825
00:30:06.961 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname
00:30:06.961 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:06.961 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 275825
00:30:06.961 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:30:06.961 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:30:06.961 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 275825'
killing process with pid 275825
00:30:06.961 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 275825
00:30:06.961 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 275825
00:30:07.220 [2024-12-11 10:08:16.595804] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
00:30:07.220 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:30:07.220 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:30:07.220 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:30:07.220 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr
00:30:07.220 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save
00:30:07.220 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:30:07.220 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore
00:30:07.220 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:30:07.220 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns
00:30:07.220 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:07.220 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:30:07.220 10:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:09.125 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:30:09.125 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:30:09.125
00:30:09.125 real 0m13.900s
00:30:09.125 user 0m19.226s
00:30:09.125 sys 0m6.958s
00:30:09.125 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:09.125 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:30:09.125 ************************************
00:30:09.125 END TEST nvmf_host_management
00:30:09.125 ************************************
00:30:09.384 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode
00:30:09.384 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:30:09.384 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:30:09.384 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:30:09.384 ************************************
00:30:09.384 START TEST nvmf_lvol
00:30:09.384 ************************************
00:30:09.384 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode
00:30:09.384 * Looking for test storage...
00:30:09.384 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:30:09.384 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:30:09.384 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version
00:30:09.384 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:30:09.384 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:30:09.384 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:30:09.384 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l
00:30:09.384 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l
00:30:09.384 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-:
00:30:09.384 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1
00:30:09.384 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-:
00:30:09.384 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2
00:30:09.384 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<'
00:30:09.384 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2
00:30:09.384 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1
00:30:09.384 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:30:09.384 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in
00:30:09.384 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1
00:30:09.384 10:08:18
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:09.384 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:09.384 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:30:09.384 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:30:09.384 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:09.384 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:30:09.384 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:30:09.384 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:30:09.384 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:30:09.384 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:09.384 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:30:09.384 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:30:09.384 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:09.384 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:09.384 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:30:09.384 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:09.384 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:09.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.384 --rc genhtml_branch_coverage=1 00:30:09.384 --rc genhtml_function_coverage=1 00:30:09.384 --rc genhtml_legend=1 00:30:09.384 --rc geninfo_all_blocks=1 00:30:09.384 --rc geninfo_unexecuted_blocks=1 00:30:09.384 00:30:09.384 ' 00:30:09.384 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:09.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.384 --rc genhtml_branch_coverage=1 00:30:09.384 --rc genhtml_function_coverage=1 00:30:09.384 --rc genhtml_legend=1 00:30:09.384 --rc geninfo_all_blocks=1 00:30:09.384 --rc geninfo_unexecuted_blocks=1 00:30:09.384 00:30:09.384 ' 00:30:09.384 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:09.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.384 --rc genhtml_branch_coverage=1 00:30:09.384 --rc genhtml_function_coverage=1 00:30:09.384 --rc genhtml_legend=1 00:30:09.384 --rc geninfo_all_blocks=1 00:30:09.384 --rc geninfo_unexecuted_blocks=1 00:30:09.384 00:30:09.384 ' 00:30:09.384 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:09.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.384 --rc genhtml_branch_coverage=1 00:30:09.384 --rc genhtml_function_coverage=1 00:30:09.384 --rc 
genhtml_legend=1 00:30:09.384 --rc geninfo_all_blocks=1 00:30:09.384 --rc geninfo_unexecuted_blocks=1 00:30:09.384 00:30:09.384 ' 00:30:09.384 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:09.384 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:30:09.384 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:09.384 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:09.384 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:09.384 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:09.384 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:09.384 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:09.384 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:09.384 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:09.384 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:09.384 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:09.644 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:30:09.644 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:30:09.644 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:09.644 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:09.644 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:09.644 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:09.644 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:09.644 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:30:09.644 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:09.644 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:09.644 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:09.644 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.644 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.644 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.644 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:30:09.644 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.644 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:30:09.644 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:09.644 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:09.644 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:09.644 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:09.644 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:09.644 10:08:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:09.644 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:09.644 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:09.644 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:09.644 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:09.644 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:09.644 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:09.644 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:30:09.644 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:30:09.644 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:09.644 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:30:09.644 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:09.644 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:09.644 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:09.644 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:09.644 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:09.644 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:09.644 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:09.644 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:09.644 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:09.644 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:09.644 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:30:09.644 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:16.215 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:16.215 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:30:16.215 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:16.215 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:16.215 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:16.215 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:30:16.215 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:16.215 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:30:16.215 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:16.215 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:30:16.215 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:30:16.215 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:30:16.215 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:30:16.215 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:30:16.215 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:30:16.215 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:16.215 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:16.215 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:16.216 10:08:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:16.216 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:16.216 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:16.216 Found net devices under 0000:af:00.0: cvl_0_0 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:16.216 Found net devices under 0000:af:00.1: cvl_0_1 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:16.216 
10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:16.216 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:16.216 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.291 ms 00:30:16.216 00:30:16.216 --- 10.0.0.2 ping statistics --- 00:30:16.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:16.216 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:16.216 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:16.216 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:30:16.216 00:30:16.216 --- 10.0.0.1 ping statistics --- 00:30:16.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:16.216 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=280536 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 280536 00:30:16.216 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:30:16.217 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 280536 ']' 00:30:16.217 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:16.217 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:16.217 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:16.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:16.217 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:16.217 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:16.217 [2024-12-11 10:08:25.731917] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
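The nvmf_tcp_init sequence traced above splits the two-port NIC between target and initiator on a single host: port cvl_0_0 moves into a private network namespace for the target, port cvl_0_1 stays in the root namespace for the initiator, and an iptables rule opens the NVMe/TCP port before reachability is checked in both directions. A condensed sketch of the equivalent manual setup, reusing the interface names and addresses from this run (they will differ on other machines):

  # move the target port into its own namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # initiator side (root namespace)
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip link set cvl_0_1 up
  # target side (inside the namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port and verify reachability both ways
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1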
00:30:16.217 [2024-12-11 10:08:25.732828] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:30:16.217 [2024-12-11 10:08:25.732862] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:16.476 [2024-12-11 10:08:25.816476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:16.476 [2024-12-11 10:08:25.856758] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:16.476 [2024-12-11 10:08:25.856793] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:16.476 [2024-12-11 10:08:25.856800] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:16.476 [2024-12-11 10:08:25.856809] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:16.476 [2024-12-11 10:08:25.856819] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:16.476 [2024-12-11 10:08:25.858013] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:16.476 [2024-12-11 10:08:25.858119] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:16.476 [2024-12-11 10:08:25.858121] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:30:16.476 [2024-12-11 10:08:25.925780] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:16.476 [2024-12-11 10:08:25.926526] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:16.476 [2024-12-11 10:08:25.926932] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:16.476 [2024-12-11 10:08:25.927030] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
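nvmfappstart then launches the target inside that namespace with a three-core mask (0x7) and --interrupt-mode, which is why the startup log shows three reactors and each poll-group thread being set to intr mode. A minimal sketch of the same launch, with paths shortened; the readiness loop is a stand-in for the harness's waitforlisten helper, not the harness code itself:

  # cores 0-2 (0x7), full tracepoint mask, interrupt-driven reactors
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &
  nvmfpid=$!
  # wait until the app answers RPCs on its UNIX domain socket
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2
  done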
00:30:16.476 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:16.476 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:30:16.476 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:16.476 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:16.476 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:16.476 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:16.476 10:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:16.734 [2024-12-11 10:08:26.166813] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:16.734 10:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:16.993 10:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:30:16.993 10:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:17.252 10:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:30:17.252 10:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:30:17.510 10:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:30:17.510 10:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=a6da7a97-dd48-4dde-a47e-2b707b996b89 00:30:17.510 10:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a6da7a97-dd48-4dde-a47e-2b707b996b89 lvol 20 00:30:17.769 10:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=a9c13baa-6073-4c64-9d46-9b042c31e7e9 00:30:17.769 10:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:18.034 10:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a9c13baa-6073-4c64-9d46-9b042c31e7e9 00:30:18.034 10:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:18.342 [2024-12-11 10:08:27.786709] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:30:18.342 10:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:18.601 10:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=281021 00:30:18.601 10:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:30:18.601 10:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:30:19.536 10:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot a9c13baa-6073-4c64-9d46-9b042c31e7e9 MY_SNAPSHOT 00:30:19.795 10:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=ac1350ad-39d3-475c-b009-ba715dc873d6 00:30:19.795 10:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize a9c13baa-6073-4c64-9d46-9b042c31e7e9 30 00:30:20.054 10:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone ac1350ad-39d3-475c-b009-ba715dc873d6 MY_CLONE 00:30:20.314 10:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=37ed1c18-44d0-4eb2-b6eb-ff7e44c1c328 00:30:20.314 10:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 37ed1c18-44d0-4eb2-b6eb-ff7e44c1c328 00:30:20.881 10:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 281021 00:30:28.997 Initializing NVMe Controllers 00:30:28.997 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:30:28.997 Controller IO queue size 128, less than required. 00:30:28.997 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:28.997 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:30:28.997 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:30:28.997 Initialization complete. Launching workers. 
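The RPC sequence above builds the bdev stack bottom-up and then mutates the logical volume while spdk_nvme_perf (pid 281021, core mask 0x18, hence lcores 3 and 4 in the results below) keeps writing to it. Condensed, with the UUIDs this run printed captured into shell variables and the rpc.py path shortened:

  # two 64 MiB malloc bdevs striped into raid0, with a logical-volume store on top
  ./scripts/rpc.py bdev_malloc_create 64 512        # -> Malloc0
  ./scripts/rpc.py bdev_malloc_create 64 512        # -> Malloc1
  ./scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$(./scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs)
  lvol=$(./scripts/rpc.py bdev_lvol_create -u "$lvs" lvol 20)
  # export the lvol over NVMe/TCP
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # under write load: snapshot, grow the lvol to 30 MiB, clone the snapshot, inflate the clone
  snap=$(./scripts/rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
  ./scripts/rpc.py bdev_lvol_resize "$lvol" 30
  clone=$(./scripts/rpc.py bdev_lvol_clone "$snap" MY_CLONE)
  ./scripts/rpc.py bdev_lvol_inflate "$clone"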
00:30:28.997 ========================================================
00:30:28.997 Latency(us)
00:30:28.997 Device Information : IOPS MiB/s Average min max
00:30:28.997 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12606.20 49.24 10154.18 424.88 82526.09
00:30:28.997 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12456.30 48.66 10276.22 5681.36 44451.11
00:30:28.997 ========================================================
00:30:28.997 Total : 25062.50 97.90 10214.84 424.88 82526.09
00:30:28.997
00:30:28.997 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:30:28.997 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a9c13baa-6073-4c64-9d46-9b042c31e7e9
00:30:29.256 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a6da7a97-dd48-4dde-a47e-2b707b996b89
00:30:29.515 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:30:29.515 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:30:29.515 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:30:29.515 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup
00:30:29.515 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:30:29.515 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:30:29.515 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:30:29.515 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:30:29.515 10:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:30:29.515 rmmod nvme_tcp
00:30:29.515 rmmod nvme_fabrics
00:30:29.515 rmmod nvme_keyring
00:30:29.515 10:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:30:29.515 10:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:30:29.515 10:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:30:29.515 10:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 280536 ']'
00:30:29.515 10:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 280536
00:30:29.515 10:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 280536 ']'
00:30:29.515 10:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 280536
00:30:29.515 10:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname
00:30:29.515 10:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:29.515 10:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol --
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 280536 00:30:29.515 10:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:29.515 10:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:29.515 10:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 280536' 00:30:29.515 killing process with pid 280536 00:30:29.515 10:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 280536 00:30:29.515 10:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 280536 00:30:29.774 10:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:29.774 10:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:29.774 10:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:29.774 10:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:30:29.774 10:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:30:29.774 10:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:29.774 10:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:30:29.774 10:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:29.774 10:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:29.774 10:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:29.774 10:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:29.774 10:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:32.316 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:32.316 00:30:32.316 real 0m22.599s 00:30:32.316 user 0m55.590s 00:30:32.316 sys 0m10.479s 00:30:32.316 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:32.316 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:32.316 ************************************ 00:30:32.316 END TEST nvmf_lvol 00:30:32.316 ************************************ 00:30:32.316 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:30:32.316 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:32.316 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:32.316 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:32.316 ************************************ 00:30:32.316 START TEST nvmf_lvs_grow 00:30:32.316 
************************************ 00:30:32.316 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:30:32.316 * Looking for test storage... 00:30:32.316 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:32.316 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:32.316 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:30:32.316 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:32.316 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:32.316 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:32.316 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:32.316 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:32.316 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:30:32.316 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:30:32.316 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:30:32.316 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:30:32.316 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:30:32.316 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:30:32.316 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:30:32.316 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:32.316 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:30:32.316 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:30:32.316 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:32.316 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:32.316 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:30:32.316 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:30:32.316 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:32.316 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:30:32.316 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:30:32.316 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:30:32.316 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:30:32.316 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:32.316 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:30:32.316 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:30:32.316 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:32.316 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:32.316 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:30:32.316 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:32.316 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:32.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:32.316 --rc genhtml_branch_coverage=1 00:30:32.316 --rc genhtml_function_coverage=1 00:30:32.316 --rc genhtml_legend=1 00:30:32.316 --rc geninfo_all_blocks=1 00:30:32.316 --rc geninfo_unexecuted_blocks=1 00:30:32.316 00:30:32.316 ' 00:30:32.316 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:32.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:32.316 --rc genhtml_branch_coverage=1 00:30:32.316 --rc genhtml_function_coverage=1 00:30:32.316 --rc genhtml_legend=1 00:30:32.316 --rc geninfo_all_blocks=1 00:30:32.316 --rc geninfo_unexecuted_blocks=1 00:30:32.316 00:30:32.316 ' 00:30:32.316 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:32.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:32.316 --rc genhtml_branch_coverage=1 00:30:32.316 --rc genhtml_function_coverage=1 00:30:32.316 --rc genhtml_legend=1 00:30:32.316 --rc geninfo_all_blocks=1 00:30:32.316 --rc geninfo_unexecuted_blocks=1 00:30:32.316 00:30:32.316 ' 00:30:32.316 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:32.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:32.316 --rc genhtml_branch_coverage=1 00:30:32.316 --rc genhtml_function_coverage=1 00:30:32.316 --rc genhtml_legend=1 00:30:32.316 --rc geninfo_all_blocks=1 00:30:32.316 --rc geninfo_unexecuted_blocks=1 00:30:32.316 00:30:32.316 ' 00:30:32.316 10:08:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:32.316 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:30:32.316 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:32.316 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:32.316 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:32.316 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:32.316 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:32.316 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:32.316 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:32.316 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:32.316 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:32.316 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:32.316 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:30:32.317 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:30:32.317 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:32.317 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:32.317 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:32.317 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:32.317 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:32.317 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:30:32.317 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:32.317 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:32.317 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:32.317 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.317 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.317 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.317 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:30:32.317 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.317 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:30:32.317 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:32.317 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:32.317 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:32.317 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:32.317 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
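The block above is nvmf/common.sh being sourced for the next test: it fixes the listener ports, generates a host NQN with nvme gen-hostnqn, pulls in paths/export.sh (the repeated PATH entries are that script re-prepending its directories on every sourcing), and assembles the target's argument array, appending --interrupt-mode because this suite requests it. A sketch of how those pieces combine; the initial NVMF_APP value is not visible in the trace and is assumed here from the command that eventually runs:

  NVMF_APP=(./build/bin/nvmf_tgt)                # assumed base command
  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)    # shared-memory id and full tracepoint mask
  NVMF_APP+=("${NO_HUGE[@]}")
  NVMF_APP+=(--interrupt-mode)                   # only when the test enables the flag
  # once nvmf_tcp_init has created the namespace, the command is wrapped:
  NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
  NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")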
00:30:32.317 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:32.317 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:32.317 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:32.317 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:32.317 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:32.317 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:32.317 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:32.317 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:30:32.317 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:32.317 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:32.317 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:32.317 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:32.317 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:32.317 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:32.317 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:32.317 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:32.317 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:32.317 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:32.317 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:30:32.317 10:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:38.890 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:38.890 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:30:38.890 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:38.890 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:38.890 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:38.890 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:38.890 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:38.890 10:08:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:30:38.890 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:38.890 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:30:38.890 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:30:38.890 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:30:38.890 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:30:38.890 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:30:38.890 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:30:38.890 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:38.891 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:38.891 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:38.891 Found net devices under 0000:af:00.0: cvl_0_0 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:38.891 Found net devices under 0000:af:00.1: cvl_0_1 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:38.891 10:08:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:38.891 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:38.891 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:30:38.891 00:30:38.891 --- 10.0.0.2 ping statistics --- 00:30:38.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:38.891 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:38.891 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:38.891 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:30:38.891 00:30:38.891 --- 10.0.0.1 ping statistics --- 00:30:38.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:38.891 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:30:38.891 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:38.892 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:38.892 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:38.892 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=286614 00:30:38.892 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:30:38.892 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 286614 00:30:38.892 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 286614 ']' 00:30:38.892 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:38.892 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:38.892 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:38.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:38.892 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:38.892 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:38.892 [2024-12-11 10:08:48.429024] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
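The discovery pass above filters the host's PCI devices against known NIC device IDs (both ports here match the e810 id 0x159b) and resolves each matching PCI address to its kernel net device through sysfs. The core of that mapping, using a PCI address from this run:

  # resolve a port's net device name from its PCI address, as nvmf/common.sh does
  pci=0000:af:00.0
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
  pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path, leaving e.g. cvl_0_0
  echo "Found net devices under $pci: ${pci_net_devs[*]}"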
00:30:38.892 [2024-12-11 10:08:48.429942] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:30:38.892 [2024-12-11 10:08:48.429976] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:39.152 [2024-12-11 10:08:48.514354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:39.152 [2024-12-11 10:08:48.554037] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:39.152 [2024-12-11 10:08:48.554074] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:39.152 [2024-12-11 10:08:48.554081] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:39.152 [2024-12-11 10:08:48.554087] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:39.152 [2024-12-11 10:08:48.554092] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:39.152 [2024-12-11 10:08:48.554594] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:39.152 [2024-12-11 10:08:48.622169] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:39.152 [2024-12-11 10:08:48.622384] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:39.721 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:39.721 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:30:39.721 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:39.721 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:39.721 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:39.981 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:39.981 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:39.981 [2024-12-11 10:08:49.471256] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:39.981 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:30:39.981 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:39.981 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:39.981 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:39.981 ************************************ 00:30:39.981 START TEST lvs_grow_clean 00:30:39.981 ************************************ 00:30:39.981 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:30:39.981 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:30:39.981 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:30:39.981 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:30:39.981 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:30:39.981 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:30:39.981 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:30:39.981 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:39.981 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:39.981 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:40.240 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:30:40.240 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:30:40.500 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=8872ed3d-895b-459e-b003-17c864dd3aa8 00:30:40.500 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8872ed3d-895b-459e-b003-17c864dd3aa8 00:30:40.500 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:30:40.759 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:30:40.759 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:30:40.759 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8872ed3d-895b-459e-b003-17c864dd3aa8 lvol 150 00:30:41.018 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=333f46ba-6125-4415-ac7f-c0b3a8041674 00:30:41.018 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:41.018 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:30:41.018 [2024-12-11 10:08:50.510982] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:30:41.018 [2024-12-11 10:08:50.511111] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:30:41.018 true 00:30:41.018 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8872ed3d-895b-459e-b003-17c864dd3aa8 00:30:41.018 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:30:41.277 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:30:41.277 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:41.536 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 333f46ba-6125-4415-ac7f-c0b3a8041674 00:30:41.795 10:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:41.795 [2024-12-11 10:08:51.291456] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:41.795 10:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:42.055 10:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=287111 00:30:42.055 10:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:30:42.055 10:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:42.055 10:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 287111 /var/tmp/bdevperf.sock 00:30:42.055 10:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 287111 ']' 00:30:42.055 10:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:30:42.055 10:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:42.055 10:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:42.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:42.055 10:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:42.055 10:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:30:42.055 [2024-12-11 10:08:51.546193] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:30:42.055 [2024-12-11 10:08:51.546250] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid287111 ] 00:30:42.055 [2024-12-11 10:08:51.624099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:42.314 [2024-12-11 10:08:51.664923] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:42.314 10:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:42.314 10:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:30:42.314 10:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:30:42.573 Nvme0n1 00:30:42.573 10:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:30:42.832 [ 00:30:42.832 { 00:30:42.832 "name": "Nvme0n1", 00:30:42.832 "aliases": [ 00:30:42.832 "333f46ba-6125-4415-ac7f-c0b3a8041674" 00:30:42.832 ], 00:30:42.832 "product_name": "NVMe disk", 00:30:42.832 "block_size": 4096, 00:30:42.832 "num_blocks": 38912, 00:30:42.832 "uuid": "333f46ba-6125-4415-ac7f-c0b3a8041674", 00:30:42.832 "numa_id": 1, 00:30:42.832 "assigned_rate_limits": { 00:30:42.832 "rw_ios_per_sec": 0, 00:30:42.832 "rw_mbytes_per_sec": 0, 00:30:42.832 "r_mbytes_per_sec": 0, 00:30:42.832 "w_mbytes_per_sec": 0 00:30:42.832 }, 00:30:42.832 "claimed": false, 00:30:42.832 "zoned": false, 00:30:42.832 "supported_io_types": { 00:30:42.832 "read": true, 00:30:42.832 "write": true, 00:30:42.832 "unmap": true, 00:30:42.832 "flush": true, 00:30:42.832 "reset": true, 00:30:42.832 "nvme_admin": true, 00:30:42.832 "nvme_io": true, 00:30:42.832 "nvme_io_md": false, 00:30:42.832 "write_zeroes": true, 00:30:42.832 "zcopy": false, 00:30:42.832 "get_zone_info": false, 00:30:42.832 "zone_management": false, 00:30:42.832 "zone_append": false, 00:30:42.832 "compare": true, 00:30:42.832 "compare_and_write": true, 00:30:42.832 "abort": true, 00:30:42.832 "seek_hole": false, 00:30:42.832 "seek_data": false, 00:30:42.832 "copy": true, 
00:30:42.832 "nvme_iov_md": false 00:30:42.832 }, 00:30:42.832 "memory_domains": [ 00:30:42.832 { 00:30:42.832 "dma_device_id": "system", 00:30:42.832 "dma_device_type": 1 00:30:42.832 } 00:30:42.832 ], 00:30:42.832 "driver_specific": { 00:30:42.832 "nvme": [ 00:30:42.832 { 00:30:42.832 "trid": { 00:30:42.832 "trtype": "TCP", 00:30:42.832 "adrfam": "IPv4", 00:30:42.832 "traddr": "10.0.0.2", 00:30:42.832 "trsvcid": "4420", 00:30:42.832 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:42.832 }, 00:30:42.832 "ctrlr_data": { 00:30:42.832 "cntlid": 1, 00:30:42.832 "vendor_id": "0x8086", 00:30:42.832 "model_number": "SPDK bdev Controller", 00:30:42.832 "serial_number": "SPDK0", 00:30:42.832 "firmware_revision": "25.01", 00:30:42.832 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:42.832 "oacs": { 00:30:42.832 "security": 0, 00:30:42.832 "format": 0, 00:30:42.832 "firmware": 0, 00:30:42.832 "ns_manage": 0 00:30:42.832 }, 00:30:42.832 "multi_ctrlr": true, 00:30:42.832 "ana_reporting": false 00:30:42.832 }, 00:30:42.832 "vs": { 00:30:42.832 "nvme_version": "1.3" 00:30:42.832 }, 00:30:42.832 "ns_data": { 00:30:42.832 "id": 1, 00:30:42.832 "can_share": true 00:30:42.832 } 00:30:42.832 } 00:30:42.832 ], 00:30:42.832 "mp_policy": "active_passive" 00:30:42.832 } 00:30:42.832 } 00:30:42.832 ] 00:30:42.832 10:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=287325 00:30:42.833 10:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:42.833 10:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:30:43.092 Running I/O for 10 seconds... 
00:30:44.030 Latency(us) 00:30:44.030 [2024-12-11T09:08:53.605Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:44.030 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:44.030 Nvme0n1 : 1.00 22860.00 89.30 0.00 0.00 0.00 0.00 0.00 00:30:44.030 [2024-12-11T09:08:53.605Z] =================================================================================================================== 00:30:44.030 [2024-12-11T09:08:53.605Z] Total : 22860.00 89.30 0.00 0.00 0.00 0.00 0.00 00:30:44.030 00:30:44.969 10:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8872ed3d-895b-459e-b003-17c864dd3aa8 00:30:44.969 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:44.969 Nvme0n1 : 2.00 23177.50 90.54 0.00 0.00 0.00 0.00 0.00 00:30:44.969 [2024-12-11T09:08:54.544Z] =================================================================================================================== 00:30:44.969 [2024-12-11T09:08:54.544Z] Total : 23177.50 90.54 0.00 0.00 0.00 0.00 0.00 00:30:44.969 00:30:44.969 true 00:30:44.969 10:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8872ed3d-895b-459e-b003-17c864dd3aa8 00:30:44.969 10:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:30:45.228 10:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:30:45.228 10:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:30:45.228 10:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 287325 00:30:46.169 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:46.169 Nvme0n1 : 3.00 23177.67 90.54 0.00 0.00 0.00 0.00 0.00 00:30:46.169 [2024-12-11T09:08:55.744Z] =================================================================================================================== 00:30:46.169 [2024-12-11T09:08:55.744Z] Total : 23177.67 90.54 0.00 0.00 0.00 0.00 0.00 00:30:46.169 00:30:47.107 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:47.107 Nvme0n1 : 4.00 23285.00 90.96 0.00 0.00 0.00 0.00 0.00 00:30:47.107 [2024-12-11T09:08:56.682Z] =================================================================================================================== 00:30:47.107 [2024-12-11T09:08:56.682Z] Total : 23285.00 90.96 0.00 0.00 0.00 0.00 0.00 00:30:47.107 00:30:48.045 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:48.045 Nvme0n1 : 5.00 23377.80 91.32 0.00 0.00 0.00 0.00 0.00 00:30:48.045 [2024-12-11T09:08:57.620Z] =================================================================================================================== 00:30:48.045 [2024-12-11T09:08:57.620Z] Total : 23377.80 91.32 0.00 0.00 0.00 0.00 0.00 00:30:48.045 00:30:48.983 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:48.983 Nvme0n1 : 6.00 23429.17 91.52 0.00 0.00 0.00 0.00 0.00 00:30:48.983 [2024-12-11T09:08:58.558Z] 
=================================================================================================================== 00:30:48.983 [2024-12-11T09:08:58.558Z] Total : 23429.17 91.52 0.00 0.00 0.00 0.00 0.00 00:30:48.983 00:30:49.921 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:49.921 Nvme0n1 : 7.00 23436.43 91.55 0.00 0.00 0.00 0.00 0.00 00:30:49.921 [2024-12-11T09:08:59.496Z] =================================================================================================================== 00:30:49.921 [2024-12-11T09:08:59.496Z] Total : 23436.43 91.55 0.00 0.00 0.00 0.00 0.00 00:30:49.921 00:30:51.299 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:51.299 Nvme0n1 : 8.00 23427.88 91.52 0.00 0.00 0.00 0.00 0.00 00:30:51.299 [2024-12-11T09:09:00.874Z] =================================================================================================================== 00:30:51.299 [2024-12-11T09:09:00.874Z] Total : 23427.88 91.52 0.00 0.00 0.00 0.00 0.00 00:30:51.299 00:30:52.236 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:52.236 Nvme0n1 : 9.00 23407.11 91.43 0.00 0.00 0.00 0.00 0.00 00:30:52.236 [2024-12-11T09:09:01.811Z] =================================================================================================================== 00:30:52.236 [2024-12-11T09:09:01.811Z] Total : 23407.11 91.43 0.00 0.00 0.00 0.00 0.00 00:30:52.236 00:30:53.174 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:53.174 Nvme0n1 : 10.00 23435.00 91.54 0.00 0.00 0.00 0.00 0.00 00:30:53.174 [2024-12-11T09:09:02.749Z] =================================================================================================================== 00:30:53.174 [2024-12-11T09:09:02.749Z] Total : 23435.00 91.54 0.00 0.00 0.00 0.00 0.00 00:30:53.174 00:30:53.174 00:30:53.174 Latency(us) 00:30:53.174 [2024-12-11T09:09:02.749Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:53.174 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:53.174 Nvme0n1 : 10.00 23434.82 91.54 0.00 0.00 5458.70 3136.37 25839.91 00:30:53.174 [2024-12-11T09:09:02.749Z] =================================================================================================================== 00:30:53.174 [2024-12-11T09:09:02.749Z] Total : 23434.82 91.54 0.00 0.00 5458.70 3136.37 25839.91 00:30:53.174 { 00:30:53.174 "results": [ 00:30:53.174 { 00:30:53.174 "job": "Nvme0n1", 00:30:53.174 "core_mask": "0x2", 00:30:53.174 "workload": "randwrite", 00:30:53.174 "status": "finished", 00:30:53.174 "queue_depth": 128, 00:30:53.174 "io_size": 4096, 00:30:53.174 "runtime": 10.002809, 00:30:53.174 "iops": 23434.817159859795, 00:30:53.174 "mibps": 91.54225453070232, 00:30:53.174 "io_failed": 0, 00:30:53.174 "io_timeout": 0, 00:30:53.174 "avg_latency_us": 5458.700214606068, 00:30:53.174 "min_latency_us": 3136.365714285714, 00:30:53.174 "max_latency_us": 25839.908571428572 00:30:53.174 } 00:30:53.174 ], 00:30:53.174 "core_count": 1 00:30:53.174 } 00:30:53.174 10:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 287111 00:30:53.174 10:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 287111 ']' 00:30:53.174 10:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 287111 
00:30:53.174 10:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:30:53.174 10:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:53.174 10:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 287111 00:30:53.174 10:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:53.174 10:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:53.174 10:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 287111' 00:30:53.174 killing process with pid 287111 00:30:53.174 10:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 287111 00:30:53.174 Received shutdown signal, test time was about 10.000000 seconds 00:30:53.174 00:30:53.174 Latency(us) 00:30:53.174 [2024-12-11T09:09:02.749Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:53.174 [2024-12-11T09:09:02.749Z] =================================================================================================================== 00:30:53.174 [2024-12-11T09:09:02.749Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:53.174 10:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 287111 00:30:53.174 10:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:53.434 10:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:53.693 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8872ed3d-895b-459e-b003-17c864dd3aa8 00:30:53.693 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:30:53.953 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:30:53.953 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:30:53.953 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:53.953 [2024-12-11 10:09:03.451047] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:30:53.953 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8872ed3d-895b-459e-b003-17c864dd3aa8 
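The NOT wrapper invoked above is a negative assertion: bdev_aio_delete has just removed the base bdev (closing the lvstore with it), so bdev_lvol_get_lvstores for that UUID must now fail with "No such device", and the test passes only because the RPC fails. A minimal sketch of the helper's semantics as they appear in this trace (the in-tree version in autotest_common.sh additionally validates that the argument is executable and manages xtrace state, as the traced lines show):

    # Invert the wrapped command's exit status: succeed iff it fails.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }

    # Expected to pass, because the lvstore is gone:
    NOT rpc.py bdev_lvol_get_lvstores -u 8872ed3d-895b-459e-b003-17c864dd3aa8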
00:30:53.953 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:30:53.953 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8872ed3d-895b-459e-b003-17c864dd3aa8 00:30:53.953 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:53.953 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:53.953 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:53.953 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:53.953 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:53.953 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:53.953 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:53.953 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:30:53.953 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8872ed3d-895b-459e-b003-17c864dd3aa8 00:30:54.212 request: 00:30:54.212 { 00:30:54.212 "uuid": "8872ed3d-895b-459e-b003-17c864dd3aa8", 00:30:54.212 "method": "bdev_lvol_get_lvstores", 00:30:54.212 "req_id": 1 00:30:54.212 } 00:30:54.212 Got JSON-RPC error response 00:30:54.212 response: 00:30:54.212 { 00:30:54.212 "code": -19, 00:30:54.212 "message": "No such device" 00:30:54.212 } 00:30:54.212 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:30:54.212 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:54.212 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:54.212 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:54.212 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:54.471 aio_bdev 00:30:54.471 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
333f46ba-6125-4415-ac7f-c0b3a8041674 00:30:54.471 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=333f46ba-6125-4415-ac7f-c0b3a8041674 00:30:54.471 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:54.471 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:30:54.471 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:54.471 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:54.471 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:54.731 10:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 333f46ba-6125-4415-ac7f-c0b3a8041674 -t 2000 00:30:54.731 [ 00:30:54.731 { 00:30:54.731 "name": "333f46ba-6125-4415-ac7f-c0b3a8041674", 00:30:54.731 "aliases": [ 00:30:54.731 "lvs/lvol" 00:30:54.731 ], 00:30:54.731 "product_name": "Logical Volume", 00:30:54.731 "block_size": 4096, 00:30:54.731 "num_blocks": 38912, 00:30:54.731 "uuid": "333f46ba-6125-4415-ac7f-c0b3a8041674", 00:30:54.731 "assigned_rate_limits": { 00:30:54.731 "rw_ios_per_sec": 0, 00:30:54.731 "rw_mbytes_per_sec": 0, 00:30:54.731 "r_mbytes_per_sec": 0, 00:30:54.731 "w_mbytes_per_sec": 0 00:30:54.731 }, 00:30:54.731 "claimed": false, 00:30:54.731 "zoned": false, 00:30:54.731 "supported_io_types": { 00:30:54.731 "read": true, 00:30:54.731 "write": true, 00:30:54.731 "unmap": true, 00:30:54.731 "flush": false, 00:30:54.731 "reset": true, 00:30:54.731 "nvme_admin": false, 00:30:54.731 "nvme_io": false, 00:30:54.731 "nvme_io_md": false, 00:30:54.731 "write_zeroes": true, 00:30:54.731 "zcopy": false, 00:30:54.731 "get_zone_info": false, 00:30:54.731 "zone_management": false, 00:30:54.731 "zone_append": false, 00:30:54.731 "compare": false, 00:30:54.731 "compare_and_write": false, 00:30:54.731 "abort": false, 00:30:54.731 "seek_hole": true, 00:30:54.731 "seek_data": true, 00:30:54.731 "copy": false, 00:30:54.731 "nvme_iov_md": false 00:30:54.731 }, 00:30:54.731 "driver_specific": { 00:30:54.731 "lvol": { 00:30:54.731 "lvol_store_uuid": "8872ed3d-895b-459e-b003-17c864dd3aa8", 00:30:54.731 "base_bdev": "aio_bdev", 00:30:54.731 "thin_provision": false, 00:30:54.731 "num_allocated_clusters": 38, 00:30:54.731 "snapshot": false, 00:30:54.731 "clone": false, 00:30:54.731 "esnap_clone": false 00:30:54.731 } 00:30:54.731 } 00:30:54.731 } 00:30:54.731 ] 00:30:54.731 10:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:30:54.731 10:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8872ed3d-895b-459e-b003-17c864dd3aa8 00:30:54.731 10:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:30:54.990 10:09:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:30:54.990 10:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8872ed3d-895b-459e-b003-17c864dd3aa8 00:30:54.990 10:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:30:55.250 10:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:30:55.250 10:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 333f46ba-6125-4415-ac7f-c0b3a8041674 00:30:55.509 10:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8872ed3d-895b-459e-b003-17c864dd3aa8 00:30:55.509 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:55.768 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:55.768 00:30:55.768 real 0m15.776s 00:30:55.768 user 0m15.276s 00:30:55.768 sys 0m1.512s 00:30:55.768 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:55.768 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:30:55.768 ************************************ 00:30:55.768 END TEST lvs_grow_clean 00:30:55.768 ************************************ 00:30:55.768 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:30:55.768 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:55.768 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:55.768 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:56.026 ************************************ 00:30:56.026 START TEST lvs_grow_dirty 00:30:56.026 ************************************ 00:30:56.026 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:30:56.026 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:30:56.026 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:30:56.026 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:30:56.026 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:30:56.026 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:30:56.026 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:30:56.026 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:56.026 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:56.026 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:56.026 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:30:56.285 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:30:56.285 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=c879859c-da50-4811-a6b1-3d81679dd036 00:30:56.285 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c879859c-da50-4811-a6b1-3d81679dd036 00:30:56.285 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:30:56.544 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:30:56.544 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:30:56.544 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c879859c-da50-4811-a6b1-3d81679dd036 lvol 150 00:30:56.803 10:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=a94c3dba-5bb1-4bb6-b202-f3263eea66ee 00:30:56.803 10:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:56.803 10:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:30:56.803 [2024-12-11 10:09:06.363012] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:30:56.803 [2024-12-11 10:09:06.363145] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:30:56.803 true 00:30:57.061 10:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c879859c-da50-4811-a6b1-3d81679dd036 00:30:57.062 10:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:30:57.062 10:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:30:57.062 10:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:57.320 10:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a94c3dba-5bb1-4bb6-b202-f3263eea66ee 00:30:57.579 10:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:57.579 [2024-12-11 10:09:07.127438] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:57.839 10:09:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:57.839 10:09:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=289784 00:30:57.839 10:09:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:57.839 10:09:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:30:57.839 10:09:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 289784 /var/tmp/bdevperf.sock 00:30:57.839 10:09:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 289784 ']' 00:30:57.839 10:09:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:57.839 10:09:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:57.839 10:09:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:57.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
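At this point the dirty-run lvol has been exported over NVMe/TCP (subsystem, namespace, listener), and a second bdevperf process is being started that will attach to it as an initiator. The full export-and-attach sequence, condensed from the RPCs in this trace ($lvol is a94c3dba-5bb1-4bb6-b202-f3263eea66ee; rpc.py abbreviates scripts/rpc.py):

    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 $lvol
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # initiator side, via bdevperf's RPC socket (the attach is traced a few lines below):
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0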
00:30:57.839 10:09:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:57.839 10:09:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:57.839 [2024-12-11 10:09:07.374703] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:30:57.839 [2024-12-11 10:09:07.374751] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid289784 ] 00:30:58.098 [2024-12-11 10:09:07.452690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:58.098 [2024-12-11 10:09:07.492842] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:58.098 10:09:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:58.098 10:09:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:30:58.098 10:09:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:30:58.666 Nvme0n1 00:30:58.666 10:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:30:58.666 [ 00:30:58.666 { 00:30:58.666 "name": "Nvme0n1", 00:30:58.666 "aliases": [ 00:30:58.666 "a94c3dba-5bb1-4bb6-b202-f3263eea66ee" 00:30:58.666 ], 00:30:58.666 "product_name": "NVMe disk", 00:30:58.666 "block_size": 4096, 00:30:58.666 "num_blocks": 38912, 00:30:58.666 "uuid": "a94c3dba-5bb1-4bb6-b202-f3263eea66ee", 00:30:58.666 "numa_id": 1, 00:30:58.666 "assigned_rate_limits": { 00:30:58.666 "rw_ios_per_sec": 0, 00:30:58.666 "rw_mbytes_per_sec": 0, 00:30:58.666 "r_mbytes_per_sec": 0, 00:30:58.666 "w_mbytes_per_sec": 0 00:30:58.666 }, 00:30:58.666 "claimed": false, 00:30:58.666 "zoned": false, 00:30:58.666 "supported_io_types": { 00:30:58.666 "read": true, 00:30:58.666 "write": true, 00:30:58.666 "unmap": true, 00:30:58.666 "flush": true, 00:30:58.666 "reset": true, 00:30:58.666 "nvme_admin": true, 00:30:58.666 "nvme_io": true, 00:30:58.666 "nvme_io_md": false, 00:30:58.666 "write_zeroes": true, 00:30:58.666 "zcopy": false, 00:30:58.666 "get_zone_info": false, 00:30:58.666 "zone_management": false, 00:30:58.666 "zone_append": false, 00:30:58.666 "compare": true, 00:30:58.666 "compare_and_write": true, 00:30:58.666 "abort": true, 00:30:58.666 "seek_hole": false, 00:30:58.666 "seek_data": false, 00:30:58.666 "copy": true, 00:30:58.666 "nvme_iov_md": false 00:30:58.666 }, 00:30:58.666 "memory_domains": [ 00:30:58.666 { 00:30:58.666 "dma_device_id": "system", 00:30:58.666 "dma_device_type": 1 00:30:58.666 } 00:30:58.666 ], 00:30:58.666 "driver_specific": { 00:30:58.666 "nvme": [ 00:30:58.666 { 00:30:58.666 "trid": { 00:30:58.666 "trtype": "TCP", 00:30:58.666 "adrfam": "IPv4", 00:30:58.666 "traddr": "10.0.0.2", 00:30:58.666 "trsvcid": "4420", 00:30:58.666 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:58.666 }, 00:30:58.666 "ctrlr_data": { 
00:30:58.666 "cntlid": 1, 00:30:58.666 "vendor_id": "0x8086", 00:30:58.666 "model_number": "SPDK bdev Controller", 00:30:58.666 "serial_number": "SPDK0", 00:30:58.666 "firmware_revision": "25.01", 00:30:58.666 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:58.666 "oacs": { 00:30:58.666 "security": 0, 00:30:58.666 "format": 0, 00:30:58.666 "firmware": 0, 00:30:58.666 "ns_manage": 0 00:30:58.666 }, 00:30:58.666 "multi_ctrlr": true, 00:30:58.666 "ana_reporting": false 00:30:58.666 }, 00:30:58.666 "vs": { 00:30:58.666 "nvme_version": "1.3" 00:30:58.666 }, 00:30:58.666 "ns_data": { 00:30:58.666 "id": 1, 00:30:58.666 "can_share": true 00:30:58.666 } 00:30:58.666 } 00:30:58.666 ], 00:30:58.666 "mp_policy": "active_passive" 00:30:58.666 } 00:30:58.666 } 00:30:58.666 ] 00:30:58.666 10:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=290020 00:30:58.666 10:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:30:58.666 10:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:58.925 Running I/O for 10 seconds... 00:30:59.862 Latency(us) 00:30:59.862 [2024-12-11T09:09:09.437Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:59.862 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:59.862 Nvme0n1 : 1.00 22733.00 88.80 0.00 0.00 0.00 0.00 0.00 00:30:59.862 [2024-12-11T09:09:09.437Z] =================================================================================================================== 00:30:59.862 [2024-12-11T09:09:09.437Z] Total : 22733.00 88.80 0.00 0.00 0.00 0.00 0.00 00:30:59.862 00:31:00.798 10:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c879859c-da50-4811-a6b1-3d81679dd036 00:31:00.798 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:00.798 Nvme0n1 : 2.00 23050.50 90.04 0.00 0.00 0.00 0.00 0.00 00:31:00.798 [2024-12-11T09:09:10.373Z] =================================================================================================================== 00:31:00.798 [2024-12-11T09:09:10.373Z] Total : 23050.50 90.04 0.00 0.00 0.00 0.00 0.00 00:31:00.798 00:31:00.798 true 00:31:01.056 10:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c879859c-da50-4811-a6b1-3d81679dd036 00:31:01.056 10:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:31:01.056 10:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:31:01.056 10:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:31:01.056 10:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 290020 00:31:01.993 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:01.993 Nvme0n1 : 3.00 
23114.00 90.29 0.00 0.00 0.00 0.00 0.00 00:31:01.993 [2024-12-11T09:09:11.568Z] =================================================================================================================== 00:31:01.993 [2024-12-11T09:09:11.568Z] Total : 23114.00 90.29 0.00 0.00 0.00 0.00 0.00 00:31:01.993 00:31:02.930 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:02.930 Nvme0n1 : 4.00 23225.25 90.72 0.00 0.00 0.00 0.00 0.00 00:31:02.930 [2024-12-11T09:09:12.505Z] =================================================================================================================== 00:31:02.930 [2024-12-11T09:09:12.505Z] Total : 23225.25 90.72 0.00 0.00 0.00 0.00 0.00 00:31:02.930 00:31:03.867 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:03.867 Nvme0n1 : 5.00 23324.00 91.11 0.00 0.00 0.00 0.00 0.00 00:31:03.867 [2024-12-11T09:09:13.442Z] =================================================================================================================== 00:31:03.867 [2024-12-11T09:09:13.442Z] Total : 23324.00 91.11 0.00 0.00 0.00 0.00 0.00 00:31:03.867 00:31:04.803 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:04.803 Nvme0n1 : 6.00 23394.83 91.39 0.00 0.00 0.00 0.00 0.00 00:31:04.803 [2024-12-11T09:09:14.378Z] =================================================================================================================== 00:31:04.803 [2024-12-11T09:09:14.378Z] Total : 23394.83 91.39 0.00 0.00 0.00 0.00 0.00 00:31:04.803 00:31:05.739 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:05.739 Nvme0n1 : 7.00 23427.29 91.51 0.00 0.00 0.00 0.00 0.00 00:31:05.739 [2024-12-11T09:09:15.314Z] =================================================================================================================== 00:31:05.739 [2024-12-11T09:09:15.314Z] Total : 23427.29 91.51 0.00 0.00 0.00 0.00 0.00 00:31:05.739 00:31:07.115 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:07.115 Nvme0n1 : 8.00 23483.38 91.73 0.00 0.00 0.00 0.00 0.00 00:31:07.115 [2024-12-11T09:09:16.690Z] =================================================================================================================== 00:31:07.115 [2024-12-11T09:09:16.690Z] Total : 23483.38 91.73 0.00 0.00 0.00 0.00 0.00 00:31:07.115 00:31:08.051 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:08.052 Nvme0n1 : 9.00 23512.89 91.85 0.00 0.00 0.00 0.00 0.00 00:31:08.052 [2024-12-11T09:09:17.627Z] =================================================================================================================== 00:31:08.052 [2024-12-11T09:09:17.627Z] Total : 23512.89 91.85 0.00 0.00 0.00 0.00 0.00 00:31:08.052 00:31:08.989 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:08.989 Nvme0n1 : 10.00 23549.20 91.99 0.00 0.00 0.00 0.00 0.00 00:31:08.989 [2024-12-11T09:09:18.564Z] =================================================================================================================== 00:31:08.989 [2024-12-11T09:09:18.564Z] Total : 23549.20 91.99 0.00 0.00 0.00 0.00 0.00 00:31:08.989 00:31:08.989 00:31:08.989 Latency(us) 00:31:08.989 [2024-12-11T09:09:18.564Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:08.989 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:08.989 Nvme0n1 : 10.01 23550.10 91.99 0.00 0.00 5432.20 3557.67 26838.55 00:31:08.989 
[2024-12-11T09:09:18.564Z] =================================================================================================================== 00:31:08.989 [2024-12-11T09:09:18.564Z] Total : 23550.10 91.99 0.00 0.00 5432.20 3557.67 26838.55 00:31:08.989 { 00:31:08.989 "results": [ 00:31:08.989 { 00:31:08.989 "job": "Nvme0n1", 00:31:08.989 "core_mask": "0x2", 00:31:08.989 "workload": "randwrite", 00:31:08.989 "status": "finished", 00:31:08.989 "queue_depth": 128, 00:31:08.989 "io_size": 4096, 00:31:08.989 "runtime": 10.005052, 00:31:08.989 "iops": 23550.10248822295, 00:31:08.989 "mibps": 91.9925878446209, 00:31:08.989 "io_failed": 0, 00:31:08.989 "io_timeout": 0, 00:31:08.989 "avg_latency_us": 5432.204599722717, 00:31:08.989 "min_latency_us": 3557.6685714285713, 00:31:08.989 "max_latency_us": 26838.55238095238 00:31:08.989 } 00:31:08.989 ], 00:31:08.989 "core_count": 1 00:31:08.989 } 00:31:08.989 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 289784 00:31:08.989 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 289784 ']' 00:31:08.989 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 289784 00:31:08.989 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:31:08.989 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:08.989 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 289784 00:31:08.989 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:08.989 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:08.989 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 289784' 00:31:08.989 killing process with pid 289784 00:31:08.989 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 289784 00:31:08.989 Received shutdown signal, test time was about 10.000000 seconds 00:31:08.989 00:31:08.989 Latency(us) 00:31:08.989 [2024-12-11T09:09:18.564Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:08.989 [2024-12-11T09:09:18.564Z] =================================================================================================================== 00:31:08.989 [2024-12-11T09:09:18.564Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:08.989 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 289784 00:31:08.989 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:09.248 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:31:09.509 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c879859c-da50-4811-a6b1-3d81679dd036 00:31:09.509 10:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:31:09.771 10:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:31:09.771 10:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:31:09.771 10:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 286614 00:31:09.771 10:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 286614 00:31:09.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 286614 Killed "${NVMF_APP[@]}" "$@" 00:31:09.771 10:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:31:09.771 10:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:31:09.771 10:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:09.771 10:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:09.771 10:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:09.771 10:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=292000 00:31:09.771 10:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 292000 00:31:09.771 10:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:31:09.771 10:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 292000 ']' 00:31:09.771 10:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:09.771 10:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:09.771 10:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:09.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:09.771 10:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:09.771 10:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:09.771 [2024-12-11 10:09:19.223457] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:09.771 [2024-12-11 10:09:19.224304] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:31:09.771 [2024-12-11 10:09:19.224340] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:09.771 [2024-12-11 10:09:19.286498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:09.771 [2024-12-11 10:09:19.323501] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:09.771 [2024-12-11 10:09:19.323532] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:09.771 [2024-12-11 10:09:19.323539] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:09.771 [2024-12-11 10:09:19.323546] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:09.771 [2024-12-11 10:09:19.323551] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:09.771 [2024-12-11 10:09:19.324077] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:31:10.030 [2024-12-11 10:09:19.391182] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:10.030 [2024-12-11 10:09:19.391382] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
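This is the dirty half of the test: the previous target was killed with SIGKILL (the kill -9 286614 above) while the grown lvstore still had unflushed metadata, and a fresh nvmf_tgt has just come up in interrupt mode. Re-creating the AIO bdev on the same file is what triggers blobstore recovery — the bs_recover / "Recover: blob" notices in the next lines — after which the lvol and the grown geometry must survive intact. The verification sequence, using the RPCs shown below ($lvs is c879859c-da50-4811-a6b1-3d81679dd036, $lvol is a94c3dba-5bb1-4bb6-b202-f3263eea66ee):

    rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096   # re-attach; replays blobstore recovery
    rpc.py bdev_get_bdevs -b $lvol -t 2000                           # lvol reappears, 38 clusters still allocated
    rpc.py bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].free_clusters'        # expect 61 (99 - 38)
    rpc.py bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].total_data_clusters'  # expect 99: the grow survived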
00:31:10.030 10:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:10.030 10:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:31:10.030 10:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:10.030 10:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:10.030 10:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:10.030 10:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:10.030 10:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:10.289 [2024-12-11 10:09:19.641411] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:31:10.289 [2024-12-11 10:09:19.641621] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:31:10.289 [2024-12-11 10:09:19.641704] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:31:10.289 10:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:31:10.289 10:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev a94c3dba-5bb1-4bb6-b202-f3263eea66ee 00:31:10.289 10:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=a94c3dba-5bb1-4bb6-b202-f3263eea66ee 00:31:10.289 10:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:10.289 10:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:31:10.289 10:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:10.289 10:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:10.289 10:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:10.289 10:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a94c3dba-5bb1-4bb6-b202-f3263eea66ee -t 2000 00:31:10.547 [ 00:31:10.547 { 00:31:10.547 "name": "a94c3dba-5bb1-4bb6-b202-f3263eea66ee", 00:31:10.547 "aliases": [ 00:31:10.547 "lvs/lvol" 00:31:10.547 ], 00:31:10.547 "product_name": "Logical Volume", 00:31:10.547 "block_size": 4096, 00:31:10.547 "num_blocks": 38912, 00:31:10.547 "uuid": "a94c3dba-5bb1-4bb6-b202-f3263eea66ee", 00:31:10.547 "assigned_rate_limits": { 00:31:10.547 "rw_ios_per_sec": 0, 00:31:10.547 "rw_mbytes_per_sec": 0, 00:31:10.547 
"r_mbytes_per_sec": 0, 00:31:10.547 "w_mbytes_per_sec": 0 00:31:10.547 }, 00:31:10.547 "claimed": false, 00:31:10.547 "zoned": false, 00:31:10.547 "supported_io_types": { 00:31:10.547 "read": true, 00:31:10.547 "write": true, 00:31:10.547 "unmap": true, 00:31:10.547 "flush": false, 00:31:10.547 "reset": true, 00:31:10.547 "nvme_admin": false, 00:31:10.547 "nvme_io": false, 00:31:10.547 "nvme_io_md": false, 00:31:10.547 "write_zeroes": true, 00:31:10.547 "zcopy": false, 00:31:10.547 "get_zone_info": false, 00:31:10.547 "zone_management": false, 00:31:10.547 "zone_append": false, 00:31:10.547 "compare": false, 00:31:10.547 "compare_and_write": false, 00:31:10.547 "abort": false, 00:31:10.547 "seek_hole": true, 00:31:10.547 "seek_data": true, 00:31:10.547 "copy": false, 00:31:10.547 "nvme_iov_md": false 00:31:10.547 }, 00:31:10.547 "driver_specific": { 00:31:10.547 "lvol": { 00:31:10.547 "lvol_store_uuid": "c879859c-da50-4811-a6b1-3d81679dd036", 00:31:10.547 "base_bdev": "aio_bdev", 00:31:10.547 "thin_provision": false, 00:31:10.547 "num_allocated_clusters": 38, 00:31:10.547 "snapshot": false, 00:31:10.547 "clone": false, 00:31:10.547 "esnap_clone": false 00:31:10.547 } 00:31:10.547 } 00:31:10.547 } 00:31:10.547 ] 00:31:10.547 10:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:31:10.547 10:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c879859c-da50-4811-a6b1-3d81679dd036 00:31:10.547 10:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:31:10.806 10:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:31:10.806 10:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c879859c-da50-4811-a6b1-3d81679dd036 00:31:10.806 10:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:31:11.065 10:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:31:11.065 10:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:11.065 [2024-12-11 10:09:20.596539] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:31:11.065 10:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c879859c-da50-4811-a6b1-3d81679dd036 00:31:11.065 10:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:31:11.065 10:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c879859c-da50-4811-a6b1-3d81679dd036 00:31:11.065 10:09:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:11.065 10:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:11.065 10:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:11.065 10:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:11.065 10:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:11.324 10:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:11.324 10:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:11.324 10:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:31:11.324 10:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c879859c-da50-4811-a6b1-3d81679dd036 00:31:11.324 request: 00:31:11.324 { 00:31:11.324 "uuid": "c879859c-da50-4811-a6b1-3d81679dd036", 00:31:11.324 "method": "bdev_lvol_get_lvstores", 00:31:11.324 "req_id": 1 00:31:11.324 } 00:31:11.324 Got JSON-RPC error response 00:31:11.324 response: 00:31:11.324 { 00:31:11.324 "code": -19, 00:31:11.324 "message": "No such device" 00:31:11.324 } 00:31:11.324 10:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:31:11.324 10:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:11.324 10:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:11.324 10:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:11.324 10:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:11.583 aio_bdev 00:31:11.583 10:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a94c3dba-5bb1-4bb6-b202-f3263eea66ee 00:31:11.583 10:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=a94c3dba-5bb1-4bb6-b202-f3263eea66ee 00:31:11.583 10:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:11.583 10:09:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:31:11.583 10:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:11.583 10:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:11.583 10:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:11.842 10:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a94c3dba-5bb1-4bb6-b202-f3263eea66ee -t 2000 00:31:11.842 [ 00:31:11.842 { 00:31:11.842 "name": "a94c3dba-5bb1-4bb6-b202-f3263eea66ee", 00:31:11.842 "aliases": [ 00:31:11.842 "lvs/lvol" 00:31:11.842 ], 00:31:11.842 "product_name": "Logical Volume", 00:31:11.842 "block_size": 4096, 00:31:11.842 "num_blocks": 38912, 00:31:11.842 "uuid": "a94c3dba-5bb1-4bb6-b202-f3263eea66ee", 00:31:11.842 "assigned_rate_limits": { 00:31:11.842 "rw_ios_per_sec": 0, 00:31:11.842 "rw_mbytes_per_sec": 0, 00:31:11.842 "r_mbytes_per_sec": 0, 00:31:11.842 "w_mbytes_per_sec": 0 00:31:11.842 }, 00:31:11.842 "claimed": false, 00:31:11.842 "zoned": false, 00:31:11.842 "supported_io_types": { 00:31:11.842 "read": true, 00:31:11.842 "write": true, 00:31:11.842 "unmap": true, 00:31:11.842 "flush": false, 00:31:11.842 "reset": true, 00:31:11.842 "nvme_admin": false, 00:31:11.842 "nvme_io": false, 00:31:11.842 "nvme_io_md": false, 00:31:11.842 "write_zeroes": true, 00:31:11.842 "zcopy": false, 00:31:11.842 "get_zone_info": false, 00:31:11.842 "zone_management": false, 00:31:11.842 "zone_append": false, 00:31:11.842 "compare": false, 00:31:11.842 "compare_and_write": false, 00:31:11.842 "abort": false, 00:31:11.842 "seek_hole": true, 00:31:11.842 "seek_data": true, 00:31:11.842 "copy": false, 00:31:11.842 "nvme_iov_md": false 00:31:11.842 }, 00:31:11.842 "driver_specific": { 00:31:11.842 "lvol": { 00:31:11.842 "lvol_store_uuid": "c879859c-da50-4811-a6b1-3d81679dd036", 00:31:11.842 "base_bdev": "aio_bdev", 00:31:11.842 "thin_provision": false, 00:31:11.842 "num_allocated_clusters": 38, 00:31:11.842 "snapshot": false, 00:31:11.842 "clone": false, 00:31:11.842 "esnap_clone": false 00:31:11.842 } 00:31:11.842 } 00:31:11.842 } 00:31:11.842 ] 00:31:12.101 10:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:31:12.101 10:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c879859c-da50-4811-a6b1-3d81679dd036 00:31:12.101 10:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:31:12.101 10:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:31:12.101 10:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c879859c-da50-4811-a6b1-3d81679dd036 00:31:12.101 10:09:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:31:12.360 10:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:31:12.360 10:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a94c3dba-5bb1-4bb6-b202-f3263eea66ee 00:31:12.619 10:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c879859c-da50-4811-a6b1-3d81679dd036 00:31:12.878 10:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:12.878 10:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:12.878 00:31:12.878 real 0m17.057s 00:31:12.878 user 0m34.412s 00:31:12.878 sys 0m3.925s 00:31:12.878 10:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:12.878 10:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:12.878 ************************************ 00:31:12.878 END TEST lvs_grow_dirty 00:31:12.878 ************************************ 00:31:13.137 10:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:31:13.137 10:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:31:13.137 10:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:31:13.137 10:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:31:13.137 10:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:31:13.137 10:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:31:13.137 10:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:31:13.137 10:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:31:13.137 10:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:31:13.137 nvmf_trace.0 00:31:13.137 10:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:31:13.137 10:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:31:13.137 10:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:13.137 10:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
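With the lvol, lvstore, and aio bdev torn down, the suite's exit trap (process_shm --id $NVMF_APP_SHM_ID, set earlier) packs the nvmf trace ring out of shared memory into the job's output directory, which is the tar line above. Both ways of using that trace come straight from notices earlier in this log ($output_dir below stands in for the full jenkins output path shown in the tar command):

# offline: archive the shm trace file exactly as process_shm does above
tar -C /dev/shm/ -cvzf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0
# live: snapshot events while the target is still running
spdk_trace -s nvmf -i 0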
00:31:13.137 10:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:13.137 10:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:31:13.137 10:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:13.137 10:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:13.137 rmmod nvme_tcp 00:31:13.137 rmmod nvme_fabrics 00:31:13.137 rmmod nvme_keyring 00:31:13.137 10:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:13.137 10:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:31:13.137 10:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:31:13.137 10:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 292000 ']' 00:31:13.137 10:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 292000 00:31:13.137 10:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 292000 ']' 00:31:13.137 10:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 292000 00:31:13.137 10:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:31:13.137 10:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:13.137 10:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 292000 00:31:13.137 10:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:13.137 10:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:13.137 10:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 292000' 00:31:13.137 killing process with pid 292000 00:31:13.137 10:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 292000 00:31:13.137 10:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 292000 00:31:13.397 10:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:13.397 10:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:13.397 10:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:13.397 10:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:31:13.397 10:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:31:13.397 10:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:13.397 10:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:31:13.397 10:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:13.397 10:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:13.397 10:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:13.397 10:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:13.397 10:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:15.933 10:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:15.933 00:31:15.933 real 0m43.473s 00:31:15.933 user 0m52.515s 00:31:15.933 sys 0m11.008s 00:31:15.933 10:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:15.933 10:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:15.933 ************************************ 00:31:15.933 END TEST nvmf_lvs_grow 00:31:15.933 ************************************ 00:31:15.933 10:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:31:15.933 10:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:15.933 10:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:15.933 10:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:15.933 ************************************ 00:31:15.933 START TEST nvmf_bdev_io_wait 00:31:15.933 ************************************ 00:31:15.933 10:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:31:15.933 * Looking for test storage... 
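run_test wraps each test script with timing and the START/END banners seen above, so the whole nvmf_bdev_io_wait run that follows reduces to one invocation (copied from the run_test line; it can in principle be replayed standalone against a built SPDK tree):

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode

The storage probe and lcov version checks that follow are the standard per-test preamble emitted before the script body runs.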
00:31:15.933 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:15.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.933 --rc genhtml_branch_coverage=1 00:31:15.933 --rc genhtml_function_coverage=1 00:31:15.933 --rc genhtml_legend=1 00:31:15.933 --rc geninfo_all_blocks=1 00:31:15.933 --rc geninfo_unexecuted_blocks=1 00:31:15.933 00:31:15.933 ' 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:15.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.933 --rc genhtml_branch_coverage=1 00:31:15.933 --rc genhtml_function_coverage=1 00:31:15.933 --rc genhtml_legend=1 00:31:15.933 --rc geninfo_all_blocks=1 00:31:15.933 --rc geninfo_unexecuted_blocks=1 00:31:15.933 00:31:15.933 ' 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:15.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.933 --rc genhtml_branch_coverage=1 00:31:15.933 --rc genhtml_function_coverage=1 00:31:15.933 --rc genhtml_legend=1 00:31:15.933 --rc geninfo_all_blocks=1 00:31:15.933 --rc geninfo_unexecuted_blocks=1 00:31:15.933 00:31:15.933 ' 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:15.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.933 --rc genhtml_branch_coverage=1 00:31:15.933 --rc genhtml_function_coverage=1 00:31:15.933 --rc genhtml_legend=1 00:31:15.933 --rc geninfo_all_blocks=1 00:31:15.933 --rc 
geninfo_unexecuted_blocks=1 00:31:15.933 00:31:15.933 ' 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:15.933 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:15.934 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:15.934 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:15.934 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:15.934 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:15.934 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:15.934 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:15.934 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:15.934 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:15.934 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:31:15.934 10:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:22.504 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:22.504 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:31:22.504 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:22.504 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
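Sourcing test/nvmf/common.sh above pins the fabric constants for the rest of the run: ports 4420-4422, serial SPDKISFASTANDAWESOME, subsystem NQN nqn.2016-06.io.spdk:testnqn, and a host NQN generated per machine. The host identity comes from nvme-cli, as in the gen-hostnqn call recorded above (the resulting uuid doubles as NVME_HOSTID):

nvme gen-hostnqn    # emits nqn.2014-08.org.nvmexpress:uuid:<host uuid>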
00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
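gather_supported_nvmf_pci_devs classifies NICs purely by PCI vendor:device pairs: 0x8086 0x1592/0x159b land in the e810 array, 0x8086 0x37d2 in x722, and the 0x15b3 entries in mlx; the "Found 0000:af:00.x (0x8086 - 0x159b)" lines just below show the e810 entries matching on this rig. One way to reproduce the classification by hand, using lspci's vendor:device filter (an aside, not part of the harness):

# E810 functions, as matched by the e810 table above
lspci -d 8086:159b
lspci -d 8086:1592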
00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:22.505 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:22.505 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:22.505 Found net devices under 0000:af:00.0: cvl_0_0 00:31:22.505 
10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:22.505 Found net devices under 0000:af:00.1: cvl_0_1 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:22.505 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:22.505 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:22.505 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:31:22.505 00:31:22.505 --- 10.0.0.2 ping statistics --- 00:31:22.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:22.505 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:31:22.506 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:22.506 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
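nvmf_tcp_init splits the two E810 ports across a network namespace: cvl_0_0 becomes the target interface at 10.0.0.2 inside cvl_0_0_ns_spdk, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and the pings in both directions prove the path. Condensed from the exact commands above:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                             # root ns -> target ns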
00:31:22.506 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:31:22.506 00:31:22.506 --- 10.0.0.1 ping statistics --- 00:31:22.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:22.506 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:31:22.506 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:22.506 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:31:22.506 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:22.506 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:22.506 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:22.506 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:22.506 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:22.506 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:22.506 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:22.506 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:31:22.506 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:22.506 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:22.506 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:22.506 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=296512 00:31:22.506 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 296512 00:31:22.506 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:31:22.506 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 296512 ']' 00:31:22.506 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:22.506 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:22.506 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:22.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
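nvmfappstart stores the new pid (296512 here) and waitforlisten then blocks until the RPC socket /var/tmp/spdk.sock accepts connections. The launch itself is recorded verbatim above; note the binary runs inside the target namespace via NVMF_TARGET_NS_CMD:

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
nvmfpid=$!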
00:31:22.506 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:22.506 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:22.506 [2024-12-11 10:09:31.946995] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:22.506 [2024-12-11 10:09:31.947849] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:31:22.506 [2024-12-11 10:09:31.947880] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:22.506 [2024-12-11 10:09:32.035969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:22.506 [2024-12-11 10:09:32.077110] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:22.506 [2024-12-11 10:09:32.077150] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:22.506 [2024-12-11 10:09:32.077158] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:22.506 [2024-12-11 10:09:32.077164] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:22.506 [2024-12-11 10:09:32.077169] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:22.765 [2024-12-11 10:09:32.078666] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:31:22.765 [2024-12-11 10:09:32.078693] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:31:22.765 [2024-12-11 10:09:32.078803] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:31:22.765 [2024-12-11 10:09:32.078804] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:31:22.765 [2024-12-11 10:09:32.079146] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
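Unlike the single-core lvs_grow target, this instance is given -m 0xF: the core mask carries one bit per CPU, so 0xF = binary 1111 selects cores 0-3, which is exactly the "Total cores available: 4" line and the four reactor notices above. A one-liner to check the arithmetic:

echo "obase=2; $((0xF))" | bc    # prints 1111 -> one bit each for cores 0,1,2,3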
00:31:23.333 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:23.333 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:31:23.333 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:23.333 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:23.333 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:23.333 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:23.333 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:31:23.333 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.333 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:23.333 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.333 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:31:23.333 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.333 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:23.333 [2024-12-11 10:09:32.883755] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:23.333 [2024-12-11 10:09:32.884644] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:23.333 [2024-12-11 10:09:32.884697] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:23.333 [2024-12-11 10:09:32.884840] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
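Because the target was held at --wait-for-rpc, the script can shrink the bdev_io pool before anything is initialized: bdev_set_options -p 5 -c 1 (pool size 5, per-thread cache 1, presumably kept deliberately tiny so that queued submissions exhaust the pool and exercise the bdev_io wait path this test is named for), after which framework_start_init resumes startup and each poll-group thread flips to interrupt mode. As a sketch, assuming rpc_cmd resolves to scripts/rpc.py calls against the default /var/tmp/spdk.sock, this sequence plus the target setup that follows in the log is:

./scripts/rpc.py bdev_set_options -p 5 -c 1
./scripts/rpc.py framework_start_init
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Four bdevperf instances (write, read, flush, unmap, per the WRITE/READ/FLUSH/UNMAP_PID variables below, each with -q 128 -o 4096) are then pointed at that listener through the JSON that gen_nvmf_target_json emits on fd 63.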
00:31:23.333 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.333 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:23.333 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.333 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:23.333 [2024-12-11 10:09:32.895528] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:23.593 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.593 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:23.593 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.593 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:23.593 Malloc0 00:31:23.593 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.593 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:23.593 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.593 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:23.593 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.593 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:23.593 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.593 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:23.593 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.593 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:23.593 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.593 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:23.593 [2024-12-11 10:09:32.967793] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:23.593 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.593 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=296755 00:31:23.593 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:31:23.593 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:31:23.593 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=296757 00:31:23.593 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:23.593 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:23.593 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:23.593 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:23.593 { 00:31:23.593 "params": { 00:31:23.593 "name": "Nvme$subsystem", 00:31:23.593 "trtype": "$TEST_TRANSPORT", 00:31:23.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:23.593 "adrfam": "ipv4", 00:31:23.593 "trsvcid": "$NVMF_PORT", 00:31:23.593 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:23.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:23.593 "hdgst": ${hdgst:-false}, 00:31:23.593 "ddgst": ${ddgst:-false} 00:31:23.593 }, 00:31:23.593 "method": "bdev_nvme_attach_controller" 00:31:23.593 } 00:31:23.593 EOF 00:31:23.593 )") 00:31:23.593 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:31:23.593 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:31:23.593 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=296759 00:31:23.593 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:23.593 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:23.593 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:23.593 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:31:23.593 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:23.593 { 00:31:23.593 "params": { 00:31:23.593 "name": "Nvme$subsystem", 00:31:23.593 "trtype": "$TEST_TRANSPORT", 00:31:23.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:23.593 "adrfam": "ipv4", 00:31:23.593 "trsvcid": "$NVMF_PORT", 00:31:23.593 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:23.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:23.593 "hdgst": ${hdgst:-false}, 00:31:23.593 "ddgst": ${ddgst:-false} 00:31:23.593 }, 00:31:23.593 "method": "bdev_nvme_attach_controller" 00:31:23.593 } 00:31:23.593 EOF 00:31:23.593 )") 00:31:23.593 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=296762 00:31:23.593 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:31:23.593 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:31:23.593 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:23.593 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:23.593 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:23.593 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:31:23.593 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:23.593 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:31:23.593 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:23.593 { 00:31:23.593 "params": { 00:31:23.593 "name": "Nvme$subsystem", 00:31:23.593 "trtype": "$TEST_TRANSPORT", 00:31:23.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:23.593 "adrfam": "ipv4", 00:31:23.593 "trsvcid": "$NVMF_PORT", 00:31:23.593 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:23.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:23.593 "hdgst": ${hdgst:-false}, 00:31:23.593 "ddgst": ${ddgst:-false} 00:31:23.593 }, 00:31:23.593 "method": "bdev_nvme_attach_controller" 00:31:23.593 } 00:31:23.593 EOF 00:31:23.593 )") 00:31:23.593 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:23.593 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:23.593 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:23.593 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:23.593 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:23.593 { 00:31:23.593 "params": { 00:31:23.593 "name": "Nvme$subsystem", 00:31:23.593 "trtype": "$TEST_TRANSPORT", 00:31:23.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:23.593 "adrfam": "ipv4", 00:31:23.593 "trsvcid": "$NVMF_PORT", 00:31:23.593 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:23.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:23.593 "hdgst": ${hdgst:-false}, 00:31:23.593 "ddgst": ${ddgst:-false} 00:31:23.593 }, 00:31:23.593 "method": "bdev_nvme_attach_controller" 00:31:23.593 } 00:31:23.593 EOF 00:31:23.593 )") 00:31:23.593 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:23.593 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 296755 00:31:23.593 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:23.593 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:31:23.594 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
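Each of the four heredocs above renders one bdev_nvme_attach_controller entry, and gen_nvmf_target_json streams the result to its bdevperf instance over /dev/fd/63. The rendered params are printed verbatim just below; the surrounding document is assumed to be the usual bdev-subsystem wrapper, roughly:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }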
00:31:23.594 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:31:23.594 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:23.594 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:23.594 "params": { 00:31:23.594 "name": "Nvme1", 00:31:23.594 "trtype": "tcp", 00:31:23.594 "traddr": "10.0.0.2", 00:31:23.594 "adrfam": "ipv4", 00:31:23.594 "trsvcid": "4420", 00:31:23.594 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:23.594 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:23.594 "hdgst": false, 00:31:23.594 "ddgst": false 00:31:23.594 }, 00:31:23.594 "method": "bdev_nvme_attach_controller" 00:31:23.594 }' 00:31:23.594 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:31:23.594 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:23.594 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:23.594 "params": { 00:31:23.594 "name": "Nvme1", 00:31:23.594 "trtype": "tcp", 00:31:23.594 "traddr": "10.0.0.2", 00:31:23.594 "adrfam": "ipv4", 00:31:23.594 "trsvcid": "4420", 00:31:23.594 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:23.594 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:23.594 "hdgst": false, 00:31:23.594 "ddgst": false 00:31:23.594 }, 00:31:23.594 "method": "bdev_nvme_attach_controller" 00:31:23.594 }' 00:31:23.594 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:23.594 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:23.594 "params": { 00:31:23.594 "name": "Nvme1", 00:31:23.594 "trtype": "tcp", 00:31:23.594 "traddr": "10.0.0.2", 00:31:23.594 "adrfam": "ipv4", 00:31:23.594 "trsvcid": "4420", 00:31:23.594 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:23.594 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:23.594 "hdgst": false, 00:31:23.594 "ddgst": false 00:31:23.594 }, 00:31:23.594 "method": "bdev_nvme_attach_controller" 00:31:23.594 }' 00:31:23.594 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:23.594 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:23.594 "params": { 00:31:23.594 "name": "Nvme1", 00:31:23.594 "trtype": "tcp", 00:31:23.594 "traddr": "10.0.0.2", 00:31:23.594 "adrfam": "ipv4", 00:31:23.594 "trsvcid": "4420", 00:31:23.594 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:23.594 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:23.594 "hdgst": false, 00:31:23.594 "ddgst": false 00:31:23.594 }, 00:31:23.594 "method": "bdev_nvme_attach_controller" 00:31:23.594 }' 00:31:23.594 [2024-12-11 10:09:33.018819] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:31:23.594 [2024-12-11 10:09:33.018868] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:31:23.594 [2024-12-11 10:09:33.022093] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
00:31:23.594 [2024-12-11 10:09:33.022132] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:31:23.594 [2024-12-11 10:09:33.022169] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:31:23.594 [2024-12-11 10:09:33.022204] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:31:23.594 [2024-12-11 10:09:33.024424] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:31:23.594 [2024-12-11 10:09:33.024471] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:31:23.853 [2024-12-11 10:09:33.210932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:23.853 [2024-12-11 10:09:33.255124] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:31:23.853 [2024-12-11 10:09:33.300734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:23.853 [2024-12-11 10:09:33.345162] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:31:23.853 [2024-12-11 10:09:33.417399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:24.111 [2024-12-11 10:09:33.457878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:24.111 [2024-12-11 10:09:33.476315] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:31:24.111 [2024-12-11 10:09:33.499402] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:31:24.111 Running I/O for 1 seconds... 00:31:24.111 Running I/O for 1 seconds... 00:31:24.111 Running I/O for 1 seconds... 00:31:24.370 Running I/O for 1 seconds... 
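All four bdevperf jobs share one invocation pattern, differing only in core mask, shm id, and workload (write/read/flush/unmap on masks 0x10/0x20/0x40/0x80). Annotated, using the write instance from the trace:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 0x10 -i 1 \        # core mask and shm id (-m 0x20 -i 2, -m 0x40 -i 3, -m 0x80 -i 4 for the others)
        --json /dev/fd/63 \   # attach config streamed in from gen_nvmf_target_json
        -q 128 \              # queue depth per job
        -o 4096 \             # I/O size in bytes
        -w write \            # workload type
        -t 1 \                # run time: 1 second
        -s 256                # DPDK memory size in MB (hence -m 256 in the EAL parameters above)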
00:31:25.305 8773.00 IOPS, 34.27 MiB/s
00:31:25.305 Latency(us)
00:31:25.305 [2024-12-11T09:09:34.880Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:25.305 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:31:25.305 Nvme1n1 : 1.02 8782.42 34.31 0.00 0.00 14491.92 3620.08 22719.15
00:31:25.305 [2024-12-11T09:09:34.880Z] ===================================================================================================================
00:31:25.305 [2024-12-11T09:09:34.880Z] Total : 8782.42 34.31 0.00 0.00 14491.92 3620.08 22719.15
00:31:25.305 244680.00 IOPS, 955.78 MiB/s
00:31:25.305 Latency(us)
00:31:25.305 [2024-12-11T09:09:34.880Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:25.305 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:31:25.305 Nvme1n1 : 1.00 244306.15 954.32 0.00 0.00 521.29 220.40 1505.77
00:31:25.305 [2024-12-11T09:09:34.881Z] ===================================================================================================================
00:31:25.306 [2024-12-11T09:09:34.881Z] Total : 244306.15 954.32 0.00 0.00 521.29 220.40 1505.77
00:31:25.306 7921.00 IOPS, 30.94 MiB/s
00:31:25.306 Latency(us)
00:31:25.306 [2024-12-11T09:09:34.881Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:25.306 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:31:25.306 Nvme1n1 : 1.01 8003.28 31.26 0.00 0.00 15943.63 5211.67 24341.94
00:31:25.306 [2024-12-11T09:09:34.881Z] ===================================================================================================================
00:31:25.306 [2024-12-11T09:09:34.881Z] Total : 8003.28 31.26 0.00 0.00 15943.63 5211.67 24341.94
00:31:25.306 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 296757
00:31:25.306 14004.00 IOPS, 54.70 MiB/s
00:31:25.306 Latency(us)
00:31:25.306 [2024-12-11T09:09:34.881Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:25.306 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:31:25.306 Nvme1n1 : 1.00 14089.22 55.04 0.00 0.00 9066.13 2559.02 14105.84
00:31:25.306 [2024-12-11T09:09:34.881Z] ===================================================================================================================
00:31:25.306 [2024-12-11T09:09:34.881Z] Total : 14089.22 55.04 0.00 0.00 9066.13 2559.02 14105.84
00:31:25.306 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 296759
00:31:25.565 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 296762
00:31:25.565 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:25.565 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:25.565 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:31:25.565 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:25.565 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:31:25.565 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait --
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:31:25.565 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:25.565 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:31:25.565 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:25.565 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:31:25.565 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:25.565 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:25.565 rmmod nvme_tcp 00:31:25.565 rmmod nvme_fabrics 00:31:25.565 rmmod nvme_keyring 00:31:25.565 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:25.565 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:31:25.565 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:31:25.565 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 296512 ']' 00:31:25.565 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 296512 00:31:25.565 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 296512 ']' 00:31:25.565 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 296512 00:31:25.565 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:31:25.565 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:25.565 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 296512 00:31:25.565 10:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:25.565 10:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:25.565 10:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 296512' 00:31:25.565 killing process with pid 296512 00:31:25.565 10:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 296512 00:31:25.565 10:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 296512 00:31:25.825 10:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:25.825 10:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:25.825 10:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:25.825 10:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:31:25.825 10:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:31:25.825 
10:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:25.825 10:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:31:25.825 10:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:25.825 10:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:25.825 10:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:25.825 10:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:25.825 10:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:27.730 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:27.730 00:31:27.730 real 0m12.287s 00:31:27.730 user 0m15.666s 00:31:27.730 sys 0m7.177s 00:31:27.730 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:27.730 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:27.730 ************************************ 00:31:27.730 END TEST nvmf_bdev_io_wait 00:31:27.730 ************************************ 00:31:27.730 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:27.990 ************************************ 00:31:27.990 START TEST nvmf_queue_depth 00:31:27.990 ************************************ 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:31:27.990 * Looking for test storage... 
00:31:27.990 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:27.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.990 --rc genhtml_branch_coverage=1 00:31:27.990 --rc genhtml_function_coverage=1 00:31:27.990 --rc genhtml_legend=1 00:31:27.990 --rc geninfo_all_blocks=1 00:31:27.990 --rc geninfo_unexecuted_blocks=1 00:31:27.990 00:31:27.990 ' 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:27.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.990 --rc genhtml_branch_coverage=1 00:31:27.990 --rc genhtml_function_coverage=1 00:31:27.990 --rc genhtml_legend=1 00:31:27.990 --rc geninfo_all_blocks=1 00:31:27.990 --rc geninfo_unexecuted_blocks=1 00:31:27.990 00:31:27.990 ' 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:27.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.990 --rc genhtml_branch_coverage=1 00:31:27.990 --rc genhtml_function_coverage=1 00:31:27.990 --rc genhtml_legend=1 00:31:27.990 --rc geninfo_all_blocks=1 00:31:27.990 --rc geninfo_unexecuted_blocks=1 00:31:27.990 00:31:27.990 ' 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:27.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.990 --rc genhtml_branch_coverage=1 00:31:27.990 --rc genhtml_function_coverage=1 00:31:27.990 --rc genhtml_legend=1 00:31:27.990 --rc geninfo_all_blocks=1 00:31:27.990 --rc 
geninfo_unexecuted_blocks=1 00:31:27.990 00:31:27.990 ' 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.990 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.991 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.991 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:31:27.991 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.991 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:31:27.991 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:27.991 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:27.991 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:27.991 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:27.991 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:31:27.991 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:27.991 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:27.991 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:27.991 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:27.991 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:27.991 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:31:27.991 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:31:27.991 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:27.991 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:31:27.991 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:27.991 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:27.991 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:27.991 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:27.991 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:27.991 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:27.991 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:27.991 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:27.991 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:27.991 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:27.991 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:31:27.991 10:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:34.560 10:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:34.560 10:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:31:34.560 10:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:34.560 10:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:34.560 10:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:34.560 10:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:31:34.560 10:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:34.560 10:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:31:34.560 10:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:34.560 10:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:31:34.560 10:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:31:34.560 10:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:31:34.560 10:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:31:34.560 10:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:31:34.560 10:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:31:34.560 10:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:34.560 10:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:34.560 10:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:34.560 10:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:34.560 10:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:34.560 10:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:34.560 10:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:34.560 10:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:34.560 10:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:34.560 10:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:34.560 10:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:34.560 10:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:34.560 10:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:34.560 10:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:34.560 10:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:34.560 10:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:34.560 10:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:34.560 10:09:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:34.560 10:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:34.560 10:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:34.560 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:34.560 10:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:34.560 10:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:34.560 10:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:34.560 10:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:34.560 10:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:34.560 10:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:34.560 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:34.560 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:34.560 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:34.560 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:34.560 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:34.560 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:34.561 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:34.561 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:34.561 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:34.561 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:34.561 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:34.561 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:34.561 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:34.561 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:34.561 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:34.561 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:34.561 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:34.561 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 
00:31:34.561 Found net devices under 0000:af:00.0: cvl_0_0 00:31:34.561 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:34.561 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:34.561 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:34.561 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:34.561 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:34.561 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:34.561 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:34.561 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:34.561 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:34.561 Found net devices under 0000:af:00.1: cvl_0_1 00:31:34.561 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:34.561 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:34.561 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:31:34.561 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:34.561 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:34.561 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:34.561 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:34.561 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:34.561 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:34.561 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:34.561 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:34.561 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:34.561 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:34.561 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:34.561 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:34.561 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:34.561 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:34.561 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:34.561 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:34.561 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:34.561 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:34.561 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:34.561 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:34.820 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:34.820 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:34.820 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:34.820 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:34.820 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:34.820 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:34.820 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:34.820 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:31:34.820 00:31:34.820 --- 10.0.0.2 ping statistics --- 00:31:34.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:34.820 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:31:34.820 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:34.820 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:34.820 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:31:34.820 00:31:34.820 --- 10.0.0.1 ping statistics --- 00:31:34.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:34.820 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:31:34.820 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:34.820 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:31:34.820 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:34.820 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:34.820 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:34.820 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:34.820 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:34.820 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:34.820 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:34.820 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:31:34.820 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:34.820 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:34.820 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:34.820 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=301010 00:31:34.820 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 301010 00:31:34.820 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:31:34.820 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 301010 ']' 00:31:34.820 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:34.820 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:34.820 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:34.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
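Target and initiator share this one host, so nvmftestinit splits the two E810 ports across network namespaces before the target is launched: cvl_0_0 (10.0.0.2) moves into cvl_0_0_ns_spdk for the target, cvl_0_1 (10.0.0.1) stays in the root namespace for the initiator, and the two pings confirm the path between them. Condensed from the trace above:

    ip netns add cvl_0_0_ns_spdk                                      # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                         # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator address (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT      # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                  # target ns -> root ns

With that in place the target itself runs inside the namespace, which is why the nvmf_tgt invocation above is prefixed with ip netns exec cvl_0_0_ns_spdk.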
00:31:34.820 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:34.820 10:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:34.820 [2024-12-11 10:09:44.346112] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:34.820 [2024-12-11 10:09:44.347024] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:31:34.820 [2024-12-11 10:09:44.347059] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:35.079 [2024-12-11 10:09:44.431729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:35.079 [2024-12-11 10:09:44.470699] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:35.079 [2024-12-11 10:09:44.470735] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:35.079 [2024-12-11 10:09:44.470742] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:35.079 [2024-12-11 10:09:44.470748] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:35.079 [2024-12-11 10:09:44.470753] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:35.079 [2024-12-11 10:09:44.471305] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:31:35.079 [2024-12-11 10:09:44.538836] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:35.079 [2024-12-11 10:09:44.539048] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
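The trace above is the standard TCP init sequence from nvmf/common.sh: the target-side NIC is moved into a private network namespace, both ends get a /24 address, an iptables ACCEPT rule tagged SPDK_NVMF is inserted for port 4420, connectivity is verified with ping in both directions, and nvmf_tgt is started inside the namespace in interrupt mode. A condensed sketch of the same steps — interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addresses, and the nvmf_tgt flags are all taken from this run; only the shortened binary path is adjusted:

  # Move the target NIC into its own namespace and address both ends
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Allow NVMe/TCP traffic in, tagged so nvmftestfini can strip the rule later
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

  # Verify both directions before starting the target
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

  # Start the target inside the namespace with the flags traced above
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &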
00:31:35.646 10:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:35.646 10:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:31:35.646 10:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:35.646 10:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:35.646 10:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:35.904 10:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:35.904 10:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:35.904 10:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.905 10:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:35.905 [2024-12-11 10:09:45.239978] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:35.905 10:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.905 10:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:35.905 10:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.905 10:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:35.905 Malloc0 00:31:35.905 10:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.905 10:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:35.905 10:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.905 10:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:35.905 10:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.905 10:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:35.905 10:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.905 10:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:35.905 10:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.905 10:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:35.905 10:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
00:31:35.905 10:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:35.905 [2024-12-11 10:09:45.315941] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:35.905 10:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.905 10:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=301051 00:31:35.905 10:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:35.905 10:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:31:35.905 10:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 301051 /var/tmp/bdevperf.sock 00:31:35.905 10:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 301051 ']' 00:31:35.905 10:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:35.905 10:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:35.905 10:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:35.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:35.905 10:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:35.905 10:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:35.905 [2024-12-11 10:09:45.366667] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
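The rpc_cmd calls traced above configure the target over its default RPC socket and then point bdevperf at the new subsystem; the run whose output follows drives queue depth 1024 with 4 KiB verify I/O for 10 seconds. A sketch of roughly equivalent rpc.py invocations, assuming rpc_cmd wraps scripts/rpc.py as usual — the $RPC shorthand is introduced here for brevity, all arguments and socket paths are as logged:

  RPC=./scripts/rpc.py

  # Target-side configuration (default socket /var/tmp/spdk.sock)
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Initiator side: bdevperf on its own RPC socket, then attach and run
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests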
00:31:35.905 [2024-12-11 10:09:45.366709] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid301051 ] 00:31:35.905 [2024-12-11 10:09:45.444391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:36.163 [2024-12-11 10:09:45.485300] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:31:36.163 10:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:36.163 10:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:31:36.163 10:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:36.163 10:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.163 10:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:36.422 NVMe0n1 00:31:36.422 10:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.422 10:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:36.422 Running I/O for 10 seconds... 00:31:38.735 12052.00 IOPS, 47.08 MiB/s [2024-12-11T09:09:49.247Z] 12274.50 IOPS, 47.95 MiB/s [2024-12-11T09:09:50.184Z] 12312.33 IOPS, 48.10 MiB/s [2024-12-11T09:09:51.120Z] 12489.00 IOPS, 48.79 MiB/s [2024-12-11T09:09:52.056Z] 12478.20 IOPS, 48.74 MiB/s [2024-12-11T09:09:52.993Z] 12494.17 IOPS, 48.81 MiB/s [2024-12-11T09:09:53.929Z] 12564.29 IOPS, 49.08 MiB/s [2024-12-11T09:09:55.311Z] 12580.00 IOPS, 49.14 MiB/s [2024-12-11T09:09:56.248Z] 12620.89 IOPS, 49.30 MiB/s [2024-12-11T09:09:56.248Z] 12638.70 IOPS, 49.37 MiB/s 00:31:46.673 Latency(us) 00:31:46.673 [2024-12-11T09:09:56.248Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:46.673 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:31:46.673 Verification LBA range: start 0x0 length 0x4000 00:31:46.673 NVMe0n1 : 10.10 12615.75 49.28 0.00 0.00 80551.73 18225.25 59668.97 00:31:46.673 [2024-12-11T09:09:56.248Z] =================================================================================================================== 00:31:46.673 [2024-12-11T09:09:56.248Z] Total : 12615.75 49.28 0.00 0.00 80551.73 18225.25 59668.97 00:31:46.673 { 00:31:46.673 "results": [ 00:31:46.673 { 00:31:46.673 "job": "NVMe0n1", 00:31:46.673 "core_mask": "0x1", 00:31:46.673 "workload": "verify", 00:31:46.673 "status": "finished", 00:31:46.673 "verify_range": { 00:31:46.673 "start": 0, 00:31:46.673 "length": 16384 00:31:46.673 }, 00:31:46.673 "queue_depth": 1024, 00:31:46.673 "io_size": 4096, 00:31:46.673 "runtime": 10.099359, 00:31:46.673 "iops": 12615.751158068546, 00:31:46.673 "mibps": 49.28027796120526, 00:31:46.673 "io_failed": 0, 00:31:46.673 "io_timeout": 0, 00:31:46.673 "avg_latency_us": 80551.72797839463, 00:31:46.673 "min_latency_us": 18225.249523809525, 00:31:46.673 "max_latency_us": 59668.96761904762 00:31:46.673 } 
00:31:46.673 ], 00:31:46.673 "core_count": 1 00:31:46.673 } 00:31:46.673 10:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 301051 00:31:46.673 10:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 301051 ']' 00:31:46.673 10:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 301051 00:31:46.673 10:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:31:46.673 10:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:46.673 10:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 301051 00:31:46.673 10:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:46.673 10:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:46.673 10:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 301051' 00:31:46.673 killing process with pid 301051 00:31:46.673 10:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 301051 00:31:46.673 Received shutdown signal, test time was about 10.000000 seconds 00:31:46.673 00:31:46.673 Latency(us) 00:31:46.673 [2024-12-11T09:09:56.248Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:46.673 [2024-12-11T09:09:56.248Z] =================================================================================================================== 00:31:46.673 [2024-12-11T09:09:56.248Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:46.673 10:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 301051 00:31:46.933 10:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:31:46.933 10:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:31:46.933 10:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:46.933 10:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:31:46.933 10:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:46.933 10:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:31:46.933 10:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:46.933 10:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:46.933 rmmod nvme_tcp 00:31:46.933 rmmod nvme_fabrics 00:31:46.933 rmmod nvme_keyring 00:31:46.933 10:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:46.933 10:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:31:46.933 10:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:31:46.933 
10:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 301010 ']' 00:31:46.933 10:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 301010 00:31:46.933 10:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 301010 ']' 00:31:46.933 10:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 301010 00:31:46.933 10:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:31:46.933 10:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:46.933 10:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 301010 00:31:46.933 10:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:46.933 10:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:46.933 10:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 301010' 00:31:46.933 killing process with pid 301010 00:31:46.933 10:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 301010 00:31:46.933 10:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 301010 00:31:47.192 10:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:47.192 10:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:47.192 10:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:47.192 10:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:31:47.192 10:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:31:47.192 10:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:47.192 10:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:31:47.192 10:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:47.192 10:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:47.192 10:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:47.192 10:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:47.192 10:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:49.114 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:49.114 00:31:49.114 real 0m21.301s 00:31:49.114 user 0m23.365s 00:31:49.114 sys 0m6.965s 00:31:49.114 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:31:49.114 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:49.114 ************************************ 00:31:49.114 END TEST nvmf_queue_depth 00:31:49.114 ************************************ 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:49.458 ************************************ 00:31:49.458 START TEST nvmf_target_multipath 00:31:49.458 ************************************ 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:31:49.458 * Looking for test storage... 00:31:49.458 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:31:49.458 10:09:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:49.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.458 --rc genhtml_branch_coverage=1 00:31:49.458 --rc genhtml_function_coverage=1 00:31:49.458 --rc genhtml_legend=1 00:31:49.458 --rc geninfo_all_blocks=1 00:31:49.458 --rc geninfo_unexecuted_blocks=1 00:31:49.458 00:31:49.458 ' 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:49.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.458 --rc genhtml_branch_coverage=1 00:31:49.458 --rc genhtml_function_coverage=1 00:31:49.458 --rc genhtml_legend=1 00:31:49.458 --rc geninfo_all_blocks=1 00:31:49.458 --rc geninfo_unexecuted_blocks=1 00:31:49.458 00:31:49.458 ' 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:49.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.458 --rc genhtml_branch_coverage=1 00:31:49.458 --rc genhtml_function_coverage=1 00:31:49.458 --rc genhtml_legend=1 00:31:49.458 --rc geninfo_all_blocks=1 00:31:49.458 --rc 
geninfo_unexecuted_blocks=1 00:31:49.458 00:31:49.458 ' 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:49.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.458 --rc genhtml_branch_coverage=1 00:31:49.458 --rc genhtml_function_coverage=1 00:31:49.458 --rc genhtml_legend=1 00:31:49.458 --rc geninfo_all_blocks=1 00:31:49.458 --rc geninfo_unexecuted_blocks=1 00:31:49.458 00:31:49.458 ' 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.458 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.459 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.459 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:31:49.459 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.459 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:31:49.459 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:49.459 10:09:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:49.459 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:49.459 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:49.459 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:49.459 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:49.459 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:49.459 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:49.459 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:49.459 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:49.459 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:49.459 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:49.459 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:31:49.459 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:49.459 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:31:49.459 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:49.459 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:49.459 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:49.459 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:49.459 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:49.459 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:49.459 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:49.459 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:49.459 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:49.459 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:49.459 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:31:49.459 10:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
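nvmftestinit next discovers usable NICs by matching known PCI device IDs (the e810 parts 0x8086:0x1592 and 0x8086:0x159b among them) and resolving each PCI address to its kernel net device through sysfs, as the trace below shows. A condensed sketch of that resolution step — the PCI addresses are the two found in this run, and the echo format mirrors the logged messages:

  # For each candidate PCI address, find the net device(s) behind it in sysfs
  for pci in 0000:af:00.0 0000:af:00.1; do
      for dev in /sys/bus/pci/devices/$pci/net/*; do
          echo "Found net devices under $pci: ${dev##*/}"
      done
  done

With only one NIC pair available on this host the multipath test cannot exercise two paths, so after this discovery it prints "only one NIC for nvmf test" and exits cleanly through nvmftestfini.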
00:31:56.068 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:56.068 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:31:56.068 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:56.068 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:56.068 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:56.068 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:56.068 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:56.068 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:56.069 10:10:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:56.069 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:56.069 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:56.069 10:10:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:56.069 Found net devices under 0000:af:00.0: cvl_0_0 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:56.069 Found net devices under 0000:af:00.1: cvl_0_1 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:56.069 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:56.069 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:31:56.069 00:31:56.069 --- 10.0.0.2 ping statistics --- 00:31:56.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:56.069 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:31:56.069 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:56.069 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:56.069 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:31:56.069 00:31:56.069 --- 10.0.0.1 ping statistics --- 00:31:56.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:56.070 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:31:56.070 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:56.070 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:31:56.070 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:56.070 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:56.070 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:56.070 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:56.070 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:56.070 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:56.070 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:56.070 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:31:56.070 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:31:56.070 only one NIC for nvmf test 00:31:56.070 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:31:56.070 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:56.070 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:31:56.070 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:56.070 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:31:56.070 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:56.070 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:56.070 rmmod nvme_tcp 00:31:56.070 rmmod nvme_fabrics 00:31:56.329 rmmod nvme_keyring 00:31:56.329 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:56.329 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:31:56.329 10:10:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:31:56.329 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:31:56.329 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:56.329 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:56.329 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:56.329 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:31:56.329 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:31:56.329 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:56.329 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:31:56.329 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:56.329 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:56.329 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:56.329 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:56.329 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:58.233 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:58.233 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:31:58.233 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:31:58.233 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:58.233 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:31:58.233 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:58.233 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:31:58.233 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:58.233 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:58.233 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:58.233 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:31:58.233 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:31:58.233 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:31:58.233 10:10:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:31:58.233 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:31:58.233 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:31:58.233 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr
00:31:58.233 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save
00:31:58.233 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:31:58.233 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore
00:31:58.233 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:31:58.233 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns
00:31:58.233 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:58.233 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:31:58.233 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:58.233 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:31:58.233
00:31:58.233 real 0m9.064s
00:31:58.233 user 0m2.037s
00:31:58.233 sys 0m5.052s
00:31:58.233 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable
00:31:58.233 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:31:58.233 ************************************
00:31:58.234 END TEST nvmf_target_multipath ************************************
00:31:58.493 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode
00:31:58.493 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:31:58.493 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:31:58.493 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:31:58.493 ************************************
00:31:58.493 START TEST nvmf_zcopy
00:31:58.493 ************************************
00:31:58.493 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode
00:31:58.493 * Looking for test storage...
00:31:58.493 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:58.493 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:58.493 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:31:58.493 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:58.493 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:58.493 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:58.493 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:58.493 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:58.493 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:31:58.493 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:31:58.493 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:31:58.493 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:31:58.493 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:31:58.493 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:31:58.493 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:31:58.493 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:58.493 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:31:58.493 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:31:58.493 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:58.493 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:58.493 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:31:58.493 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:31:58.494 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:58.494 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:31:58.494 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:31:58.494 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:31:58.494 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:31:58.494 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:58.494 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:31:58.494 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:31:58.494 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:58.494 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:58.494 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:31:58.494 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:58.494 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:58.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:58.494 --rc genhtml_branch_coverage=1 00:31:58.494 --rc genhtml_function_coverage=1 00:31:58.494 --rc genhtml_legend=1 00:31:58.494 --rc geninfo_all_blocks=1 00:31:58.494 --rc geninfo_unexecuted_blocks=1 00:31:58.494 00:31:58.494 ' 00:31:58.494 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:58.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:58.494 --rc genhtml_branch_coverage=1 00:31:58.494 --rc genhtml_function_coverage=1 00:31:58.494 --rc genhtml_legend=1 00:31:58.494 --rc geninfo_all_blocks=1 00:31:58.494 --rc geninfo_unexecuted_blocks=1 00:31:58.494 00:31:58.494 ' 00:31:58.494 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:58.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:58.494 --rc genhtml_branch_coverage=1 00:31:58.494 --rc genhtml_function_coverage=1 00:31:58.494 --rc genhtml_legend=1 00:31:58.494 --rc geninfo_all_blocks=1 00:31:58.494 --rc geninfo_unexecuted_blocks=1 00:31:58.494 00:31:58.494 ' 00:31:58.494 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:58.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:58.494 --rc genhtml_branch_coverage=1 00:31:58.494 --rc genhtml_function_coverage=1 00:31:58.494 --rc genhtml_legend=1 00:31:58.494 --rc geninfo_all_blocks=1 00:31:58.494 --rc geninfo_unexecuted_blocks=1 00:31:58.494 00:31:58.494 ' 00:31:58.494 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:58.494 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:31:58.494 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:58.494 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:58.494 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:58.494 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:58.494 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:58.494 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:58.494 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:58.494 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:58.494 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:58.494 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:58.494 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:31:58.494 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:31:58.494 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:58.494 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:58.494 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:58.494 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:58.494 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:58.494 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:31:58.494 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:58.494 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:58.494 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:58.494 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.494 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.494 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.494 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:31:58.494 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.494 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:31:58.494 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:58.494 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:58.494 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:58.494 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:58.494 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:58.494 10:10:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:58.494 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:58.494 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:58.494 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:58.494 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:58.754 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:31:58.754 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:58.754 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:58.754 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:58.754 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:58.754 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:58.754 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:58.754 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:58.754 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:58.754 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:58.754 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:58.754 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:31:58.754 10:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:32:05.323 10:10:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:05.323 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:05.323 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:05.323 Found net devices under 0000:af:00.0: cvl_0_0 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:05.323 Found net devices under 0000:af:00.1: cvl_0_1 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:05.323 10:10:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:32:05.323 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:32:05.324 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:32:05.324 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms
00:32:05.324
00:32:05.324 --- 10.0.0.2 ping statistics ---
00:32:05.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:05.324 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms
00:32:05.324 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:32:05.324 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:32:05.324 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms
00:32:05.324
00:32:05.324 --- 10.0.0.1 ping statistics ---
00:32:05.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:05.324 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms
00:32:05.324 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:32:05.324 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0
00:32:05.324 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:32:05.324 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:32:05.324 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:32:05.324 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:32:05.324 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:32:05.324 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:32:05.324 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:32:05.324 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:32:05.324 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:32:05.324 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable
00:32:05.324 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:32:05.324 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=310621
00:32:05.324 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e
0xFFFF --interrupt-mode -m 0x2 00:32:05.324 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 310621 00:32:05.324 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 310621 ']' 00:32:05.324 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:05.324 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:05.324 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:05.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:05.324 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:05.324 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:05.324 [2024-12-11 10:10:14.803760] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:05.324 [2024-12-11 10:10:14.804707] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:32:05.324 [2024-12-11 10:10:14.804743] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:05.324 [2024-12-11 10:10:14.888451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:05.583 [2024-12-11 10:10:14.927465] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:05.583 [2024-12-11 10:10:14.927495] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:05.583 [2024-12-11 10:10:14.927502] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:05.583 [2024-12-11 10:10:14.927508] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:05.583 [2024-12-11 10:10:14.927513] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:05.583 [2024-12-11 10:10:14.928010] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:32:05.583 [2024-12-11 10:10:14.994180] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:05.583 [2024-12-11 10:10:14.994396] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
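For anyone reproducing this outside the CI job, the steps the harness performs here can be condensed as follows. This is a sketch, assuming the SPDK repository root as the working directory and scripts/rpc.py, the standard SPDK RPC client that the rpc_cmd helper wraps; interface names, NQNs and addresses are the ones visible in the trace above and in the rpc_cmd calls that follow below.

    # start the target inside the test namespace, in interrupt mode, on core mask 0x2
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &

    # provision it over RPC, mirroring zcopy.sh: a zero-copy TCP transport, a
    # subsystem backed by a 32 MB malloc bdev, and data + discovery listeners
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The --interrupt-mode flag is what produces the thread.c notices above: the app thread and the nvmf poll group are switched to event-driven rather than polled operation.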
00:32:05.583 10:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:05.583 10:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:32:05.583 10:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:05.583 10:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:05.583 10:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:05.583 10:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:05.583 10:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:32:05.583 10:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:32:05.583 10:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.583 10:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:05.583 [2024-12-11 10:10:15.060699] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:05.583 10:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.583 10:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:05.583 10:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.583 10:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:05.583 10:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.583 10:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:05.583 10:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.583 10:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:05.583 [2024-12-11 10:10:15.088913] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:05.583 10:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.583 10:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:05.583 10:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.583 10:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:05.583 10:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.583 10:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:32:05.583 10:10:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.583 10:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:05.583 malloc0 00:32:05.583 10:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.583 10:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:32:05.583 10:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.583 10:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:05.583 10:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.583 10:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:32:05.583 10:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:32:05.583 10:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:32:05.583 10:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:32:05.583 10:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:05.583 10:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:05.583 { 00:32:05.583 "params": { 00:32:05.583 "name": "Nvme$subsystem", 00:32:05.583 "trtype": "$TEST_TRANSPORT", 00:32:05.583 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:05.583 "adrfam": "ipv4", 00:32:05.583 "trsvcid": "$NVMF_PORT", 00:32:05.583 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:05.583 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:05.583 "hdgst": ${hdgst:-false}, 00:32:05.583 "ddgst": ${ddgst:-false} 00:32:05.583 }, 00:32:05.583 "method": "bdev_nvme_attach_controller" 00:32:05.583 } 00:32:05.583 EOF 00:32:05.583 )") 00:32:05.583 10:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:32:05.583 10:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:32:05.583 10:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:32:05.583 10:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:05.583 "params": { 00:32:05.583 "name": "Nvme1", 00:32:05.583 "trtype": "tcp", 00:32:05.583 "traddr": "10.0.0.2", 00:32:05.583 "adrfam": "ipv4", 00:32:05.583 "trsvcid": "4420", 00:32:05.583 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:05.583 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:05.583 "hdgst": false, 00:32:05.583 "ddgst": false 00:32:05.583 }, 00:32:05.583 "method": "bdev_nvme_attach_controller" 00:32:05.583 }' 00:32:05.841 [2024-12-11 10:10:15.185288] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
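The JSON document printed above is gen_nvmf_target_json's output: a single bdev_nvme_attach_controller stanza pointing bdevperf at the listener configured earlier, handed over on a process-substitution descriptor (/dev/fd/62). As a sketch, the same run could be reproduced by saving that blob to a file (nvme.json is a hypothetical name here) and invoking bdevperf directly:

    # 10-second verify workload, queue depth 128, 8 KiB I/Os, against the attached Nvme1n1 bdev
    ./build/examples/bdevperf --json nvme.json -t 10 -q 128 -w verify -o 8192

The EAL parameter dump and the per-second IOPS samples that follow come from exactly this invocation.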
00:32:05.841 [2024-12-11 10:10:15.185329] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid310643 ]
00:32:05.841 [2024-12-11 10:10:15.263259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:05.841 [2024-12-11 10:10:15.302535] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:32:06.100 Running I/O for 10 seconds...
00:32:08.413 8566.00 IOPS, 66.92 MiB/s [2024-12-11T09:10:18.925Z] 8621.50 IOPS, 67.36 MiB/s [2024-12-11T09:10:19.859Z] 8640.00 IOPS, 67.50 MiB/s [2024-12-11T09:10:20.795Z] 8648.75 IOPS, 67.57 MiB/s [2024-12-11T09:10:21.731Z] 8627.80 IOPS, 67.40 MiB/s [2024-12-11T09:10:23.107Z] 8628.00 IOPS, 67.41 MiB/s [2024-12-11T09:10:24.044Z] 8633.29 IOPS, 67.45 MiB/s [2024-12-11T09:10:24.980Z] 8631.38 IOPS, 67.43 MiB/s [2024-12-11T09:10:25.916Z] 8639.78 IOPS, 67.50 MiB/s [2024-12-11T09:10:25.916Z] 8639.40 IOPS, 67.50 MiB/s
00:32:16.341 Latency(us)
00:32:16.341 [2024-12-11T09:10:25.916Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:16.341 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:32:16.341 Verification LBA range: start 0x0 length 0x1000
00:32:16.341 Nvme1n1 : 10.01 8643.70 67.53 0.00 0.00 14766.44 2278.16 20971.52
00:32:16.341 [2024-12-11T09:10:25.916Z] ===================================================================================================================
00:32:16.341 [2024-12-11T09:10:25.916Z] Total : 8643.70 67.53 0.00 0.00 14766.44 2278.16 20971.52
00:32:16.341 10:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=312262
00:32:16.341 10:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:32:16.341 10:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:32:16.341 10:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:32:16.341 10:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:32:16.341 10:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:32:16.341 10:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:32:16.341 10:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:32:16.341 10:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:32:16.341 {
00:32:16.341 "params": {
00:32:16.341 "name": "Nvme$subsystem",
00:32:16.341 "trtype": "$TEST_TRANSPORT",
00:32:16.341 "traddr": "$NVMF_FIRST_TARGET_IP",
00:32:16.341 "adrfam": "ipv4",
00:32:16.341 "trsvcid": "$NVMF_PORT",
00:32:16.341 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:32:16.341 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:32:16.341 "hdgst": ${hdgst:-false},
00:32:16.341 "ddgst": ${ddgst:-false}
00:32:16.341 },
00:32:16.341 "method": "bdev_nvme_attach_controller"
00:32:16.341 }
00:32:16.341 EOF
00:32:16.341 )") [2024-12-11 10:10:25.856351] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already
in use 00:32:16.341 [2024-12-11 10:10:25.856382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.341 10:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:32:16.341 10:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:32:16.341 10:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:32:16.341 10:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:16.341 "params": { 00:32:16.341 "name": "Nvme1", 00:32:16.341 "trtype": "tcp", 00:32:16.341 "traddr": "10.0.0.2", 00:32:16.341 "adrfam": "ipv4", 00:32:16.341 "trsvcid": "4420", 00:32:16.341 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:16.341 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:16.341 "hdgst": false, 00:32:16.341 "ddgst": false 00:32:16.341 }, 00:32:16.341 "method": "bdev_nvme_attach_controller" 00:32:16.341 }' 00:32:16.341 [2024-12-11 10:10:25.868312] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.341 [2024-12-11 10:10:25.868325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.341 [2024-12-11 10:10:25.880311] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.341 [2024-12-11 10:10:25.880321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.341 [2024-12-11 10:10:25.892309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.341 [2024-12-11 10:10:25.892319] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.341 [2024-12-11 10:10:25.893551] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
00:32:16.341 [2024-12-11 10:10:25.893592] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid312262 ] 00:32:16.341 [2024-12-11 10:10:25.904311] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.341 [2024-12-11 10:10:25.904323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.601 [2024-12-11 10:10:25.916311] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.601 [2024-12-11 10:10:25.916321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.601 [2024-12-11 10:10:25.928311] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.601 [2024-12-11 10:10:25.928320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.601 [2024-12-11 10:10:25.940310] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.601 [2024-12-11 10:10:25.940319] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.601 [2024-12-11 10:10:25.952315] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.601 [2024-12-11 10:10:25.952333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.601 [2024-12-11 10:10:25.964310] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.601 [2024-12-11 10:10:25.964319] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.601 [2024-12-11 10:10:25.973848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:16.601 [2024-12-11 10:10:25.976309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.601 [2024-12-11 10:10:25.976331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.601 [2024-12-11 10:10:25.988314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.601 [2024-12-11 10:10:25.988329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.601 [2024-12-11 10:10:26.000309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.601 [2024-12-11 10:10:26.000319] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.601 [2024-12-11 10:10:26.012310] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.601 [2024-12-11 10:10:26.012321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.601 [2024-12-11 10:10:26.014157] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:32:16.601 [2024-12-11 10:10:26.024316] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.601 [2024-12-11 10:10:26.024331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.601 [2024-12-11 10:10:26.036323] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.601 [2024-12-11 10:10:26.036343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.601 [2024-12-11 10:10:26.048314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:32:16.601 [2024-12-11 10:10:26.048327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[repetitive output collapsed: the following two-line error pair recurs at roughly 12-16 ms intervals from [2024-12-11 10:10:26.060311] through [2024-12-11 10:10:30.129861], interleaved with the I/O progress lines kept below]
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:16.860 Running I/O for 5 seconds...
00:32:17.896 16801.00 IOPS, 131.26 MiB/s [2024-12-11T09:10:27.471Z]
00:32:18.933 16825.00 IOPS, 131.45 MiB/s [2024-12-11T09:10:28.508Z]
00:32:19.970 16813.67 IOPS, 131.36 MiB/s [2024-12-11T09:10:29.545Z]
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.748 [2024-12-11 10:10:30.144954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.748 [2024-12-11 10:10:30.144972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.748 [2024-12-11 10:10:30.159750] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.748 [2024-12-11 10:10:30.159769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.748 [2024-12-11 10:10:30.174374] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.748 [2024-12-11 10:10:30.174393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.748 [2024-12-11 10:10:30.188903] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.748 [2024-12-11 10:10:30.188921] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.748 [2024-12-11 10:10:30.203985] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.748 [2024-12-11 10:10:30.204003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.748 [2024-12-11 10:10:30.218450] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.748 [2024-12-11 10:10:30.218468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.748 [2024-12-11 10:10:30.233478] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.748 [2024-12-11 10:10:30.233496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.748 [2024-12-11 10:10:30.248157] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.748 [2024-12-11 10:10:30.248176] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.748 [2024-12-11 10:10:30.261239] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.748 [2024-12-11 10:10:30.261258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.748 [2024-12-11 10:10:30.273985] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.748 [2024-12-11 10:10:30.274003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.748 [2024-12-11 10:10:30.288610] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.748 [2024-12-11 10:10:30.288627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.748 16803.50 IOPS, 131.28 MiB/s [2024-12-11T09:10:30.323Z] [2024-12-11 10:10:30.302412] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.748 [2024-12-11 10:10:30.302430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.748 [2024-12-11 10:10:30.316860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.748 [2024-12-11 10:10:30.316879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.007 [2024-12-11 10:10:30.332530] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.007 [2024-12-11 10:10:30.332548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.007 [2024-12-11 
10:10:30.344598] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.007 [2024-12-11 10:10:30.344616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.007 [2024-12-11 10:10:30.358064] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.008 [2024-12-11 10:10:30.358082] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.008 [2024-12-11 10:10:30.373246] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.008 [2024-12-11 10:10:30.373264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.008 [2024-12-11 10:10:30.383795] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.008 [2024-12-11 10:10:30.383813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.008 [2024-12-11 10:10:30.398712] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.008 [2024-12-11 10:10:30.398730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.008 [2024-12-11 10:10:30.413584] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.008 [2024-12-11 10:10:30.413602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.008 [2024-12-11 10:10:30.428106] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.008 [2024-12-11 10:10:30.428125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.008 [2024-12-11 10:10:30.441871] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.008 [2024-12-11 10:10:30.441889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.008 [2024-12-11 10:10:30.456728] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.008 [2024-12-11 10:10:30.456746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.008 [2024-12-11 10:10:30.471891] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.008 [2024-12-11 10:10:30.471910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.008 [2024-12-11 10:10:30.486034] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.008 [2024-12-11 10:10:30.486051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.008 [2024-12-11 10:10:30.500925] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.008 [2024-12-11 10:10:30.500942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.008 [2024-12-11 10:10:30.516722] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.008 [2024-12-11 10:10:30.516739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.008 [2024-12-11 10:10:30.528779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.008 [2024-12-11 10:10:30.528796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.008 [2024-12-11 10:10:30.544661] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.008 [2024-12-11 10:10:30.544678] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.008 [2024-12-11 10:10:30.556737] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.008 [2024-12-11 10:10:30.556754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.008 [2024-12-11 10:10:30.570073] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.008 [2024-12-11 10:10:30.570091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.267 [2024-12-11 10:10:30.585517] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.267 [2024-12-11 10:10:30.585534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.267 [2024-12-11 10:10:30.600223] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.267 [2024-12-11 10:10:30.600241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.267 [2024-12-11 10:10:30.611003] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.267 [2024-12-11 10:10:30.611020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.267 [2024-12-11 10:10:30.625912] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.267 [2024-12-11 10:10:30.625929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.267 [2024-12-11 10:10:30.640810] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.267 [2024-12-11 10:10:30.640827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.267 [2024-12-11 10:10:30.657361] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.267 [2024-12-11 10:10:30.657379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.267 [2024-12-11 10:10:30.667495] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.267 [2024-12-11 10:10:30.667513] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.267 [2024-12-11 10:10:30.682086] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.267 [2024-12-11 10:10:30.682103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.267 [2024-12-11 10:10:30.696477] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.267 [2024-12-11 10:10:30.696499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.267 [2024-12-11 10:10:30.708906] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.267 [2024-12-11 10:10:30.708924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.267 [2024-12-11 10:10:30.722206] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.267 [2024-12-11 10:10:30.722230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.267 [2024-12-11 10:10:30.737201] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.267 [2024-12-11 10:10:30.737226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.267 [2024-12-11 10:10:30.752038] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.267 [2024-12-11 10:10:30.752056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.267 [2024-12-11 10:10:30.765368] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.267 [2024-12-11 10:10:30.765386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.267 [2024-12-11 10:10:30.780297] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.267 [2024-12-11 10:10:30.780314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.267 [2024-12-11 10:10:30.793690] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.267 [2024-12-11 10:10:30.793707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.267 [2024-12-11 10:10:30.808722] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.267 [2024-12-11 10:10:30.808739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.267 [2024-12-11 10:10:30.823983] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.267 [2024-12-11 10:10:30.824001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.267 [2024-12-11 10:10:30.837290] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.267 [2024-12-11 10:10:30.837308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.526 [2024-12-11 10:10:30.852199] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.526 [2024-12-11 10:10:30.852223] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.526 [2024-12-11 10:10:30.863018] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.526 [2024-12-11 10:10:30.863035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.526 [2024-12-11 10:10:30.878098] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.526 [2024-12-11 10:10:30.878116] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.526 [2024-12-11 10:10:30.892918] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.526 [2024-12-11 10:10:30.892935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.526 [2024-12-11 10:10:30.908641] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.526 [2024-12-11 10:10:30.908658] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.526 [2024-12-11 10:10:30.924119] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.526 [2024-12-11 10:10:30.924137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.526 [2024-12-11 10:10:30.938214] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.526 [2024-12-11 10:10:30.938236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.526 [2024-12-11 10:10:30.953049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.526 [2024-12-11 10:10:30.953067] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.526 [2024-12-11 10:10:30.968378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.526 [2024-12-11 10:10:30.968415] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.526 [2024-12-11 10:10:30.980087] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.526 [2024-12-11 10:10:30.980106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.526 [2024-12-11 10:10:30.994019] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.526 [2024-12-11 10:10:30.994037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.526 [2024-12-11 10:10:31.008980] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.526 [2024-12-11 10:10:31.008998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.526 [2024-12-11 10:10:31.020500] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.526 [2024-12-11 10:10:31.020517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.526 [2024-12-11 10:10:31.036686] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.526 [2024-12-11 10:10:31.036704] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.526 [2024-12-11 10:10:31.049196] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.526 [2024-12-11 10:10:31.049213] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.526 [2024-12-11 10:10:31.064101] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.526 [2024-12-11 10:10:31.064119] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.526 [2024-12-11 10:10:31.075836] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.526 [2024-12-11 10:10:31.075854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.526 [2024-12-11 10:10:31.089934] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.526 [2024-12-11 10:10:31.089954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.785 [2024-12-11 10:10:31.104518] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.785 [2024-12-11 10:10:31.104536] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.785 [2024-12-11 10:10:31.117123] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.785 [2024-12-11 10:10:31.117140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.785 [2024-12-11 10:10:31.132173] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.785 [2024-12-11 10:10:31.132191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.785 [2024-12-11 10:10:31.145673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.785 [2024-12-11 10:10:31.145690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.785 [2024-12-11 10:10:31.160509] 
00:32:21.786 [2024-12-11 10:10:31.299266] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.786 [2024-12-11 10:10:31.299285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:21.786 16788.80 IOPS, 131.16 MiB/s
00:32:21.786 Latency(us)
00:32:21.786 [2024-12-11T09:10:31.361Z] Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:32:21.786 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:32:21.786 Nvme1n1                     :       5.01   16791.52     131.18      0.00      0.00    7616.10    2012.89   13544.11
00:32:21.786 [2024-12-11T09:10:31.361Z] ===================================================================================================================
00:32:21.786 [2024-12-11T09:10:31.361Z] Total                       :              16791.52     131.18      0.00      0.00    7616.10    2012.89   13544.11
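A quick consistency check on the summary row (a back-of-the-envelope aside, not part of the test output): at this job's 8192-byte I/O size the MiB/s column follows directly from the IOPS column, since 16791.52 IOPS * 8192 bytes/IO = 131.18 MiB/s:

  $ echo 'scale=2; 16791.52 * 8192 / 1024 / 1024' | bc
  131.18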
00:32:21.786 [2024-12-11 10:10:31.308319] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.786 [2024-12-11 10:10:31.308336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the error pair repeats at ~12 ms intervals from 10:10:31.320 through 10:10:31.452 as the test winds down ...]
00:32:22.045 [2024-12-11 10:10:31.464311] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.045 [2024-12-11 10:10:31.464321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:22.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (312262) - No such process
00:32:22.045 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 312262
00:32:22.045 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:22.045 10:10:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.045 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:22.045 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.045 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:32:22.045 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.045 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:22.045 delay0 00:32:22.045 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.045 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:32:22.045 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.045 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:22.045 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.045 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:32:22.045 [2024-12-11 10:10:31.568648] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:32:30.160 Initializing NVMe Controllers 00:32:30.160 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:30.160 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:30.160 Initialization complete. Launching workers. 
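The setup just traced can be reproduced by hand with the same RPCs and the abort example binary. A minimal sketch, assuming a target is already serving nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420 with the malloc0 bdev created; the flags are the ones visible in the trace above, and the rpc.py invocation style is the standard SPDK one:

  # Back NSID 1 with an artificially slow delay bdev so the abort tool has in-flight I/O to cancel
  # (-r/-t/-w/-n are average/p99 read and write latencies, in microseconds).
  ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # Drive 5 seconds of queue-depth-64, 50/50 random read/write I/O at the namespace and abort it in flight.
  ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The "success ..., unsuccessful ..." summary below counts, roughly, how many submitted aborts caught their target I/O before it completed on the slow namespace.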
00:32:30.160 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 3574 00:32:30.160 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 3860, failed to submit 34 00:32:30.160 success 3745, unsuccessful 115, failed 0 00:32:30.160 10:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:32:30.160 10:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:32:30.160 10:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:30.160 10:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:32:30.160 10:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:30.160 10:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:32:30.160 10:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:30.160 10:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:30.160 rmmod nvme_tcp 00:32:30.160 rmmod nvme_fabrics 00:32:30.160 rmmod nvme_keyring 00:32:30.160 10:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:30.160 10:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:32:30.160 10:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:32:30.160 10:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 310621 ']' 00:32:30.160 10:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 310621 00:32:30.160 10:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 310621 ']' 00:32:30.160 10:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 310621 00:32:30.160 10:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:32:30.160 10:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:30.160 10:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 310621 00:32:30.160 10:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:30.160 10:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:30.160 10:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 310621' 00:32:30.160 killing process with pid 310621 00:32:30.160 10:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 310621 00:32:30.160 10:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 310621 00:32:30.160 10:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:30.160 10:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:30.160 10:10:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:30.160 10:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:32:30.160 10:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:32:30.160 10:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:30.160 10:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:32:30.160 10:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:30.161 10:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:30.161 10:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:30.161 10:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:30.161 10:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:31.097 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:31.097 00:32:31.097 real 0m32.798s 00:32:31.097 user 0m42.017s 00:32:31.097 sys 0m13.001s 00:32:31.097 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:31.097 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:31.097 ************************************ 00:32:31.097 END TEST nvmf_zcopy 00:32:31.097 ************************************ 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:31.357 ************************************ 00:32:31.357 START TEST nvmf_nmic 00:32:31.357 ************************************ 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:32:31.357 * Looking for test storage... 
00:32:31.357 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:31.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:31.357 --rc genhtml_branch_coverage=1 00:32:31.357 --rc genhtml_function_coverage=1 00:32:31.357 --rc genhtml_legend=1 00:32:31.357 --rc geninfo_all_blocks=1 00:32:31.357 --rc geninfo_unexecuted_blocks=1 00:32:31.357 00:32:31.357 ' 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:31.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:31.357 --rc genhtml_branch_coverage=1 00:32:31.357 --rc genhtml_function_coverage=1 00:32:31.357 --rc genhtml_legend=1 00:32:31.357 --rc geninfo_all_blocks=1 00:32:31.357 --rc geninfo_unexecuted_blocks=1 00:32:31.357 00:32:31.357 ' 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:31.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:31.357 --rc genhtml_branch_coverage=1 00:32:31.357 --rc genhtml_function_coverage=1 00:32:31.357 --rc genhtml_legend=1 00:32:31.357 --rc geninfo_all_blocks=1 00:32:31.357 --rc geninfo_unexecuted_blocks=1 00:32:31.357 00:32:31.357 ' 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:31.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:31.357 --rc genhtml_branch_coverage=1 00:32:31.357 --rc genhtml_function_coverage=1 00:32:31.357 --rc genhtml_legend=1 00:32:31.357 --rc geninfo_all_blocks=1 00:32:31.357 --rc geninfo_unexecuted_blocks=1 00:32:31.357 00:32:31.357 ' 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:31.357 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:31.358 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:31.358 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:32:31.358 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:31.358 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:32:31.358 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:31.358 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:31.358 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:31.358 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:31.358 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:31.358 10:10:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:31.358 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:31.358 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:31.358 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:31.358 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:31.617 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:31.617 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:31.617 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:32:31.617 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:31.617 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:31.617 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:31.617 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:31.617 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:31.617 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:31.617 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:31.617 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:31.617 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:31.617 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:31.617 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:32:31.617 10:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:38.188 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:38.188 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:32:38.188 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:38.188 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:38.188 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:38.188 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:38.188 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:38.188 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:32:38.188 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:38.188 10:10:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:32:38.188 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:32:38.188 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:32:38.188 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:32:38.188 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:32:38.188 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:32:38.188 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:38.188 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:38.188 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:38.188 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:38.188 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:38.188 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:38.188 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:38.188 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:38.188 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:38.188 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:38.188 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:38.188 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:38.188 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:38.188 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:38.188 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:38.188 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:38.188 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:38.188 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:38.188 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:38.188 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:38.188 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:38.188 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:38.188 10:10:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:38.188 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:38.188 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:38.188 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:38.188 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:38.189 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:38.189 Found net devices under 0000:af:00.0: cvl_0_0 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:38.189 
10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:38.189 Found net devices under 0000:af:00.1: cvl_0_1 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
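The nvmf_tcp_init trace around this point builds the back-to-back test topology: one E810 port (cvl_0_0) is moved into a private network namespace to host the NVMe-oF target at 10.0.0.2, while its peer port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1 (addresses are assigned just above; links come up and the firewall opens just below). A minimal standalone sketch of the same wiring, assuming the interface names and 10.0.0.0/24 addressing seen in this log — note the harness additionally tags its iptables rule with an SPDK_NVMF comment so teardown can find it later:

    # Flush any stale addressing from a previous run.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1

    # Give the target NIC its own namespace so target and initiator
    # traffic crosses the physical link instead of the local stack.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # Initiator side: 10.0.0.1; target side: 10.0.0.2 inside the namespace.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Open the NVMe/TCP listening port ahead of any default-drop rules.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Sanity-check reachability both ways before starting the target.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1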
00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:38.189 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:38.189 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms 00:32:38.189 00:32:38.189 --- 10.0.0.2 ping statistics --- 00:32:38.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:38.189 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:38.189 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:38.189 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:32:38.189 00:32:38.189 --- 10.0.0.1 ping statistics --- 00:32:38.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:38.189 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=318104 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 318104 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 318104 ']' 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:38.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:38.189 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:38.189 [2024-12-11 10:10:47.588775] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:38.189 [2024-12-11 10:10:47.589680] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:32:38.189 [2024-12-11 10:10:47.589716] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:38.189 [2024-12-11 10:10:47.677549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:38.189 [2024-12-11 10:10:47.718998] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:38.189 [2024-12-11 10:10:47.719036] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:38.189 [2024-12-11 10:10:47.719043] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:38.189 [2024-12-11 10:10:47.719049] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:38.189 [2024-12-11 10:10:47.719054] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:38.189 [2024-12-11 10:10:47.720465] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:32:38.189 [2024-12-11 10:10:47.720581] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:32:38.189 [2024-12-11 10:10:47.720711] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:32:38.189 [2024-12-11 10:10:47.720712] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:32:38.449 [2024-12-11 10:10:47.788668] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:38.449 [2024-12-11 10:10:47.789394] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:38.449 [2024-12-11 10:10:47.789748] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:32:38.449 [2024-12-11 10:10:47.790150] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:38.449 [2024-12-11 10:10:47.790192] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:39.017 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:39.017 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:32:39.017 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:39.017 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:39.017 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:39.017 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:39.017 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:39.017 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.017 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:39.017 [2024-12-11 10:10:48.473397] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:39.017 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.017 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:39.017 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.017 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:39.017 Malloc0 00:32:39.017 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.017 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:39.017 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.017 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:39.017 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.017 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:39.017 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.017 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:39.017 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.017 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:32:39.017 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.017 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:39.017 [2024-12-11 10:10:48.557693] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:39.017 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.017 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:32:39.017 test case1: single bdev can't be used in multiple subsystems 00:32:39.017 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:32:39.017 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.018 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:39.018 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.018 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:39.018 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.018 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:39.018 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.018 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:32:39.018 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:32:39.018 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.018 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:39.018 [2024-12-11 10:10:48.589092] bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:32:39.018 [2024-12-11 10:10:48.589116] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:32:39.018 [2024-12-11 10:10:48.589124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:39.277 request: 00:32:39.277 { 00:32:39.277 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:32:39.277 "namespace": { 00:32:39.277 "bdev_name": "Malloc0", 00:32:39.277 "no_auto_visible": false, 00:32:39.277 "hide_metadata": false 00:32:39.277 }, 00:32:39.277 "method": "nvmf_subsystem_add_ns", 00:32:39.277 "req_id": 1 00:32:39.277 } 00:32:39.277 Got JSON-RPC error response 00:32:39.277 response: 00:32:39.277 { 00:32:39.277 "code": -32602, 00:32:39.277 "message": "Invalid parameters" 00:32:39.277 } 00:32:39.277 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:39.277 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:32:39.277 10:10:48 
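Test case1 above exercises SPDK's bdev claim model: once Malloc0 is attached as a namespace of cnode1 it is claimed exclusive_write by the NVMe-oF target module, so attaching the same bdev to cnode2 must fail, and the JSON-RPC error (-32602, "Invalid parameters") is the result the script records as nmic_status=1 before declaring the expected failure. The rpc_cmd calls in the trace map onto plain scripts/rpc.py invocations; a hand-run sketch against an already-started nvmf_tgt, assuming you are in the SPDK tree and the default /var/tmp/spdk.sock RPC socket:

    # Create the TCP transport and a 64 MiB, 512-byte-block RAM bdev.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0

    # First subsystem claims Malloc0 and listens on 10.0.0.2:4420.
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Second subsystem: adding the already-claimed Malloc0 is expected to
    # fail with "Invalid parameters" (-32602), exactly as logged above.
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || echo 'expected failure'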
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:32:39.277 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:32:39.277 Adding namespace failed - expected result. 00:32:39.277 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:32:39.277 test case2: host connect to nvmf target in multiple paths 00:32:39.277 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:39.277 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.277 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:39.277 [2024-12-11 10:10:48.601184] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:39.277 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.277 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:39.535 10:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:32:39.793 10:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:32:39.793 10:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:32:39.793 10:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:39.793 10:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:32:39.793 10:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:32:41.797 10:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:41.797 10:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:41.797 10:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:41.797 10:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:32:41.797 10:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:41.797 10:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:32:41.797 10:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:32:41.797 [global] 00:32:41.797 thread=1 00:32:41.797 invalidate=1 
00:32:41.797 rw=write 00:32:41.797 time_based=1 00:32:41.797 runtime=1 00:32:41.797 ioengine=libaio 00:32:41.797 direct=1 00:32:41.797 bs=4096 00:32:41.797 iodepth=1 00:32:41.797 norandommap=0 00:32:41.797 numjobs=1 00:32:41.797 00:32:41.797 verify_dump=1 00:32:41.797 verify_backlog=512 00:32:41.797 verify_state_save=0 00:32:41.797 do_verify=1 00:32:41.797 verify=crc32c-intel 00:32:41.797 [job0] 00:32:41.797 filename=/dev/nvme0n1 00:32:41.797 Could not set queue depth (nvme0n1) 00:32:42.055 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:42.055 fio-3.35 00:32:42.055 Starting 1 thread 00:32:43.432 00:32:43.432 job0: (groupid=0, jobs=1): err= 0: pid=318892: Wed Dec 11 10:10:52 2024 00:32:43.432 read: IOPS=22, BW=90.2KiB/s (92.4kB/s)(92.0KiB/1020msec) 00:32:43.432 slat (nsec): min=9473, max=28700, avg=21825.30, stdev=3104.85 00:32:43.432 clat (usec): min=40632, max=41922, avg=40996.47, stdev=222.24 00:32:43.432 lat (usec): min=40642, max=41951, avg=41018.30, stdev=224.39 00:32:43.432 clat percentiles (usec): 00:32:43.432 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:32:43.432 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:43.432 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:43.432 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:32:43.432 | 99.99th=[41681] 00:32:43.432 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:32:43.432 slat (nsec): min=3743, max=24411, avg=9187.52, stdev=2576.44 00:32:43.432 clat (usec): min=116, max=292, avg=137.70, stdev=11.80 00:32:43.432 lat (usec): min=120, max=309, avg=146.89, stdev=13.50 00:32:43.432 clat percentiles (usec): 00:32:43.432 | 1.00th=[ 119], 5.00th=[ 124], 10.00th=[ 126], 20.00th=[ 129], 00:32:43.432 | 30.00th=[ 133], 40.00th=[ 137], 50.00th=[ 139], 60.00th=[ 141], 00:32:43.432 | 70.00th=[ 143], 80.00th=[ 143], 90.00th=[ 147], 95.00th=[ 151], 00:32:43.432 | 99.00th=[ 169], 99.50th=[ 180], 99.90th=[ 293], 99.95th=[ 293], 00:32:43.432 | 99.99th=[ 293] 00:32:43.432 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:32:43.432 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:43.432 lat (usec) : 250=95.51%, 500=0.19% 00:32:43.432 lat (msec) : 50=4.30% 00:32:43.432 cpu : usr=0.39%, sys=0.69%, ctx=535, majf=0, minf=1 00:32:43.432 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:43.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.432 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.432 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:43.432 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:43.432 00:32:43.432 Run status group 0 (all jobs): 00:32:43.432 READ: bw=90.2KiB/s (92.4kB/s), 90.2KiB/s-90.2KiB/s (92.4kB/s-92.4kB/s), io=92.0KiB (94.2kB), run=1020-1020msec 00:32:43.432 WRITE: bw=2008KiB/s (2056kB/s), 2008KiB/s-2008KiB/s (2056kB/s-2056kB/s), io=2048KiB (2097kB), run=1020-1020msec 00:32:43.432 00:32:43.432 Disk stats (read/write): 00:32:43.432 nvme0n1: ios=70/512, merge=0/0, ticks=838/65, in_queue=903, util=91.18% 00:32:43.432 10:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:43.432 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:32:43.432 10:10:52 
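The fio-wrapper invocation in test case2 appears to expand its -p/-i/-d/-t/-r/-v flags into an ordinary fio job file, which fio then dumps before running; reassembled from the [global]/[job0] options interleaved with timestamps above, the job would look like the following (the /dev/nvme0n1 filename is the namespace surfaced by the two nvme connect calls earlier, and the crc32c-intel verify pass accounts for the small READ group in the results):

    # One-second write job with CRC32C read-back verification, as dumped above
    # (fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v).
    [global]
    thread=1
    invalidate=1
    rw=write
    time_based=1
    runtime=1
    ioengine=libaio
    direct=1
    bs=4096
    iodepth=1
    norandommap=0
    numjobs=1
    verify_dump=1
    verify_backlog=512
    verify_state_save=0
    do_verify=1
    verify=crc32c-intel

    [job0]
    filename=/dev/nvme0n1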
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:43.432 10:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:32:43.432 10:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:43.433 10:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:43.433 10:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:43.433 10:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:43.433 10:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:32:43.433 10:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:32:43.433 10:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:32:43.433 10:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:43.433 10:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:32:43.433 10:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:43.433 10:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:32:43.433 10:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:43.433 10:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:43.433 rmmod nvme_tcp 00:32:43.433 rmmod nvme_fabrics 00:32:43.433 rmmod nvme_keyring 00:32:43.433 10:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:43.433 10:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:32:43.433 10:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:32:43.433 10:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 318104 ']' 00:32:43.433 10:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 318104 00:32:43.433 10:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 318104 ']' 00:32:43.433 10:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 318104 00:32:43.433 10:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:32:43.433 10:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:43.433 10:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 318104 00:32:43.433 10:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:43.433 10:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:43.433 10:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 318104' 00:32:43.433 killing process with pid 318104 00:32:43.433 10:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 318104 00:32:43.433 10:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 318104 00:32:43.692 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:43.692 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:43.692 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:43.692 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:32:43.692 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:32:43.692 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:43.692 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:32:43.692 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:43.692 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:43.692 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:43.692 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:43.692 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:46.227 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:46.227 00:32:46.227 real 0m14.476s 00:32:46.227 user 0m24.759s 00:32:46.227 sys 0m6.611s 00:32:46.227 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:46.227 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:46.227 ************************************ 00:32:46.227 END TEST nvmf_nmic 00:32:46.227 ************************************ 00:32:46.227 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:32:46.227 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:46.227 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:46.227 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:46.227 ************************************ 00:32:46.227 START TEST nvmf_fio_target 00:32:46.227 ************************************ 00:32:46.227 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:32:46.227 * Looking for test storage... 
00:32:46.227 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:46.227 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:46.227 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:32:46.227 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:46.227 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:46.227 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:46.227 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:46.227 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:46.227 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:32:46.227 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:32:46.227 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:32:46.227 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:32:46.227 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:32:46.227 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:32:46.227 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:32:46.227 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:46.227 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:32:46.227 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:32:46.227 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:46.227 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:46.227 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:32:46.227 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:32:46.227 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:46.227 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:32:46.227 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:32:46.227 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:32:46.227 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:32:46.227 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:46.227 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:32:46.227 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:32:46.227 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:46.227 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:46.227 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:32:46.227 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:46.227 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:46.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:46.227 --rc genhtml_branch_coverage=1 00:32:46.227 --rc genhtml_function_coverage=1 00:32:46.227 --rc genhtml_legend=1 00:32:46.227 --rc geninfo_all_blocks=1 00:32:46.228 --rc geninfo_unexecuted_blocks=1 00:32:46.228 00:32:46.228 ' 00:32:46.228 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:46.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:46.228 --rc genhtml_branch_coverage=1 00:32:46.228 --rc genhtml_function_coverage=1 00:32:46.228 --rc genhtml_legend=1 00:32:46.228 --rc geninfo_all_blocks=1 00:32:46.228 --rc geninfo_unexecuted_blocks=1 00:32:46.228 00:32:46.228 ' 00:32:46.228 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:46.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:46.228 --rc genhtml_branch_coverage=1 00:32:46.228 --rc genhtml_function_coverage=1 00:32:46.228 --rc genhtml_legend=1 00:32:46.228 --rc geninfo_all_blocks=1 00:32:46.228 --rc geninfo_unexecuted_blocks=1 00:32:46.228 00:32:46.228 ' 00:32:46.228 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:46.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:46.228 --rc genhtml_branch_coverage=1 00:32:46.228 --rc genhtml_function_coverage=1 00:32:46.228 --rc genhtml_legend=1 00:32:46.228 --rc geninfo_all_blocks=1 00:32:46.228 --rc geninfo_unexecuted_blocks=1 00:32:46.228 
00:32:46.228 ' 00:32:46.228 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:46.228 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:32:46.228 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:46.228 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:46.228 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:46.228 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:46.228 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:46.228 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:46.228 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:46.228 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:46.228 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:46.228 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:46.228 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:32:46.228 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:32:46.228 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:46.228 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:46.228 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:46.228 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:46.228 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:46.228 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:32:46.228 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:46.228 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:46.228 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:46.228 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:46.228 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:46.228 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:46.228 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:32:46.228 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:46.228 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:32:46.228 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:46.228 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:46.228 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:46.228 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:46.228 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:32:46.228 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:46.228 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:46.228 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:46.228 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:46.228 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:46.228 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:46.228 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:46.228 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:46.228 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:32:46.228 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:46.228 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:46.228 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:46.228 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:46.228 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:46.228 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:46.228 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:46.228 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:46.228 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:46.228 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:46.228 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:32:46.228 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:52.799 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:52.799 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:32:52.799 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:52.799 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:52.799 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:52.799 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:52.799 10:11:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:52.799 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:32:52.799 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:52.799 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:32:52.799 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:32:52.799 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:32:52.799 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:32:52.799 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:32:52.799 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:32:52.799 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:52.799 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:52.799 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:52.799 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:52.799 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:52.799 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:52.799 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:52.799 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:52.799 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:52.799 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:52.799 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:52.799 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:52.799 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:52.799 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:52.799 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:52.799 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:52.799 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:52.799 10:11:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:52.799 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:52.799 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:52.799 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:52.799 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:52.799 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:52.799 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:52.799 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:52.799 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:52.799 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:52.799 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:52.799 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:52.799 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:52.799 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:52.799 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:52.799 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:52.799 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:52.799 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:52.799 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:52.799 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:52.799 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:52.799 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:52.799 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:52.799 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:52.800 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:52.800 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:52.800 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:52.800 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:52.800 Found net 
devices under 0000:af:00.0: cvl_0_0 00:32:52.800 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:52.800 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:52.800 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:52.800 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:52.800 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:52.800 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:52.800 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:52.800 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:52.800 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:52.800 Found net devices under 0000:af:00.1: cvl_0_1 00:32:52.800 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:52.800 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:52.800 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:32:52.800 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:52.800 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:52.800 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:52.800 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:52.800 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:52.800 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:52.800 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:52.800 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:52.800 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:52.800 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:52.800 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:52.800 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:52.800 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:52.800 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:32:52.800 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:52.800 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:52.800 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:52.800 10:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:52.800 10:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:52.800 10:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:52.800 10:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:52.800 10:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:52.800 10:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:52.800 10:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:52.800 10:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:52.800 10:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:52.800 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:52.800 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:32:52.800 00:32:52.800 --- 10.0.0.2 ping statistics --- 00:32:52.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:52.800 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:32:52.800 10:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:52.800 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:52.800 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:32:52.800 00:32:52.800 --- 10.0.0.1 ping statistics --- 00:32:52.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:52.800 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:32:52.800 10:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:52.800 10:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:32:52.800 10:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:52.800 10:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:52.800 10:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:52.800 10:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:52.800 10:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:52.800 10:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:52.800 10:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:52.800 10:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:32:52.800 10:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:52.800 10:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:52.800 10:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:52.800 10:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=323010 00:32:52.800 10:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:32:52.800 10:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 323010 00:32:52.800 10:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 323010 ']' 00:32:52.800 10:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:52.800 10:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:52.800 10:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:52.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
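Note: the "Waiting for process..." line above is printed by waitforlisten from autotest_common.sh, which blocks until the just-launched nvmf_tgt (nvmfpid=323010) answers JSON-RPC on /var/tmp/spdk.sock. A minimal bash sketch of the idea, not the exact helper (the rpc_get_methods probe and the 0.5s retry interval are assumptions; the rpc.py path is shortened):

while ! scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    kill -0 "$nvmfpid" 2>/dev/null || exit 1   # give up if the target already died
    sleep 0.5                                  # otherwise retry until the RPC socket answers
done

Because /var/tmp/spdk.sock is a UNIX-domain socket on the shared filesystem, this probe works from the root namespace even though the target itself runs inside cvl_0_0_ns_spdk.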
00:32:52.800 10:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:52.800 10:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:52.800 [2024-12-11 10:11:02.280424] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:52.800 [2024-12-11 10:11:02.281383] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:32:52.800 [2024-12-11 10:11:02.281422] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:52.800 [2024-12-11 10:11:02.364818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:53.060 [2024-12-11 10:11:02.404849] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:53.060 [2024-12-11 10:11:02.404885] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:53.060 [2024-12-11 10:11:02.404892] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:53.060 [2024-12-11 10:11:02.404898] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:53.060 [2024-12-11 10:11:02.404903] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:53.060 [2024-12-11 10:11:02.406475] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:32:53.060 [2024-12-11 10:11:02.406584] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:32:53.060 [2024-12-11 10:11:02.406689] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:32:53.060 [2024-12-11 10:11:02.406690] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:32:53.060 [2024-12-11 10:11:02.474841] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:53.060 [2024-12-11 10:11:02.476031] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:53.060 [2024-12-11 10:11:02.476043] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:53.060 [2024-12-11 10:11:02.476521] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:53.060 [2024-12-11 10:11:02.476543] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
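Note: at this point the target is up. One port of the E810 pair (cvl_0_0, 10.0.0.2) was moved into the network namespace cvl_0_0_ns_spdk while its sibling cvl_0_1 (10.0.0.1) stayed in the root namespace as the initiator side, and nvmf_tgt was started inside the namespace with --interrupt-mode -m 0xF, which is why the notices above show four reactors starting and every nvmf poll group switching to intr mode. The xtrace that follows provisions the target over JSON-RPC; condensed into plain commands (rpc.py path shortened, namespace additions regrouped, nvme connect host-identity flags omitted), the sequence amounts to this sketch:

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # TCP transport, flags exactly as traced below
scripts/rpc.py bdev_malloc_create 64 512                 # run 7 times, yielding Malloc0..Malloc6
scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
for bdev in Malloc0 Malloc1 raid0 concat0; do            # one namespace per bdev
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
done
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

The initiator side then sees the four namespaces as /dev/nvme0n1..nvme0n4, which is what waitforserial counts and what the fio jobs below are pointed at.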
00:32:53.628 10:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:53.628 10:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:32:53.628 10:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:53.628 10:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:53.628 10:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:53.628 10:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:53.628 10:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:53.886 [2024-12-11 10:11:03.327362] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:53.886 10:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:54.145 10:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:32:54.145 10:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:54.404 10:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:32:54.404 10:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:54.663 10:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:32:54.663 10:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:54.663 10:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:32:54.663 10:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:32:54.921 10:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:55.180 10:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:32:55.180 10:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:55.438 10:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:32:55.438 10:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:55.438 10:11:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:32:55.438 10:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:32:55.696 10:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:55.954 10:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:32:55.954 10:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:56.212 10:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:32:56.212 10:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:56.212 10:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:56.470 [2024-12-11 10:11:05.939282] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:56.470 10:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:32:56.728 10:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:32:56.986 10:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:57.244 10:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:32:57.244 10:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:32:57.244 10:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:57.244 10:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:32:57.244 10:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:32:57.244 10:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:32:59.144 10:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:59.144 10:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:32:59.144 10:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:59.144 10:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:32:59.144 10:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:59.144 10:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:32:59.144 10:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:32:59.144 [global] 00:32:59.144 thread=1 00:32:59.144 invalidate=1 00:32:59.144 rw=write 00:32:59.144 time_based=1 00:32:59.144 runtime=1 00:32:59.144 ioengine=libaio 00:32:59.144 direct=1 00:32:59.144 bs=4096 00:32:59.144 iodepth=1 00:32:59.144 norandommap=0 00:32:59.144 numjobs=1 00:32:59.144 00:32:59.144 verify_dump=1 00:32:59.144 verify_backlog=512 00:32:59.144 verify_state_save=0 00:32:59.144 do_verify=1 00:32:59.144 verify=crc32c-intel 00:32:59.144 [job0] 00:32:59.144 filename=/dev/nvme0n1 00:32:59.144 [job1] 00:32:59.144 filename=/dev/nvme0n2 00:32:59.144 [job2] 00:32:59.144 filename=/dev/nvme0n3 00:32:59.144 [job3] 00:32:59.144 filename=/dev/nvme0n4 00:32:59.427 Could not set queue depth (nvme0n1) 00:32:59.427 Could not set queue depth (nvme0n2) 00:32:59.427 Could not set queue depth (nvme0n3) 00:32:59.427 Could not set queue depth (nvme0n4) 00:32:59.684 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:59.684 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:59.684 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:59.684 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:59.684 fio-3.35 00:32:59.684 Starting 4 threads 00:33:01.053 00:33:01.053 job0: (groupid=0, jobs=1): err= 0: pid=324223: Wed Dec 11 10:11:10 2024 00:33:01.053 read: IOPS=1979, BW=7919KiB/s (8109kB/s)(8196KiB/1035msec) 00:33:01.053 slat (nsec): min=6897, max=30750, avg=10000.15, stdev=1811.54 00:33:01.053 clat (usec): min=171, max=41212, avg=256.76, stdev=905.74 00:33:01.053 lat (usec): min=179, max=41223, avg=266.76, stdev=905.77 00:33:01.053 clat percentiles (usec): 00:33:01.054 | 1.00th=[ 180], 5.00th=[ 188], 10.00th=[ 210], 20.00th=[ 227], 00:33:01.054 | 30.00th=[ 233], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 243], 00:33:01.054 | 70.00th=[ 247], 80.00th=[ 249], 90.00th=[ 253], 95.00th=[ 260], 00:33:01.054 | 99.00th=[ 289], 99.50th=[ 330], 99.90th=[ 383], 99.95th=[ 1205], 00:33:01.054 | 99.99th=[41157] 00:33:01.054 write: IOPS=2473, BW=9894KiB/s (10.1MB/s)(10.0MiB/1035msec); 0 zone resets 00:33:01.054 slat (nsec): min=9617, max=48267, avg=13260.06, stdev=2408.84 00:33:01.054 clat (usec): min=122, max=341, avg=171.33, stdev=21.72 00:33:01.054 lat (usec): min=134, max=373, avg=184.59, stdev=22.67 00:33:01.054 clat percentiles (usec): 00:33:01.054 | 1.00th=[ 128], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 159], 00:33:01.054 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 174], 60.00th=[ 178], 00:33:01.054 | 70.00th=[ 182], 80.00th=[ 186], 90.00th=[ 192], 95.00th=[ 202], 00:33:01.054 | 99.00th=[ 227], 
99.50th=[ 247], 99.90th=[ 306], 99.95th=[ 334], 00:33:01.054 | 99.99th=[ 343] 00:33:01.054 bw ( KiB/s): min= 9700, max=10760, per=47.00%, avg=10230.00, stdev=749.53, samples=2 00:33:01.054 iops : min= 2425, max= 2690, avg=2557.50, stdev=187.38, samples=2 00:33:01.054 lat (usec) : 250=91.78%, 500=8.18% 00:33:01.054 lat (msec) : 2=0.02%, 50=0.02% 00:33:01.054 cpu : usr=3.77%, sys=8.22%, ctx=4609, majf=0, minf=1 00:33:01.054 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:01.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:01.054 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:01.054 issued rwts: total=2049,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:01.054 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:01.054 job1: (groupid=0, jobs=1): err= 0: pid=324224: Wed Dec 11 10:11:10 2024 00:33:01.054 read: IOPS=129, BW=517KiB/s (529kB/s)(532KiB/1029msec) 00:33:01.054 slat (nsec): min=7230, max=27660, avg=10712.68, stdev=5820.98 00:33:01.054 clat (usec): min=221, max=42657, avg=7031.37, stdev=15228.30 00:33:01.054 lat (usec): min=228, max=42679, avg=7042.08, stdev=15232.71 00:33:01.054 clat percentiles (usec): 00:33:01.054 | 1.00th=[ 225], 5.00th=[ 231], 10.00th=[ 235], 20.00th=[ 239], 00:33:01.054 | 30.00th=[ 243], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 265], 00:33:01.054 | 70.00th=[ 281], 80.00th=[ 351], 90.00th=[41157], 95.00th=[41157], 00:33:01.054 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:33:01.054 | 99.99th=[42730] 00:33:01.054 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:33:01.054 slat (nsec): min=9854, max=43760, avg=11052.93, stdev=1862.93 00:33:01.054 clat (usec): min=136, max=254, avg=164.64, stdev=19.07 00:33:01.054 lat (usec): min=146, max=265, avg=175.70, stdev=19.19 00:33:01.054 clat percentiles (usec): 00:33:01.054 | 1.00th=[ 145], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 153], 00:33:01.054 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 163], 00:33:01.054 | 70.00th=[ 172], 80.00th=[ 176], 90.00th=[ 180], 95.00th=[ 188], 00:33:01.054 | 99.00th=[ 249], 99.50th=[ 251], 99.90th=[ 255], 99.95th=[ 255], 00:33:01.054 | 99.99th=[ 255] 00:33:01.054 bw ( KiB/s): min= 4087, max= 4087, per=18.78%, avg=4087.00, stdev= 0.00, samples=1 00:33:01.054 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:33:01.054 lat (usec) : 250=87.75%, 500=8.68% 00:33:01.054 lat (msec) : 4=0.16%, 50=3.41% 00:33:01.054 cpu : usr=0.19%, sys=0.78%, ctx=648, majf=0, minf=1 00:33:01.054 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:01.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:01.054 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:01.054 issued rwts: total=133,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:01.054 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:01.054 job2: (groupid=0, jobs=1): err= 0: pid=324225: Wed Dec 11 10:11:10 2024 00:33:01.054 read: IOPS=1748, BW=6992KiB/s (7160kB/s)(7020KiB/1004msec) 00:33:01.054 slat (nsec): min=6543, max=26475, avg=7504.82, stdev=1208.80 00:33:01.054 clat (usec): min=210, max=41030, avg=334.42, stdev=1834.19 00:33:01.054 lat (usec): min=219, max=41044, avg=341.92, stdev=1834.68 00:33:01.054 clat percentiles (usec): 00:33:01.054 | 1.00th=[ 223], 5.00th=[ 231], 10.00th=[ 235], 20.00th=[ 239], 00:33:01.054 | 30.00th=[ 241], 40.00th=[ 243], 50.00th=[ 245], 
60.00th=[ 249], 00:33:01.054 | 70.00th=[ 251], 80.00th=[ 255], 90.00th=[ 260], 95.00th=[ 265], 00:33:01.054 | 99.00th=[ 293], 99.50th=[ 420], 99.90th=[41157], 99.95th=[41157], 00:33:01.054 | 99.99th=[41157] 00:33:01.054 write: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec); 0 zone resets 00:33:01.054 slat (nsec): min=9403, max=36058, avg=10646.36, stdev=1076.72 00:33:01.054 clat (usec): min=142, max=2513, avg=181.48, stdev=53.53 00:33:01.054 lat (usec): min=152, max=2523, avg=192.12, stdev=53.54 00:33:01.054 clat percentiles (usec): 00:33:01.054 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 165], 20.00th=[ 172], 00:33:01.054 | 30.00th=[ 174], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 182], 00:33:01.054 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 200], 95.00th=[ 210], 00:33:01.054 | 99.00th=[ 227], 99.50th=[ 231], 99.90th=[ 245], 99.95th=[ 258], 00:33:01.054 | 99.99th=[ 2507] 00:33:01.054 bw ( KiB/s): min= 6832, max= 9532, per=37.59%, avg=8182.00, stdev=1909.19, samples=2 00:33:01.054 iops : min= 1708, max= 2383, avg=2045.50, stdev=477.30, samples=2 00:33:01.054 lat (usec) : 250=84.14%, 500=15.70%, 750=0.03% 00:33:01.054 lat (msec) : 4=0.03%, 50=0.11% 00:33:01.054 cpu : usr=1.89%, sys=3.79%, ctx=3803, majf=0, minf=1 00:33:01.054 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:01.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:01.054 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:01.054 issued rwts: total=1755,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:01.054 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:01.054 job3: (groupid=0, jobs=1): err= 0: pid=324226: Wed Dec 11 10:11:10 2024 00:33:01.054 read: IOPS=22, BW=91.1KiB/s (93.3kB/s)(92.0KiB/1010msec) 00:33:01.054 slat (nsec): min=7056, max=24550, avg=14526.43, stdev=6196.13 00:33:01.054 clat (usec): min=398, max=42011, avg=39294.63, stdev=8485.04 00:33:01.054 lat (usec): min=413, max=42023, avg=39309.15, stdev=8484.83 00:33:01.054 clat percentiles (usec): 00:33:01.054 | 1.00th=[ 400], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:33:01.054 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:01.054 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:33:01.054 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:01.054 | 99.99th=[42206] 00:33:01.054 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:33:01.054 slat (nsec): min=9532, max=38476, avg=11120.36, stdev=1992.94 00:33:01.054 clat (usec): min=139, max=363, avg=192.04, stdev=24.27 00:33:01.054 lat (usec): min=151, max=401, avg=203.16, stdev=24.82 00:33:01.054 clat percentiles (usec): 00:33:01.054 | 1.00th=[ 149], 5.00th=[ 157], 10.00th=[ 165], 20.00th=[ 174], 00:33:01.054 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 188], 60.00th=[ 198], 00:33:01.054 | 70.00th=[ 204], 80.00th=[ 212], 90.00th=[ 223], 95.00th=[ 235], 00:33:01.054 | 99.00th=[ 251], 99.50th=[ 253], 99.90th=[ 363], 99.95th=[ 363], 00:33:01.054 | 99.99th=[ 363] 00:33:01.054 bw ( KiB/s): min= 4087, max= 4087, per=18.78%, avg=4087.00, stdev= 0.00, samples=1 00:33:01.054 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:33:01.054 lat (usec) : 250=94.02%, 500=1.87% 00:33:01.054 lat (msec) : 50=4.11% 00:33:01.054 cpu : usr=0.50%, sys=0.30%, ctx=535, majf=0, minf=2 00:33:01.054 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:01.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:01.054 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:01.054 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:01.054 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:01.054 00:33:01.054 Run status group 0 (all jobs): 00:33:01.054 READ: bw=14.9MiB/s (15.7MB/s), 91.1KiB/s-7919KiB/s (93.3kB/s-8109kB/s), io=15.5MiB (16.2MB), run=1004-1035msec 00:33:01.054 WRITE: bw=21.3MiB/s (22.3MB/s), 1990KiB/s-9894KiB/s (2038kB/s-10.1MB/s), io=22.0MiB (23.1MB), run=1004-1035msec 00:33:01.054 00:33:01.054 Disk stats (read/write): 00:33:01.054 nvme0n1: ios=1745/2048, merge=0/0, ticks=419/325, in_queue=744, util=85.07% 00:33:01.054 nvme0n2: ios=75/512, merge=0/0, ticks=1459/76, in_queue=1535, util=96.30% 00:33:01.054 nvme0n3: ios=1685/2048, merge=0/0, ticks=406/365, in_queue=771, util=88.55% 00:33:01.054 nvme0n4: ios=18/512, merge=0/0, ticks=698/95, in_queue=793, util=89.40% 00:33:01.054 10:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:33:01.054 [global] 00:33:01.054 thread=1 00:33:01.054 invalidate=1 00:33:01.054 rw=randwrite 00:33:01.054 time_based=1 00:33:01.054 runtime=1 00:33:01.054 ioengine=libaio 00:33:01.054 direct=1 00:33:01.054 bs=4096 00:33:01.054 iodepth=1 00:33:01.054 norandommap=0 00:33:01.054 numjobs=1 00:33:01.054 00:33:01.054 verify_dump=1 00:33:01.054 verify_backlog=512 00:33:01.054 verify_state_save=0 00:33:01.054 do_verify=1 00:33:01.054 verify=crc32c-intel 00:33:01.054 [job0] 00:33:01.054 filename=/dev/nvme0n1 00:33:01.054 [job1] 00:33:01.054 filename=/dev/nvme0n2 00:33:01.054 [job2] 00:33:01.054 filename=/dev/nvme0n3 00:33:01.054 [job3] 00:33:01.054 filename=/dev/nvme0n4 00:33:01.054 Could not set queue depth (nvme0n1) 00:33:01.054 Could not set queue depth (nvme0n2) 00:33:01.054 Could not set queue depth (nvme0n3) 00:33:01.054 Could not set queue depth (nvme0n4) 00:33:01.311 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:01.311 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:01.311 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:01.311 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:01.311 fio-3.35 00:33:01.311 Starting 4 threads 00:33:02.680 00:33:02.680 job0: (groupid=0, jobs=1): err= 0: pid=324597: Wed Dec 11 10:11:11 2024 00:33:02.680 read: IOPS=22, BW=91.8KiB/s (94.0kB/s)(92.0KiB/1002msec) 00:33:02.680 slat (nsec): min=10110, max=32791, avg=22694.04, stdev=4415.27 00:33:02.680 clat (usec): min=304, max=41336, avg=39202.93, stdev=8480.36 00:33:02.680 lat (usec): min=327, max=41346, avg=39225.62, stdev=8480.30 00:33:02.680 clat percentiles (usec): 00:33:02.680 | 1.00th=[ 306], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:33:02.680 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:02.680 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:33:02.680 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:33:02.680 | 99.99th=[41157] 00:33:02.680 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:33:02.680 slat (nsec): min=10431, max=39661, avg=12485.72, stdev=2664.97 
00:33:02.680 clat (usec): min=146, max=342, avg=178.03, stdev=22.78 00:33:02.680 lat (usec): min=157, max=372, avg=190.52, stdev=23.85 00:33:02.680 clat percentiles (usec): 00:33:02.680 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 163], 00:33:02.680 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 178], 00:33:02.680 | 70.00th=[ 182], 80.00th=[ 188], 90.00th=[ 196], 95.00th=[ 223], 00:33:02.680 | 99.00th=[ 269], 99.50th=[ 330], 99.90th=[ 343], 99.95th=[ 343], 00:33:02.680 | 99.99th=[ 343] 00:33:02.680 bw ( KiB/s): min= 4096, max= 4096, per=22.51%, avg=4096.00, stdev= 0.00, samples=1 00:33:02.680 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:02.680 lat (usec) : 250=94.58%, 500=1.31% 00:33:02.680 lat (msec) : 50=4.11% 00:33:02.680 cpu : usr=0.90%, sys=0.60%, ctx=537, majf=0, minf=1 00:33:02.680 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:02.680 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.680 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.680 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:02.680 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:02.680 job1: (groupid=0, jobs=1): err= 0: pid=324604: Wed Dec 11 10:11:11 2024 00:33:02.680 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:33:02.680 slat (nsec): min=2303, max=38548, avg=6879.82, stdev=2726.02 00:33:02.680 clat (usec): min=178, max=804, avg=226.90, stdev=26.66 00:33:02.680 lat (usec): min=182, max=812, avg=233.78, stdev=27.78 00:33:02.680 clat percentiles (usec): 00:33:02.680 | 1.00th=[ 190], 5.00th=[ 196], 10.00th=[ 198], 20.00th=[ 202], 00:33:02.680 | 30.00th=[ 208], 40.00th=[ 215], 50.00th=[ 233], 60.00th=[ 243], 00:33:02.680 | 70.00th=[ 245], 80.00th=[ 247], 90.00th=[ 251], 95.00th=[ 253], 00:33:02.680 | 99.00th=[ 260], 99.50th=[ 273], 99.90th=[ 490], 99.95th=[ 510], 00:33:02.680 | 99.99th=[ 807] 00:33:02.680 write: IOPS=2621, BW=10.2MiB/s (10.7MB/s)(10.2MiB/1001msec); 0 zone resets 00:33:02.680 slat (nsec): min=3300, max=41583, avg=8783.55, stdev=3922.18 00:33:02.680 clat (usec): min=113, max=329, avg=139.56, stdev=15.73 00:33:02.680 lat (usec): min=117, max=371, avg=148.34, stdev=16.35 00:33:02.680 clat percentiles (usec): 00:33:02.680 | 1.00th=[ 119], 5.00th=[ 125], 10.00th=[ 128], 20.00th=[ 131], 00:33:02.680 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 139], 00:33:02.680 | 70.00th=[ 141], 80.00th=[ 147], 90.00th=[ 155], 95.00th=[ 169], 00:33:02.680 | 99.00th=[ 198], 99.50th=[ 243], 99.90th=[ 249], 99.95th=[ 253], 00:33:02.680 | 99.99th=[ 330] 00:33:02.680 bw ( KiB/s): min=12288, max=12288, per=67.53%, avg=12288.00, stdev= 0.00, samples=1 00:33:02.680 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:33:02.680 lat (usec) : 250=94.25%, 500=5.71%, 750=0.02%, 1000=0.02% 00:33:02.680 cpu : usr=2.70%, sys=7.20%, ctx=5186, majf=0, minf=1 00:33:02.680 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:02.680 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.680 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.680 issued rwts: total=2560,2624,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:02.680 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:02.680 job2: (groupid=0, jobs=1): err= 0: pid=324613: Wed Dec 11 10:11:11 2024 00:33:02.680 read: IOPS=515, BW=2060KiB/s (2110kB/s)(2116KiB/1027msec) 
00:33:02.680 slat (nsec): min=7684, max=24363, avg=9112.19, stdev=2780.58 00:33:02.680 clat (usec): min=216, max=41074, avg=1554.39, stdev=7189.62 00:33:02.680 lat (usec): min=225, max=41098, avg=1563.50, stdev=7192.18 00:33:02.680 clat percentiles (usec): 00:33:02.680 | 1.00th=[ 229], 5.00th=[ 237], 10.00th=[ 239], 20.00th=[ 241], 00:33:02.680 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 245], 60.00th=[ 247], 00:33:02.680 | 70.00th=[ 249], 80.00th=[ 251], 90.00th=[ 255], 95.00th=[ 260], 00:33:02.680 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:33:02.680 | 99.99th=[41157] 00:33:02.680 write: IOPS=997, BW=3988KiB/s (4084kB/s)(4096KiB/1027msec); 0 zone resets 00:33:02.680 slat (nsec): min=9544, max=47281, avg=11846.34, stdev=2356.44 00:33:02.680 clat (usec): min=138, max=1895, avg=178.60, stdev=80.77 00:33:02.680 lat (usec): min=148, max=1909, avg=190.45, stdev=81.22 00:33:02.680 clat percentiles (usec): 00:33:02.680 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 157], 00:33:02.680 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 174], 00:33:02.680 | 70.00th=[ 180], 80.00th=[ 186], 90.00th=[ 200], 95.00th=[ 225], 00:33:02.680 | 99.00th=[ 262], 99.50th=[ 515], 99.90th=[ 1762], 99.95th=[ 1893], 00:33:02.680 | 99.99th=[ 1893] 00:33:02.680 bw ( KiB/s): min= 8192, max= 8192, per=45.02%, avg=8192.00, stdev= 0.00, samples=1 00:33:02.680 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:33:02.680 lat (usec) : 250=90.73%, 500=7.79%, 750=0.26% 00:33:02.680 lat (msec) : 2=0.13%, 50=1.09% 00:33:02.680 cpu : usr=0.78%, sys=2.73%, ctx=1554, majf=0, minf=1 00:33:02.680 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:02.680 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.680 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.680 issued rwts: total=529,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:02.680 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:02.680 job3: (groupid=0, jobs=1): err= 0: pid=324619: Wed Dec 11 10:11:11 2024 00:33:02.680 read: IOPS=21, BW=87.4KiB/s (89.5kB/s)(88.0KiB/1007msec) 00:33:02.680 slat (nsec): min=9995, max=25161, avg=22730.55, stdev=2918.35 00:33:02.680 clat (usec): min=40862, max=41089, avg=40966.68, stdev=46.89 00:33:02.680 lat (usec): min=40887, max=41114, avg=40989.41, stdev=47.76 00:33:02.680 clat percentiles (usec): 00:33:02.680 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:33:02.680 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:02.680 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:33:02.680 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:33:02.680 | 99.99th=[41157] 00:33:02.680 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:33:02.680 slat (nsec): min=10766, max=38754, avg=12280.43, stdev=2351.88 00:33:02.680 clat (usec): min=157, max=324, avg=188.81, stdev=14.36 00:33:02.680 lat (usec): min=169, max=363, avg=201.09, stdev=15.14 00:33:02.680 clat percentiles (usec): 00:33:02.680 | 1.00th=[ 163], 5.00th=[ 172], 10.00th=[ 174], 20.00th=[ 178], 00:33:02.680 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 192], 00:33:02.680 | 70.00th=[ 196], 80.00th=[ 200], 90.00th=[ 206], 95.00th=[ 212], 00:33:02.680 | 99.00th=[ 225], 99.50th=[ 233], 99.90th=[ 326], 99.95th=[ 326], 00:33:02.680 | 99.99th=[ 326] 00:33:02.680 bw ( KiB/s): min= 4096, max= 4096, per=22.51%, 
avg=4096.00, stdev= 0.00, samples=1 00:33:02.680 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:02.680 lat (usec) : 250=95.69%, 500=0.19% 00:33:02.680 lat (msec) : 50=4.12% 00:33:02.680 cpu : usr=0.40%, sys=0.99%, ctx=536, majf=0, minf=1 00:33:02.680 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:02.680 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.680 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.680 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:02.680 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:02.680 00:33:02.680 Run status group 0 (all jobs): 00:33:02.680 READ: bw=11.9MiB/s (12.5MB/s), 87.4KiB/s-9.99MiB/s (89.5kB/s-10.5MB/s), io=12.2MiB (12.8MB), run=1001-1027msec 00:33:02.680 WRITE: bw=17.8MiB/s (18.6MB/s), 2034KiB/s-10.2MiB/s (2083kB/s-10.7MB/s), io=18.2MiB (19.1MB), run=1001-1027msec 00:33:02.680 00:33:02.680 Disk stats (read/write): 00:33:02.681 nvme0n1: ios=70/512, merge=0/0, ticks=1566/86, in_queue=1652, util=93.19% 00:33:02.681 nvme0n2: ios=2087/2345, merge=0/0, ticks=627/311, in_queue=938, util=96.44% 00:33:02.681 nvme0n3: ios=559/1024, merge=0/0, ticks=939/175, in_queue=1114, util=99.58% 00:33:02.681 nvme0n4: ios=75/512, merge=0/0, ticks=1711/89, in_queue=1800, util=97.14% 00:33:02.681 10:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:33:02.681 [global] 00:33:02.681 thread=1 00:33:02.681 invalidate=1 00:33:02.681 rw=write 00:33:02.681 time_based=1 00:33:02.681 runtime=1 00:33:02.681 ioengine=libaio 00:33:02.681 direct=1 00:33:02.681 bs=4096 00:33:02.681 iodepth=128 00:33:02.681 norandommap=0 00:33:02.681 numjobs=1 00:33:02.681 00:33:02.681 verify_dump=1 00:33:02.681 verify_backlog=512 00:33:02.681 verify_state_save=0 00:33:02.681 do_verify=1 00:33:02.681 verify=crc32c-intel 00:33:02.681 [job0] 00:33:02.681 filename=/dev/nvme0n1 00:33:02.681 [job1] 00:33:02.681 filename=/dev/nvme0n2 00:33:02.681 [job2] 00:33:02.681 filename=/dev/nvme0n3 00:33:02.681 [job3] 00:33:02.681 filename=/dev/nvme0n4 00:33:02.681 Could not set queue depth (nvme0n1) 00:33:02.681 Could not set queue depth (nvme0n2) 00:33:02.681 Could not set queue depth (nvme0n3) 00:33:02.681 Could not set queue depth (nvme0n4) 00:33:02.681 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:02.681 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:02.681 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:02.681 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:02.681 fio-3.35 00:33:02.681 Starting 4 threads 00:33:04.050 00:33:04.050 job0: (groupid=0, jobs=1): err= 0: pid=325025: Wed Dec 11 10:11:13 2024 00:33:04.050 read: IOPS=6387, BW=25.0MiB/s (26.2MB/s)(25.1MiB/1004msec) 00:33:04.050 slat (nsec): min=1283, max=9029.6k, avg=66938.31, stdev=564317.13 00:33:04.050 clat (usec): min=2541, max=19103, avg=8879.78, stdev=2674.31 00:33:04.050 lat (usec): min=2549, max=24737, avg=8946.72, stdev=2722.54 00:33:04.050 clat percentiles (usec): 00:33:04.050 | 1.00th=[ 4359], 5.00th=[ 5080], 10.00th=[ 6587], 20.00th=[ 7046], 00:33:04.050 | 30.00th=[ 7308], 40.00th=[ 7635], 
50.00th=[ 8029], 60.00th=[ 8848], 00:33:04.050 | 70.00th=[ 9896], 80.00th=[10683], 90.00th=[12911], 95.00th=[14484], 00:33:04.050 | 99.00th=[16909], 99.50th=[17695], 99.90th=[18482], 99.95th=[18744], 00:33:04.050 | 99.99th=[19006] 00:33:04.050 write: IOPS=7139, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1004msec); 0 zone resets 00:33:04.050 slat (usec): min=2, max=11601, avg=68.57, stdev=500.66 00:33:04.050 clat (usec): min=895, max=89647, avg=9830.45, stdev=10378.33 00:33:04.050 lat (usec): min=906, max=89658, avg=9899.02, stdev=10451.97 00:33:04.050 clat percentiles (usec): 00:33:04.050 | 1.00th=[ 2999], 5.00th=[ 4490], 10.00th=[ 4883], 20.00th=[ 6456], 00:33:04.050 | 30.00th=[ 7111], 40.00th=[ 7373], 50.00th=[ 7701], 60.00th=[ 8094], 00:33:04.050 | 70.00th=[ 9372], 80.00th=[10028], 90.00th=[10552], 95.00th=[18744], 00:33:04.050 | 99.00th=[68682], 99.50th=[78119], 99.90th=[86508], 99.95th=[89654], 00:33:04.050 | 99.99th=[89654] 00:33:04.050 bw ( KiB/s): min=28672, max=28672, per=42.37%, avg=28672.00, stdev= 0.00, samples=2 00:33:04.050 iops : min= 7168, max= 7168, avg=7168.00, stdev= 0.00, samples=2 00:33:04.050 lat (usec) : 1000=0.03% 00:33:04.050 lat (msec) : 2=0.06%, 4=1.46%, 10=73.68%, 20=22.38%, 50=1.06% 00:33:04.050 lat (msec) : 100=1.33% 00:33:04.050 cpu : usr=5.38%, sys=7.28%, ctx=512, majf=0, minf=1 00:33:04.050 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:33:04.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.050 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:04.050 issued rwts: total=6413,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:04.050 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:04.050 job1: (groupid=0, jobs=1): err= 0: pid=325038: Wed Dec 11 10:11:13 2024 00:33:04.050 read: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec) 00:33:04.050 slat (nsec): min=1326, max=13343k, avg=108113.36, stdev=704857.44 00:33:04.050 clat (usec): min=5599, max=54214, avg=14296.57, stdev=6706.01 00:33:04.050 lat (usec): min=5610, max=54240, avg=14404.69, stdev=6780.64 00:33:04.050 clat percentiles (usec): 00:33:04.050 | 1.00th=[ 7767], 5.00th=[ 9110], 10.00th=[ 9634], 20.00th=[10421], 00:33:04.050 | 30.00th=[10814], 40.00th=[11207], 50.00th=[11994], 60.00th=[12780], 00:33:04.050 | 70.00th=[13566], 80.00th=[16057], 90.00th=[21103], 95.00th=[30540], 00:33:04.050 | 99.00th=[40633], 99.50th=[44827], 99.90th=[44827], 99.95th=[45876], 00:33:04.050 | 99.99th=[54264] 00:33:04.050 write: IOPS=4247, BW=16.6MiB/s (17.4MB/s)(16.7MiB/1009msec); 0 zone resets 00:33:04.050 slat (usec): min=2, max=12878, avg=124.05, stdev=760.72 00:33:04.050 clat (usec): min=5470, max=57924, avg=15829.71, stdev=9990.44 00:33:04.050 lat (usec): min=5475, max=57938, avg=15953.76, stdev=10064.96 00:33:04.050 clat percentiles (usec): 00:33:04.051 | 1.00th=[ 7701], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10159], 00:33:04.051 | 30.00th=[10290], 40.00th=[11207], 50.00th=[12125], 60.00th=[12780], 00:33:04.051 | 70.00th=[14091], 80.00th=[20055], 90.00th=[26608], 95.00th=[41681], 00:33:04.051 | 99.00th=[57410], 99.50th=[57410], 99.90th=[57934], 99.95th=[57934], 00:33:04.051 | 99.99th=[57934] 00:33:04.051 bw ( KiB/s): min=12424, max=20840, per=24.58%, avg=16632.00, stdev=5951.01, samples=2 00:33:04.051 iops : min= 3106, max= 5210, avg=4158.00, stdev=1487.75, samples=2 00:33:04.051 lat (msec) : 10=15.97%, 20=68.59%, 50=14.17%, 100=1.26% 00:33:04.051 cpu : usr=3.87%, sys=5.06%, ctx=366, majf=0, minf=1 00:33:04.051 IO depths 
: 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:33:04.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.051 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:04.051 issued rwts: total=4096,4286,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:04.051 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:04.051 job2: (groupid=0, jobs=1): err= 0: pid=325054: Wed Dec 11 10:11:13 2024 00:33:04.051 read: IOPS=2646, BW=10.3MiB/s (10.8MB/s)(10.4MiB/1009msec) 00:33:04.051 slat (nsec): min=1305, max=12860k, avg=147537.68, stdev=909577.77 00:33:04.051 clat (usec): min=5846, max=67408, avg=15788.53, stdev=9156.16 00:33:04.051 lat (usec): min=5853, max=67412, avg=15936.07, stdev=9259.29 00:33:04.051 clat percentiles (usec): 00:33:04.051 | 1.00th=[ 6128], 5.00th=[ 8848], 10.00th=[ 9896], 20.00th=[10945], 00:33:04.051 | 30.00th=[12387], 40.00th=[13042], 50.00th=[13435], 60.00th=[13829], 00:33:04.051 | 70.00th=[14353], 80.00th=[17171], 90.00th=[23987], 95.00th=[34866], 00:33:04.051 | 99.00th=[59507], 99.50th=[60556], 99.90th=[67634], 99.95th=[67634], 00:33:04.051 | 99.99th=[67634] 00:33:04.051 write: IOPS=3044, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1009msec); 0 zone resets 00:33:04.051 slat (usec): min=3, max=10848, avg=188.18, stdev=826.34 00:33:04.051 clat (usec): min=1669, max=73426, avg=27891.88, stdev=17797.31 00:33:04.051 lat (usec): min=1678, max=73440, avg=28080.05, stdev=17915.62 00:33:04.051 clat percentiles (usec): 00:33:04.051 | 1.00th=[ 4424], 5.00th=[ 7832], 10.00th=[ 8586], 20.00th=[10290], 00:33:04.051 | 30.00th=[11731], 40.00th=[15533], 50.00th=[23200], 60.00th=[31851], 00:33:04.051 | 70.00th=[41681], 80.00th=[49546], 90.00th=[52167], 95.00th=[55837], 00:33:04.051 | 99.00th=[65799], 99.50th=[67634], 99.90th=[72877], 99.95th=[73925], 00:33:04.051 | 99.99th=[73925] 00:33:04.051 bw ( KiB/s): min= 9408, max=15024, per=18.05%, avg=12216.00, stdev=3971.11, samples=2 00:33:04.051 iops : min= 2352, max= 3756, avg=3054.00, stdev=992.78, samples=2 00:33:04.051 lat (msec) : 2=0.09%, 4=0.31%, 10=15.01%, 20=48.08%, 50=26.16% 00:33:04.051 lat (msec) : 100=10.34% 00:33:04.051 cpu : usr=2.88%, sys=3.57%, ctx=320, majf=0, minf=1 00:33:04.051 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:33:04.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.051 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:04.051 issued rwts: total=2670,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:04.051 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:04.051 job3: (groupid=0, jobs=1): err= 0: pid=325061: Wed Dec 11 10:11:13 2024 00:33:04.051 read: IOPS=2339, BW=9358KiB/s (9583kB/s)(9452KiB/1010msec) 00:33:04.051 slat (nsec): min=1525, max=12755k, avg=134514.00, stdev=749982.64 00:33:04.051 clat (usec): min=6622, max=45924, avg=15429.97, stdev=5774.18 00:33:04.051 lat (usec): min=7681, max=46207, avg=15564.49, stdev=5846.38 00:33:04.051 clat percentiles (usec): 00:33:04.051 | 1.00th=[ 8094], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[11207], 00:33:04.051 | 30.00th=[13435], 40.00th=[13829], 50.00th=[14091], 60.00th=[14484], 00:33:04.051 | 70.00th=[15008], 80.00th=[18744], 90.00th=[20841], 95.00th=[30278], 00:33:04.051 | 99.00th=[36963], 99.50th=[40109], 99.90th=[45876], 99.95th=[45876], 00:33:04.051 | 99.99th=[45876] 00:33:04.051 write: IOPS=2534, BW=9.90MiB/s (10.4MB/s)(10.0MiB/1010msec); 0 zone resets 00:33:04.051 slat (usec): min=2, max=59084, 
avg=259.94, stdev=1552.01 00:33:04.051 clat (usec): min=1563, max=108734, avg=35688.02, stdev=21358.87 00:33:04.051 lat (usec): min=1577, max=108746, avg=35947.96, stdev=21460.13 00:33:04.051 clat percentiles (msec): 00:33:04.051 | 1.00th=[ 3], 5.00th=[ 8], 10.00th=[ 10], 20.00th=[ 13], 00:33:04.051 | 30.00th=[ 23], 40.00th=[ 26], 50.00th=[ 34], 60.00th=[ 42], 00:33:04.051 | 70.00th=[ 50], 80.00th=[ 52], 90.00th=[ 63], 95.00th=[ 74], 00:33:04.051 | 99.00th=[ 99], 99.50th=[ 106], 99.90th=[ 109], 99.95th=[ 109], 00:33:04.051 | 99.99th=[ 109] 00:33:04.051 bw ( KiB/s): min= 8192, max=12288, per=15.13%, avg=10240.00, stdev=2896.31, samples=2 00:33:04.051 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:33:04.051 lat (msec) : 2=0.39%, 4=0.49%, 10=9.30%, 20=44.22%, 50=31.67% 00:33:04.051 lat (msec) : 100=13.49%, 250=0.45% 00:33:04.051 cpu : usr=1.88%, sys=3.77%, ctx=341, majf=0, minf=1 00:33:04.051 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:33:04.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.051 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:04.051 issued rwts: total=2363,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:04.051 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:04.051 00:33:04.051 Run status group 0 (all jobs): 00:33:04.051 READ: bw=60.1MiB/s (63.0MB/s), 9358KiB/s-25.0MiB/s (9583kB/s-26.2MB/s), io=60.7MiB (63.7MB), run=1004-1010msec 00:33:04.051 WRITE: bw=66.1MiB/s (69.3MB/s), 9.90MiB/s-27.9MiB/s (10.4MB/s-29.2MB/s), io=66.7MiB (70.0MB), run=1004-1010msec 00:33:04.051 00:33:04.051 Disk stats (read/write): 00:33:04.051 nvme0n1: ios=5170/5911, merge=0/0, ticks=44053/51108, in_queue=95161, util=86.07% 00:33:04.051 nvme0n2: ios=3436/3584, merge=0/0, ticks=21118/25453, in_queue=46571, util=99.59% 00:33:04.051 nvme0n3: ios=2581/2719, merge=0/0, ticks=37711/65577, in_queue=103288, util=97.39% 00:33:04.051 nvme0n4: ios=2105/2303, merge=0/0, ticks=13494/36422, in_queue=49916, util=97.47% 00:33:04.051 10:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:33:04.051 [global] 00:33:04.051 thread=1 00:33:04.051 invalidate=1 00:33:04.051 rw=randwrite 00:33:04.051 time_based=1 00:33:04.051 runtime=1 00:33:04.051 ioengine=libaio 00:33:04.051 direct=1 00:33:04.051 bs=4096 00:33:04.051 iodepth=128 00:33:04.051 norandommap=0 00:33:04.051 numjobs=1 00:33:04.051 00:33:04.051 verify_dump=1 00:33:04.051 verify_backlog=512 00:33:04.051 verify_state_save=0 00:33:04.051 do_verify=1 00:33:04.051 verify=crc32c-intel 00:33:04.051 [job0] 00:33:04.051 filename=/dev/nvme0n1 00:33:04.051 [job1] 00:33:04.051 filename=/dev/nvme0n2 00:33:04.051 [job2] 00:33:04.051 filename=/dev/nvme0n3 00:33:04.051 [job3] 00:33:04.051 filename=/dev/nvme0n4 00:33:04.051 Could not set queue depth (nvme0n1) 00:33:04.051 Could not set queue depth (nvme0n2) 00:33:04.051 Could not set queue depth (nvme0n3) 00:33:04.051 Could not set queue depth (nvme0n4) 00:33:04.308 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:04.308 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:04.308 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:04.308 job3: (g=0): rw=randwrite, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:04.308 fio-3.35 00:33:04.308 Starting 4 threads 00:33:05.679 00:33:05.679 job0: (groupid=0, jobs=1): err= 0: pid=325451: Wed Dec 11 10:11:15 2024 00:33:05.679 read: IOPS=5047, BW=19.7MiB/s (20.7MB/s)(20.0MiB/1012msec) 00:33:05.679 slat (nsec): min=1187, max=12490k, avg=86792.22, stdev=739416.77 00:33:05.679 clat (usec): min=997, max=34127, avg=12772.31, stdev=4883.52 00:33:05.679 lat (usec): min=1007, max=34137, avg=12859.10, stdev=4937.49 00:33:05.679 clat percentiles (usec): 00:33:05.679 | 1.00th=[ 2409], 5.00th=[ 5211], 10.00th=[ 7177], 20.00th=[ 9896], 00:33:05.679 | 30.00th=[10683], 40.00th=[11469], 50.00th=[12387], 60.00th=[13042], 00:33:05.679 | 70.00th=[13698], 80.00th=[14877], 90.00th=[19006], 95.00th=[23725], 00:33:05.679 | 99.00th=[27657], 99.50th=[30540], 99.90th=[32375], 99.95th=[32375], 00:33:05.679 | 99.99th=[34341] 00:33:05.679 write: IOPS=5059, BW=19.8MiB/s (20.7MB/s)(20.0MiB/1012msec); 0 zone resets 00:33:05.679 slat (nsec): min=1898, max=14031k, avg=82350.33, stdev=636890.02 00:33:05.679 clat (usec): min=625, max=43980, avg=12353.81, stdev=6859.41 00:33:05.679 lat (usec): min=736, max=43985, avg=12436.16, stdev=6906.67 00:33:05.679 clat percentiles (usec): 00:33:05.679 | 1.00th=[ 2606], 5.00th=[ 4228], 10.00th=[ 6587], 20.00th=[ 7832], 00:33:05.679 | 30.00th=[ 9372], 40.00th=[10159], 50.00th=[10814], 60.00th=[11469], 00:33:05.679 | 70.00th=[12256], 80.00th=[14091], 90.00th=[21627], 95.00th=[26084], 00:33:05.679 | 99.00th=[40109], 99.50th=[41157], 99.90th=[43779], 99.95th=[43779], 00:33:05.679 | 99.99th=[43779] 00:33:05.679 bw ( KiB/s): min=17680, max=23280, per=30.09%, avg=20480.00, stdev=3959.80, samples=2 00:33:05.679 iops : min= 4420, max= 5820, avg=5120.00, stdev=989.95, samples=2 00:33:05.679 lat (usec) : 750=0.01%, 1000=0.01% 00:33:05.679 lat (msec) : 2=0.37%, 4=3.94%, 10=25.19%, 20=59.87%, 50=10.62% 00:33:05.679 cpu : usr=3.36%, sys=5.74%, ctx=361, majf=0, minf=1 00:33:05.679 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:33:05.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.679 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:05.679 issued rwts: total=5108,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:05.679 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:05.679 job1: (groupid=0, jobs=1): err= 0: pid=325467: Wed Dec 11 10:11:15 2024 00:33:05.679 read: IOPS=3705, BW=14.5MiB/s (15.2MB/s)(15.2MiB/1052msec) 00:33:05.679 slat (nsec): min=1172, max=20526k, avg=120235.29, stdev=861296.41 00:33:05.679 clat (usec): min=6694, max=87534, avg=17133.90, stdev=13941.37 00:33:05.679 lat (usec): min=6701, max=87537, avg=17254.13, stdev=14023.01 00:33:05.679 clat percentiles (usec): 00:33:05.679 | 1.00th=[ 6915], 5.00th=[ 7570], 10.00th=[ 8586], 20.00th=[ 9896], 00:33:05.679 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10814], 60.00th=[11994], 00:33:05.679 | 70.00th=[13435], 80.00th=[22676], 90.00th=[36439], 95.00th=[48497], 00:33:05.679 | 99.00th=[74974], 99.50th=[74974], 99.90th=[87557], 99.95th=[87557], 00:33:05.679 | 99.99th=[87557] 00:33:05.679 write: IOPS=3893, BW=15.2MiB/s (15.9MB/s)(16.0MiB/1052msec); 0 zone resets 00:33:05.679 slat (nsec): min=1920, max=21178k, avg=127137.92, stdev=960595.88 00:33:05.679 clat (usec): min=4514, max=62155, avg=16196.75, stdev=11951.42 00:33:05.679 lat (usec): min=4522, max=62165, avg=16323.88, stdev=12046.42 00:33:05.679 clat percentiles 
(usec): 00:33:05.679 | 1.00th=[ 5080], 5.00th=[ 7046], 10.00th=[ 9241], 20.00th=[ 9896], 00:33:05.679 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10683], 60.00th=[11469], 00:33:05.679 | 70.00th=[12518], 80.00th=[20841], 90.00th=[37487], 95.00th=[46400], 00:33:05.679 | 99.00th=[52167], 99.50th=[52167], 99.90th=[57934], 99.95th=[58983], 00:33:05.679 | 99.99th=[62129] 00:33:05.679 bw ( KiB/s): min= 9304, max=23464, per=24.07%, avg=16384.00, stdev=10012.63, samples=2 00:33:05.679 iops : min= 2326, max= 5866, avg=4096.00, stdev=2503.16, samples=2 00:33:05.679 lat (msec) : 10=22.33%, 20=56.97%, 50=17.34%, 100=3.37% 00:33:05.679 cpu : usr=2.57%, sys=3.33%, ctx=418, majf=0, minf=1 00:33:05.679 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:33:05.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.679 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:05.679 issued rwts: total=3898,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:05.679 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:05.679 job2: (groupid=0, jobs=1): err= 0: pid=325485: Wed Dec 11 10:11:15 2024 00:33:05.679 read: IOPS=5170, BW=20.2MiB/s (21.2MB/s)(20.4MiB/1010msec) 00:33:05.679 slat (nsec): min=1394, max=15540k, avg=95873.36, stdev=801806.80 00:33:05.679 clat (usec): min=4287, max=42050, avg=12543.83, stdev=5207.16 00:33:05.679 lat (usec): min=4293, max=42056, avg=12639.70, stdev=5270.26 00:33:05.679 clat percentiles (usec): 00:33:05.679 | 1.00th=[ 6718], 5.00th=[ 7767], 10.00th=[ 8160], 20.00th=[ 9110], 00:33:05.679 | 30.00th=[10028], 40.00th=[10421], 50.00th=[10945], 60.00th=[11600], 00:33:05.679 | 70.00th=[13042], 80.00th=[14746], 90.00th=[19006], 95.00th=[21627], 00:33:05.679 | 99.00th=[35390], 99.50th=[38536], 99.90th=[41157], 99.95th=[42206], 00:33:05.679 | 99.99th=[42206] 00:33:05.679 write: IOPS=5576, BW=21.8MiB/s (22.8MB/s)(22.0MiB/1010msec); 0 zone resets 00:33:05.679 slat (usec): min=2, max=15422, avg=82.75, stdev=633.95 00:33:05.679 clat (usec): min=1410, max=42035, avg=11110.22, stdev=3415.01 00:33:05.679 lat (usec): min=1422, max=42039, avg=11192.97, stdev=3452.11 00:33:05.679 clat percentiles (usec): 00:33:05.679 | 1.00th=[ 4686], 5.00th=[ 6783], 10.00th=[ 7439], 20.00th=[ 8717], 00:33:05.679 | 30.00th=[ 9503], 40.00th=[10552], 50.00th=[10945], 60.00th=[11338], 00:33:05.679 | 70.00th=[11731], 80.00th=[12649], 90.00th=[15139], 95.00th=[15795], 00:33:05.679 | 99.00th=[24249], 99.50th=[26346], 99.90th=[28967], 99.95th=[42206], 00:33:05.679 | 99.99th=[42206] 00:33:05.679 bw ( KiB/s): min=20480, max=24376, per=32.95%, avg=22428.00, stdev=2754.89, samples=2 00:33:05.679 iops : min= 5120, max= 6094, avg=5607.00, stdev=688.72, samples=2 00:33:05.679 lat (msec) : 2=0.07%, 4=0.25%, 10=31.84%, 20=62.54%, 50=5.30% 00:33:05.679 cpu : usr=4.56%, sys=6.64%, ctx=407, majf=0, minf=1 00:33:05.679 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:33:05.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.679 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:05.679 issued rwts: total=5222,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:05.679 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:05.679 job3: (groupid=0, jobs=1): err= 0: pid=325491: Wed Dec 11 10:11:15 2024 00:33:05.679 read: IOPS=2513, BW=9.82MiB/s (10.3MB/s)(10.3MiB/1053msec) 00:33:05.679 slat (nsec): min=1393, max=23400k, avg=176417.83, stdev=1334686.65 00:33:05.679 
clat (usec): min=4710, max=65874, avg=22454.05, stdev=12994.44 00:33:05.679 lat (usec): min=4717, max=65883, avg=22630.47, stdev=13083.03 00:33:05.679 clat percentiles (usec): 00:33:05.679 | 1.00th=[ 4752], 5.00th=[ 8848], 10.00th=[10683], 20.00th=[11600], 00:33:05.679 | 30.00th=[13829], 40.00th=[14746], 50.00th=[17695], 60.00th=[22414], 00:33:05.679 | 70.00th=[26870], 80.00th=[32375], 90.00th=[40109], 95.00th=[47973], 00:33:05.679 | 99.00th=[65799], 99.50th=[65799], 99.90th=[65799], 99.95th=[65799], 00:33:05.679 | 99.99th=[65799] 00:33:05.679 write: IOPS=2917, BW=11.4MiB/s (11.9MB/s)(12.0MiB/1053msec); 0 zone resets 00:33:05.679 slat (usec): min=2, max=21331, avg=163.63, stdev=1171.20 00:33:05.679 clat (usec): min=881, max=80003, avg=23513.90, stdev=13766.83 00:33:05.679 lat (usec): min=910, max=80012, avg=23677.53, stdev=13864.52 00:33:05.679 clat percentiles (usec): 00:33:05.679 | 1.00th=[ 2999], 5.00th=[ 8717], 10.00th=[11863], 20.00th=[14091], 00:33:05.680 | 30.00th=[15664], 40.00th=[17433], 50.00th=[19268], 60.00th=[21627], 00:33:05.680 | 70.00th=[27657], 80.00th=[30278], 90.00th=[41681], 95.00th=[55313], 00:33:05.680 | 99.00th=[68682], 99.50th=[80217], 99.90th=[80217], 99.95th=[80217], 00:33:05.680 | 99.99th=[80217] 00:33:05.680 bw ( KiB/s): min=11960, max=12288, per=17.81%, avg=12124.00, stdev=231.93, samples=2 00:33:05.680 iops : min= 2990, max= 3072, avg=3031.00, stdev=57.98, samples=2 00:33:05.680 lat (usec) : 1000=0.10% 00:33:05.680 lat (msec) : 2=0.28%, 4=0.42%, 10=6.29%, 20=45.60%, 50=41.79% 00:33:05.680 lat (msec) : 100=5.51% 00:33:05.680 cpu : usr=1.52%, sys=4.18%, ctx=212, majf=0, minf=1 00:33:05.680 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:33:05.680 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.680 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:05.680 issued rwts: total=2647,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:05.680 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:05.680 00:33:05.680 Run status group 0 (all jobs): 00:33:05.680 READ: bw=62.6MiB/s (65.6MB/s), 9.82MiB/s-20.2MiB/s (10.3MB/s-21.2MB/s), io=65.9MiB (69.1MB), run=1010-1053msec 00:33:05.680 WRITE: bw=66.5MiB/s (69.7MB/s), 11.4MiB/s-21.8MiB/s (11.9MB/s-22.8MB/s), io=70.0MiB (73.4MB), run=1010-1053msec 00:33:05.680 00:33:05.680 Disk stats (read/write): 00:33:05.680 nvme0n1: ios=4111/4128, merge=0/0, ticks=46415/43582, in_queue=89997, util=88.88% 00:33:05.680 nvme0n2: ios=3634/3897, merge=0/0, ticks=17046/19390, in_queue=36436, util=89.82% 00:33:05.680 nvme0n3: ios=4153/4602, merge=0/0, ticks=52952/50497, in_queue=103449, util=93.07% 00:33:05.680 nvme0n4: ios=2412/2560, merge=0/0, ticks=27055/24647, in_queue=51702, util=92.79% 00:33:05.680 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:33:05.680 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=325596 00:33:05.680 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:33:05.680 10:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:33:05.680 [global] 00:33:05.680 thread=1 00:33:05.680 invalidate=1 00:33:05.680 rw=read 00:33:05.680 time_based=1 00:33:05.680 runtime=10 00:33:05.680 ioengine=libaio 00:33:05.680 direct=1 00:33:05.680 bs=4096 00:33:05.680 
iodepth=1 00:33:05.680 norandommap=1 00:33:05.680 numjobs=1 00:33:05.680 00:33:05.680 [job0] 00:33:05.680 filename=/dev/nvme0n1 00:33:05.680 [job1] 00:33:05.680 filename=/dev/nvme0n2 00:33:05.680 [job2] 00:33:05.680 filename=/dev/nvme0n3 00:33:05.680 [job3] 00:33:05.680 filename=/dev/nvme0n4 00:33:05.680 Could not set queue depth (nvme0n1) 00:33:05.680 Could not set queue depth (nvme0n2) 00:33:05.680 Could not set queue depth (nvme0n3) 00:33:05.680 Could not set queue depth (nvme0n4) 00:33:05.954 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:05.954 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:05.954 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:05.954 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:05.954 fio-3.35 00:33:05.954 Starting 4 threads 00:33:09.227 10:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:33:09.227 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=4460544, buflen=4096 00:33:09.227 fio: pid=325917, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:09.227 10:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:33:09.227 10:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:09.227 10:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:33:09.227 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=552960, buflen=4096 00:33:09.227 fio: pid=325916, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:09.227 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=5197824, buflen=4096 00:33:09.227 fio: pid=325908, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:09.227 10:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:09.227 10:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:33:09.484 10:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:09.484 10:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:33:09.484 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=4698112, buflen=4096 00:33:09.484 fio: pid=325915, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:09.484 00:33:09.484 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=325908: Wed Dec 11 10:11:19 2024 00:33:09.485 read: IOPS=406, 
BW=1623KiB/s (1662kB/s)(5076KiB/3127msec) 00:33:09.485 slat (usec): min=3, max=31107, avg=46.91, stdev=939.40 00:33:09.485 clat (usec): min=174, max=43886, avg=2394.95, stdev=9195.63 00:33:09.485 lat (usec): min=178, max=43916, avg=2441.90, stdev=9238.89 00:33:09.485 clat percentiles (usec): 00:33:09.485 | 1.00th=[ 178], 5.00th=[ 180], 10.00th=[ 182], 20.00th=[ 184], 00:33:09.485 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 200], 00:33:09.485 | 70.00th=[ 212], 80.00th=[ 249], 90.00th=[ 255], 95.00th=[40633], 00:33:09.485 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[43779], 00:33:09.485 | 99.99th=[43779] 00:33:09.485 bw ( KiB/s): min= 96, max= 7883, per=32.21%, avg=1395.17, stdev=3178.38, samples=6 00:33:09.485 iops : min= 24, max= 1970, avg=348.67, stdev=794.29, samples=6 00:33:09.485 lat (usec) : 250=84.02%, 500=10.39%, 750=0.16% 00:33:09.485 lat (msec) : 50=5.35% 00:33:09.485 cpu : usr=0.13%, sys=0.32%, ctx=1276, majf=0, minf=1 00:33:09.485 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:09.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.485 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.485 issued rwts: total=1270,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:09.485 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:09.485 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=325915: Wed Dec 11 10:11:19 2024 00:33:09.485 read: IOPS=341, BW=1365KiB/s (1397kB/s)(4588KiB/3362msec) 00:33:09.485 slat (usec): min=6, max=15753, avg=37.03, stdev=590.60 00:33:09.485 clat (usec): min=176, max=42049, avg=2873.33, stdev=10021.93 00:33:09.485 lat (usec): min=183, max=47053, avg=2896.66, stdev=10049.25 00:33:09.485 clat percentiles (usec): 00:33:09.485 | 1.00th=[ 180], 5.00th=[ 186], 10.00th=[ 192], 20.00th=[ 243], 00:33:09.485 | 30.00th=[ 245], 40.00th=[ 247], 50.00th=[ 247], 60.00th=[ 249], 00:33:09.485 | 70.00th=[ 251], 80.00th=[ 253], 90.00th=[ 262], 95.00th=[41157], 00:33:09.485 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:33:09.485 | 99.99th=[42206] 00:33:09.485 bw ( KiB/s): min= 96, max= 5976, per=33.57%, avg=1454.00, stdev=2390.86, samples=6 00:33:09.485 iops : min= 24, max= 1494, avg=363.50, stdev=597.71, samples=6 00:33:09.485 lat (usec) : 250=66.46%, 500=26.92%, 750=0.09% 00:33:09.485 lat (msec) : 50=6.45% 00:33:09.485 cpu : usr=0.06%, sys=0.60%, ctx=1151, majf=0, minf=2 00:33:09.485 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:09.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.485 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.485 issued rwts: total=1148,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:09.485 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:09.485 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=325916: Wed Dec 11 10:11:19 2024 00:33:09.485 read: IOPS=46, BW=184KiB/s (189kB/s)(540KiB/2929msec) 00:33:09.485 slat (usec): min=8, max=12742, avg=110.42, stdev=1091.25 00:33:09.485 clat (usec): min=222, max=41521, avg=21424.01, stdev=20449.33 00:33:09.485 lat (usec): min=232, max=53918, avg=21535.07, stdev=20568.85 00:33:09.485 clat percentiles (usec): 00:33:09.485 | 1.00th=[ 225], 5.00th=[ 229], 10.00th=[ 235], 20.00th=[ 253], 00:33:09.485 | 30.00th=[ 265], 40.00th=[ 273], 
50.00th=[40633], 60.00th=[40633], 00:33:09.485 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:33:09.485 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:33:09.485 | 99.99th=[41681] 00:33:09.485 bw ( KiB/s): min= 104, max= 264, per=4.34%, avg=188.80, stdev=69.19, samples=5 00:33:09.485 iops : min= 26, max= 66, avg=47.20, stdev=17.30, samples=5 00:33:09.485 lat (usec) : 250=18.38%, 500=27.94%, 1000=0.74% 00:33:09.485 lat (msec) : 2=0.74%, 50=51.47% 00:33:09.485 cpu : usr=0.17%, sys=0.00%, ctx=137, majf=0, minf=2 00:33:09.485 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:09.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.485 complete : 0=0.7%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.485 issued rwts: total=136,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:09.485 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:09.485 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=325917: Wed Dec 11 10:11:19 2024 00:33:09.485 read: IOPS=402, BW=1607KiB/s (1645kB/s)(4356KiB/2711msec) 00:33:09.485 slat (nsec): min=7033, max=44969, avg=9018.53, stdev=3691.79 00:33:09.485 clat (usec): min=197, max=41982, avg=2458.05, stdev=9236.50 00:33:09.485 lat (usec): min=206, max=42005, avg=2467.06, stdev=9239.66 00:33:09.485 clat percentiles (usec): 00:33:09.485 | 1.00th=[ 233], 5.00th=[ 239], 10.00th=[ 243], 20.00th=[ 245], 00:33:09.485 | 30.00th=[ 247], 40.00th=[ 247], 50.00th=[ 249], 60.00th=[ 249], 00:33:09.485 | 70.00th=[ 251], 80.00th=[ 253], 90.00th=[ 258], 95.00th=[40633], 00:33:09.485 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:33:09.485 | 99.99th=[42206] 00:33:09.485 bw ( KiB/s): min= 96, max= 6768, per=40.04%, avg=1734.40, stdev=2888.77, samples=5 00:33:09.485 iops : min= 24, max= 1692, avg=433.60, stdev=722.19, samples=5 00:33:09.485 lat (usec) : 250=62.20%, 500=32.29% 00:33:09.485 lat (msec) : 50=5.41% 00:33:09.485 cpu : usr=0.15%, sys=0.74%, ctx=1090, majf=0, minf=2 00:33:09.485 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:09.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.485 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:09.485 issued rwts: total=1090,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:09.485 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:09.485 00:33:09.485 Run status group 0 (all jobs): 00:33:09.485 READ: bw=4331KiB/s (4435kB/s), 184KiB/s-1623KiB/s (189kB/s-1662kB/s), io=14.2MiB (14.9MB), run=2711-3362msec 00:33:09.485 00:33:09.485 Disk stats (read/write): 00:33:09.485 nvme0n1: ios=1215/0, merge=0/0, ticks=3020/0, in_queue=3020, util=93.77% 00:33:09.485 nvme0n2: ios=1147/0, merge=0/0, ticks=3288/0, in_queue=3288, util=95.36% 00:33:09.485 nvme0n3: ios=156/0, merge=0/0, ticks=2976/0, in_queue=2976, util=96.88% 00:33:09.485 nvme0n4: ios=1086/0, merge=0/0, ticks=2558/0, in_queue=2558, util=96.46% 00:33:09.742 10:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:09.742 10:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:33:09.999 10:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:09.999 10:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:33:09.999 10:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:09.999 10:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:33:10.256 10:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:10.256 10:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:33:10.513 10:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:33:10.513 10:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 325596 00:33:10.513 10:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:33:10.513 10:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:10.513 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:33:10.513 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:10.513 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:33:10.513 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:33:10.513 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:10.770 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:33:10.771 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:10.771 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:33:10.771 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:33:10.771 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:33:10.771 nvmf hotplug test: fio failed as expected 00:33:10.771 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:10.771 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:33:10.771 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:33:10.771 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f 
./local-job2-2-verify.state 00:33:10.771 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:33:10.771 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:33:10.771 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:10.771 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:33:10.771 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:10.771 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:33:10.771 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:10.771 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:10.771 rmmod nvme_tcp 00:33:10.771 rmmod nvme_fabrics 00:33:11.029 rmmod nvme_keyring 00:33:11.029 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:11.029 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:33:11.029 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:33:11.029 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 323010 ']' 00:33:11.029 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 323010 00:33:11.029 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 323010 ']' 00:33:11.029 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 323010 00:33:11.029 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:33:11.029 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:11.029 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 323010 00:33:11.029 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:11.029 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:11.029 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 323010' 00:33:11.029 killing process with pid 323010 00:33:11.029 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 323010 00:33:11.029 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 323010 00:33:11.288 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:11.288 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:11.288 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:11.288 10:11:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:33:11.288 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:33:11.288 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:11.288 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:33:11.288 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:11.288 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:11.288 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:11.288 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:11.288 10:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:13.192 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:13.192 00:33:13.192 real 0m27.400s 00:33:13.192 user 1m32.468s 00:33:13.192 sys 0m11.348s 00:33:13.192 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:13.192 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:13.192 ************************************ 00:33:13.192 END TEST nvmf_fio_target 00:33:13.192 ************************************ 00:33:13.192 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:33:13.192 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:13.192 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:13.192 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:13.192 ************************************ 00:33:13.192 START TEST nvmf_bdevio 00:33:13.192 ************************************ 00:33:13.192 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:33:13.452 * Looking for test storage... 
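
The nvmf_fio_target phase that ends above boils down to one hotplug pattern: fio-wrapper starts a background read workload against the exported namespaces, the backing bdevs are deleted out from under it over RPC, and the test passes only if fio exits nonzero with err=95 (Operation not supported) on each affected file. A condensed sketch of that sequence as traced, with the fio-wrapper path abbreviated and the wait/status handling simplified (both are readability assumptions, not the literal target/fio.sh source):

    # Condensed sketch of the hotplug check in target/fio.sh, per the trace above.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &   # 10 s background read job
    fio_pid=$!
    sleep 3
    "$rpc" bdev_raid_delete concat0     # each delete surfaces in fio as err=95
    "$rpc" bdev_raid_delete raid0       # (Operation not supported) on that namespace
    for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        "$rpc" bdev_malloc_delete "$m"
    done
    fio_status=0
    wait "$fio_pid" || fio_status=4     # trace shows fio_status=4 after the wait
    [ "$fio_status" -ne 0 ] && echo 'nvmf hotplug test: fio failed as expected'

The trace confirms exactly this outcome: all four jobs abort with err=95, fio_status is 4, and the script prints the "failed as expected" marker before tearing the subsystem down.
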
00:33:13.452 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:13.452 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:13.452 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:33:13.452 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:13.452 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:13.452 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:13.452 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:13.452 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:13.452 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:33:13.452 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:33:13.452 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:33:13.452 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:33:13.452 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:33:13.452 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:13.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:13.453 --rc genhtml_branch_coverage=1 00:33:13.453 --rc genhtml_function_coverage=1 00:33:13.453 --rc genhtml_legend=1 00:33:13.453 --rc geninfo_all_blocks=1 00:33:13.453 --rc geninfo_unexecuted_blocks=1 00:33:13.453 00:33:13.453 ' 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:13.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:13.453 --rc genhtml_branch_coverage=1 00:33:13.453 --rc genhtml_function_coverage=1 00:33:13.453 --rc genhtml_legend=1 00:33:13.453 --rc geninfo_all_blocks=1 00:33:13.453 --rc geninfo_unexecuted_blocks=1 00:33:13.453 00:33:13.453 ' 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:13.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:13.453 --rc genhtml_branch_coverage=1 00:33:13.453 --rc genhtml_function_coverage=1 00:33:13.453 --rc genhtml_legend=1 00:33:13.453 --rc geninfo_all_blocks=1 00:33:13.453 --rc geninfo_unexecuted_blocks=1 00:33:13.453 00:33:13.453 ' 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:13.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:13.453 --rc genhtml_branch_coverage=1 00:33:13.453 --rc genhtml_function_coverage=1 00:33:13.453 --rc genhtml_legend=1 00:33:13.453 --rc geninfo_all_blocks=1 00:33:13.453 --rc geninfo_unexecuted_blocks=1 00:33:13.453 00:33:13.453 ' 00:33:13.453 10:11:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:13.453 10:11:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:13.453 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:13.454 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:13.454 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:13.454 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:13.454 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:33:13.454 10:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:20.021 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:20.021 10:11:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:20.021 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:20.021 Found net devices under 0000:af:00.0: cvl_0_0 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:20.021 Found net devices under 0000:af:00.1: cvl_0_1 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:20.021 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:20.022 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:20.022 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:20.022 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:20.022 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:20.022 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:20.022 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:20.022 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:20.022 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:20.022 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:20.022 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:20.022 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:20.022 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:20.022 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:20.022 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:20.022 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:20.022 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:20.022 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:20.022 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:20.022 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:20.022 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:20.022 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:33:20.022 00:33:20.022 --- 10.0.0.2 ping statistics --- 00:33:20.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:20.022 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:33:20.022 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:20.281 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:20.281 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:33:20.281 00:33:20.281 --- 10.0.0.1 ping statistics --- 00:33:20.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:20.281 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:33:20.281 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:20.281 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:33:20.281 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:20.281 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:20.281 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:20.281 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:20.281 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:20.281 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:20.281 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:20.281 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:33:20.281 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:20.281 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:20.281 10:11:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:20.281 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=330433 00:33:20.281 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 330433 00:33:20.281 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:33:20.281 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 330433 ']' 00:33:20.281 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:20.281 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:20.281 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:20.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:20.281 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:20.281 10:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:20.281 [2024-12-11 10:11:29.691469] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:20.281 [2024-12-11 10:11:29.692356] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:33:20.281 [2024-12-11 10:11:29.692391] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:20.281 [2024-12-11 10:11:29.777956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:20.281 [2024-12-11 10:11:29.818887] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:20.281 [2024-12-11 10:11:29.818925] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:20.281 [2024-12-11 10:11:29.818932] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:20.281 [2024-12-11 10:11:29.818938] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:20.281 [2024-12-11 10:11:29.818945] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:20.281 [2024-12-11 10:11:29.820551] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:33:20.281 [2024-12-11 10:11:29.820667] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:33:20.281 [2024-12-11 10:11:29.820755] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:33:20.281 [2024-12-11 10:11:29.820757] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:33:20.540 [2024-12-11 10:11:29.889187] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
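
The -m 0x78 passed to nvmf_tgt above is a hexadecimal core mask, not a core count: bits 3 through 6 are set, which is why EAL reports "Total cores available: 4" and reactors start on cores 3, 4, 5 and 6. A small sketch for decoding any such mask (plain shell, not part of the test scripts):

    # Decode an SPDK/DPDK core mask: print each CPU whose bit is set.
    mask=0x78          # binary 0111 1000
    for bit in $(seq 0 31); do
        (( (mask >> bit) & 1 )) && echo "core $bit"
    done
    # Output: core 3, core 4, core 5, core 6, matching the four reactors above.
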
00:33:20.540 [2024-12-11 10:11:29.889997] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:20.540 [2024-12-11 10:11:29.890294] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:20.540 [2024-12-11 10:11:29.890881] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:20.540 [2024-12-11 10:11:29.890916] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:21.108 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:21.109 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:33:21.109 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:21.109 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:21.109 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:21.109 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:21.109 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:21.109 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.109 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:21.109 [2024-12-11 10:11:30.589634] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:21.109 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.109 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:21.109 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.109 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:21.109 Malloc0 00:33:21.109 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.109 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:21.109 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.109 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:21.109 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.109 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:21.109 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.109 10:11:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:21.109 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.109 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:21.109 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.109 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:21.109 [2024-12-11 10:11:30.677735] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:21.109 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.366 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:33:21.366 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:33:21.366 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:33:21.366 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:33:21.366 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:21.366 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:21.366 { 00:33:21.366 "params": { 00:33:21.366 "name": "Nvme$subsystem", 00:33:21.366 "trtype": "$TEST_TRANSPORT", 00:33:21.366 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:21.366 "adrfam": "ipv4", 00:33:21.366 "trsvcid": "$NVMF_PORT", 00:33:21.366 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:21.366 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:21.366 "hdgst": ${hdgst:-false}, 00:33:21.366 "ddgst": ${ddgst:-false} 00:33:21.366 }, 00:33:21.366 "method": "bdev_nvme_attach_controller" 00:33:21.366 } 00:33:21.366 EOF 00:33:21.366 )") 00:33:21.366 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:33:21.366 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:33:21.366 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:33:21.366 10:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:21.366 "params": { 00:33:21.366 "name": "Nvme1", 00:33:21.366 "trtype": "tcp", 00:33:21.366 "traddr": "10.0.0.2", 00:33:21.366 "adrfam": "ipv4", 00:33:21.366 "trsvcid": "4420", 00:33:21.366 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:21.366 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:21.366 "hdgst": false, 00:33:21.366 "ddgst": false 00:33:21.366 }, 00:33:21.366 "method": "bdev_nvme_attach_controller" 00:33:21.366 }' 00:33:21.366 [2024-12-11 10:11:30.728192] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
00:33:21.366 [2024-12-11 10:11:30.728243] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid330658 ] 00:33:21.366 [2024-12-11 10:11:30.810499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:21.366 [2024-12-11 10:11:30.852531] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:33:21.367 [2024-12-11 10:11:30.852640] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:33:21.367 [2024-12-11 10:11:30.852641] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:33:21.624 I/O targets: 00:33:21.624 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:33:21.624 00:33:21.624 00:33:21.624 CUnit - A unit testing framework for C - Version 2.1-3 00:33:21.624 http://cunit.sourceforge.net/ 00:33:21.624 00:33:21.624 00:33:21.624 Suite: bdevio tests on: Nvme1n1 00:33:21.624 Test: blockdev write read block ...passed 00:33:21.881 Test: blockdev write zeroes read block ...passed 00:33:21.881 Test: blockdev write zeroes read no split ...passed 00:33:21.881 Test: blockdev write zeroes read split ...passed 00:33:21.881 Test: blockdev write zeroes read split partial ...passed 00:33:21.881 Test: blockdev reset ...[2024-12-11 10:11:31.273699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:33:21.881 [2024-12-11 10:11:31.273759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11169d0 (9): Bad file descriptor 00:33:21.881 [2024-12-11 10:11:31.318014] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:33:21.881 passed 00:33:21.881 Test: blockdev write read 8 blocks ...passed 00:33:21.881 Test: blockdev write read size > 128k ...passed 00:33:21.881 Test: blockdev write read invalid size ...passed 00:33:21.881 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:33:21.881 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:33:21.881 Test: blockdev write read max offset ...passed 00:33:21.881 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:33:21.881 Test: blockdev writev readv 8 blocks ...passed 00:33:21.881 Test: blockdev writev readv 30 x 1block ...passed 00:33:22.138 Test: blockdev writev readv block ...passed 00:33:22.138 Test: blockdev writev readv size > 128k ...passed 00:33:22.138 Test: blockdev writev readv size > 128k in two iovs ...passed 00:33:22.138 Test: blockdev comparev and writev ...[2024-12-11 10:11:31.487137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:22.138 [2024-12-11 10:11:31.487164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:22.138 [2024-12-11 10:11:31.487178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:22.138 [2024-12-11 10:11:31.487186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:22.138 [2024-12-11 10:11:31.487470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:22.138 [2024-12-11 10:11:31.487480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:22.138 [2024-12-11 10:11:31.487492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:22.138 [2024-12-11 10:11:31.487498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:22.138 [2024-12-11 10:11:31.487782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:22.138 [2024-12-11 10:11:31.487793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:22.138 [2024-12-11 10:11:31.487804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:22.138 [2024-12-11 10:11:31.487811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:22.139 [2024-12-11 10:11:31.488096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:22.139 [2024-12-11 10:11:31.488107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:22.139 [2024-12-11 10:11:31.488117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:22.139 [2024-12-11 10:11:31.488125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:22.139 passed 00:33:22.139 Test: blockdev nvme passthru rw ...passed 00:33:22.139 Test: blockdev nvme passthru vendor specific ...[2024-12-11 10:11:31.570616] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:22.139 [2024-12-11 10:11:31.570631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:22.139 [2024-12-11 10:11:31.570741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:22.139 [2024-12-11 10:11:31.570751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:22.139 [2024-12-11 10:11:31.570857] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:22.139 [2024-12-11 10:11:31.570866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:22.139 [2024-12-11 10:11:31.570970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:22.139 [2024-12-11 10:11:31.570979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:22.139 passed 00:33:22.139 Test: blockdev nvme admin passthru ...passed 00:33:22.139 Test: blockdev copy ...passed 00:33:22.139 00:33:22.139 Run Summary: Type Total Ran Passed Failed Inactive 00:33:22.139 suites 1 1 n/a 0 0 00:33:22.139 tests 23 23 23 0 0 00:33:22.139 asserts 152 152 152 0 n/a 00:33:22.139 00:33:22.139 Elapsed time = 1.009 seconds 00:33:22.397 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:22.397 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.397 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:22.397 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.397 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:33:22.397 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:33:22.397 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:22.397 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:33:22.397 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:22.397 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:33:22.397 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:22.397 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:22.397 rmmod nvme_tcp 00:33:22.397 rmmod nvme_fabrics 00:33:22.397 rmmod nvme_keyring 00:33:22.397 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
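The nvmftestfini teardown continues below; stripped of tracing, it is roughly this sketch (process id, interface, and namespace names are from this run; the ip netns delete line is an assumption about what _remove_spdk_ns does internally):

kill "$nvmfpid" && wait "$nvmfpid"           # killprocess 330433
# Remove only the rules this test tagged, leaving other firewall state alone.
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk              # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1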
00:33:22.397 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:33:22.397 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:33:22.397 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 330433 ']' 00:33:22.397 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 330433 00:33:22.397 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 330433 ']' 00:33:22.397 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 330433 00:33:22.397 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:33:22.397 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:22.397 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 330433 00:33:22.397 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:33:22.397 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:33:22.397 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 330433' 00:33:22.397 killing process with pid 330433 00:33:22.397 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 330433 00:33:22.397 10:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 330433 00:33:22.656 10:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:22.656 10:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:22.657 10:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:22.657 10:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:33:22.657 10:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:33:22.657 10:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:22.657 10:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:33:22.657 10:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:22.657 10:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:22.657 10:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:22.657 10:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:22.657 10:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:25.193 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:25.193 00:33:25.193 real 0m11.399s 00:33:25.193 user 0m9.231s 
00:33:25.193 sys 0m5.908s 00:33:25.193 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:25.193 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:25.193 ************************************ 00:33:25.193 END TEST nvmf_bdevio 00:33:25.193 ************************************ 00:33:25.193 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:33:25.193 00:33:25.193 real 4m47.067s 00:33:25.193 user 9m12.478s 00:33:25.193 sys 1m58.151s 00:33:25.193 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:25.193 10:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:25.193 ************************************ 00:33:25.193 END TEST nvmf_target_core_interrupt_mode 00:33:25.193 ************************************ 00:33:25.193 10:11:34 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:33:25.193 10:11:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:25.193 10:11:34 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:25.193 10:11:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:25.193 ************************************ 00:33:25.193 START TEST nvmf_interrupt 00:33:25.193 ************************************ 00:33:25.193 10:11:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:33:25.193 * Looking for test storage... 
00:33:25.193 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:25.193 10:11:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:25.193 10:11:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:33:25.193 10:11:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:25.193 10:11:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:25.193 10:11:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:25.193 10:11:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:25.193 10:11:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:25.193 10:11:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:33:25.193 10:11:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:33:25.193 10:11:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:33:25.193 10:11:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:33:25.193 10:11:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:33:25.193 10:11:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:33:25.193 10:11:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:33:25.193 10:11:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:25.193 10:11:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:33:25.193 10:11:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:33:25.193 10:11:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:25.193 10:11:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:25.193 10:11:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:33:25.193 10:11:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:33:25.193 10:11:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:25.193 10:11:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:33:25.193 10:11:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:33:25.193 10:11:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:33:25.193 10:11:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:33:25.193 10:11:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:25.193 10:11:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:33:25.193 10:11:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:33:25.193 10:11:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:25.193 10:11:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:25.193 10:11:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:33:25.193 10:11:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:25.193 10:11:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:25.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:25.193 --rc genhtml_branch_coverage=1 00:33:25.193 --rc genhtml_function_coverage=1 00:33:25.193 --rc genhtml_legend=1 00:33:25.193 --rc geninfo_all_blocks=1 00:33:25.193 --rc geninfo_unexecuted_blocks=1 00:33:25.193 00:33:25.193 ' 00:33:25.193 10:11:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:25.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:25.193 --rc genhtml_branch_coverage=1 00:33:25.193 --rc genhtml_function_coverage=1 00:33:25.193 --rc genhtml_legend=1 00:33:25.193 --rc geninfo_all_blocks=1 00:33:25.193 --rc geninfo_unexecuted_blocks=1 00:33:25.193 00:33:25.193 ' 00:33:25.193 10:11:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:25.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:25.193 --rc genhtml_branch_coverage=1 00:33:25.193 --rc genhtml_function_coverage=1 00:33:25.193 --rc genhtml_legend=1 00:33:25.193 --rc geninfo_all_blocks=1 00:33:25.193 --rc geninfo_unexecuted_blocks=1 00:33:25.193 00:33:25.193 ' 00:33:25.193 10:11:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:25.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:25.193 --rc genhtml_branch_coverage=1 00:33:25.193 --rc genhtml_function_coverage=1 00:33:25.193 --rc genhtml_legend=1 00:33:25.193 --rc geninfo_all_blocks=1 00:33:25.193 --rc geninfo_unexecuted_blocks=1 00:33:25.194 00:33:25.194 ' 00:33:25.194 10:11:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:25.194 10:11:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:33:25.194 10:11:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:25.194 10:11:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:25.194 10:11:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:25.194 10:11:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:33:25.194 10:11:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:25.194 10:11:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:25.194 10:11:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:25.194 10:11:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:25.194 10:11:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:25.194 10:11:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:25.194 10:11:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:33:25.194 10:11:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:33:25.194 10:11:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:25.194 10:11:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:25.194 10:11:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:25.194 10:11:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:25.194 10:11:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:25.194 10:11:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:33:25.194 10:11:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:25.194 10:11:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:25.194 10:11:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:25.194 10:11:34 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:25.194 10:11:34 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:25.194 10:11:34 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:25.194 10:11:34 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:33:25.194 10:11:34 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:25.194 10:11:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:33:25.194 10:11:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:25.194 10:11:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:25.194 10:11:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:25.194 10:11:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:25.194 10:11:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:25.194 10:11:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:25.194 10:11:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:25.194 10:11:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:25.194 10:11:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:25.194 10:11:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:25.194 10:11:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:33:25.194 10:11:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:33:25.194 10:11:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:33:25.194 10:11:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:25.194 10:11:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:25.194 10:11:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:25.194 10:11:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:25.194 10:11:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:25.194 10:11:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:25.194 10:11:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:25.194 10:11:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:25.194 10:11:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:25.194 10:11:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:25.194 10:11:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:33:25.194 10:11:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:31.763 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:31.763 10:11:40 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:31.763 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:31.763 Found net devices under 0000:af:00.0: cvl_0_0 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:31.763 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:31.764 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:31.764 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:31.764 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:31.764 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:31.764 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:31.764 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:31.764 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:31.764 Found net devices under 0000:af:00.1: cvl_0_1 00:33:31.764 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:31.764 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:31.764 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:33:31.764 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:31.764 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:31.764 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:31.764 10:11:40 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:31.764 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:31.764 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:31.764 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:31.764 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:31.764 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:31.764 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:31.764 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:31.764 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:31.764 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:31.764 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:31.764 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:31.764 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:31.764 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:31.764 10:11:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:31.764 10:11:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:31.764 10:11:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:31.764 10:11:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:31.764 10:11:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:31.764 10:11:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:31.764 10:11:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:31.764 10:11:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:31.764 10:11:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:31.764 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:31.764 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:33:31.764 00:33:31.764 --- 10.0.0.2 ping statistics --- 00:33:31.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:31.764 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:33:31.764 10:11:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:31.764 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:31.764 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:33:31.764 00:33:31.764 --- 10.0.0.1 ping statistics --- 00:33:31.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:31.764 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:33:31.764 10:11:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:31.764 10:11:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:33:31.764 10:11:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:31.764 10:11:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:31.764 10:11:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:31.764 10:11:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:31.764 10:11:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:31.764 10:11:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:31.764 10:11:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:31.764 10:11:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:33:31.764 10:11:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:31.764 10:11:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:31.764 10:11:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:31.764 10:11:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=334678 00:33:31.764 10:11:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 334678 00:33:31.764 10:11:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:33:31.764 10:11:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 334678 ']' 00:33:31.764 10:11:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:31.764 10:11:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:31.764 10:11:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:31.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:31.764 10:11:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:31.764 10:11:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:31.764 [2024-12-11 10:11:41.249197] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:31.764 [2024-12-11 10:11:41.250116] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:33:31.764 [2024-12-11 10:11:41.250150] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:31.764 [2024-12-11 10:11:41.334024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:32.023 [2024-12-11 10:11:41.373685] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:33:32.023 [2024-12-11 10:11:41.373721] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:32.023 [2024-12-11 10:11:41.373728] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:32.023 [2024-12-11 10:11:41.373734] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:32.023 [2024-12-11 10:11:41.373739] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:32.023 [2024-12-11 10:11:41.374935] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:33:32.023 [2024-12-11 10:11:41.374936] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:33:32.023 [2024-12-11 10:11:41.443535] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:32.023 [2024-12-11 10:11:41.444134] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:32.023 [2024-12-11 10:11:41.444308] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:32.592 10:11:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:32.592 10:11:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:33:32.592 10:11:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:32.592 10:11:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:32.592 10:11:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:32.592 10:11:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:32.592 10:11:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:33:32.592 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:33:32.592 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:33:32.592 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:33:32.592 5000+0 records in 00:33:32.592 5000+0 records out 00:33:32.592 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0179984 s, 569 MB/s 00:33:32.592 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:33:32.592 10:11:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.592 10:11:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:32.852 AIO0 00:33:32.852 10:11:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.852 10:11:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:33:32.852 10:11:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.852 10:11:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:32.852 [2024-12-11 10:11:42.203746] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:32.852 10:11:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.852 10:11:42 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:32.852 10:11:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.852 10:11:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:32.852 10:11:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.852 10:11:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:33:32.852 10:11:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.852 10:11:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:32.852 10:11:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.852 10:11:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:32.852 10:11:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.852 10:11:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:32.852 [2024-12-11 10:11:42.244039] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:32.852 10:11:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.852 10:11:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:33:32.852 10:11:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 334678 0 00:33:32.852 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 334678 0 idle 00:33:32.852 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=334678 00:33:32.852 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:32.852 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:32.852 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:32.852 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:32.852 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:32.852 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:32.852 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:32.852 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:32.852 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:32.852 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 334678 -w 256 00:33:32.852 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:32.852 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 334678 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.26 reactor_0' 00:33:33.111 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 334678 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.26 reactor_0 00:33:33.111 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:33.111 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:33.111 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:33.111 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:33:33.111 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:33.112 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:33.112 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:33.112 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:33.112 10:11:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:33:33.112 10:11:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 334678 1 00:33:33.112 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 334678 1 idle 00:33:33.112 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=334678 00:33:33.112 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:33:33.112 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:33.112 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:33.112 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:33.112 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:33.112 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:33.112 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:33.112 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:33.112 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:33.112 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 334678 -w 256 00:33:33.112 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:33.112 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 334695 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.00 reactor_1' 00:33:33.112 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 334695 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.00 reactor_1 00:33:33.112 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:33.112 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:33.112 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:33.112 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:33.112 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:33.112 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:33.112 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:33.112 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:33.112 10:11:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:33:33.112 10:11:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=334939 00:33:33.112 10:11:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:33:33.112 10:11:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
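The busy check that follows takes a single non-interactive top snapshot of the target's threads and compares the reactor's %CPU column against the threshold (BUSY_THRESHOLD is lowered to 30 while perf runs). Condensed from the interrupt/common.sh trace here, with reactor_cpu as a hypothetical helper name:

# %CPU of thread reactor_$2 inside pid $1, from one batch-mode top pass
reactor_cpu() {
    top -bHn 1 -p "$1" -w 256 | grep "reactor_$2" | awk '{print $9}'
}
rate=$(reactor_cpu 334678 0)   # 73.3 below, once spdk_nvme_perf is running
rate=${rate%.*}                # keep the integer part, as common.sh does
(( rate >= 30 )) && echo busy || echo idle   # both thresholds are 30 here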
00:33:33.112 10:11:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:33:33.112 10:11:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 334678 0 00:33:33.112 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 334678 0 busy 00:33:33.112 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=334678 00:33:33.112 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:33.112 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:33:33.112 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:33:33.112 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:33.112 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:33:33.112 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:33.112 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:33.112 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:33.112 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 334678 -w 256 00:33:33.112 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:33.370 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 334678 root 20 0 128.2g 47616 34560 R 73.3 0.0 0:00.37 reactor_0' 00:33:33.370 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 334678 root 20 0 128.2g 47616 34560 R 73.3 0.0 0:00.37 reactor_0 00:33:33.370 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:33.370 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:33.370 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=73.3 00:33:33.370 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=73 00:33:33.370 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:33:33.370 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:33:33.370 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:33:33.370 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:33.370 10:11:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:33:33.370 10:11:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:33:33.370 10:11:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 334678 1 00:33:33.370 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 334678 1 busy 00:33:33.370 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=334678 00:33:33.370 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:33:33.370 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:33:33.370 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:33:33.370 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:33.370 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:33:33.370 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:33.370 10:11:42 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:33:33.370 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:33.370 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 334678 -w 256 00:33:33.370 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:33.628 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 334695 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:00.24 reactor_1' 00:33:33.628 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 334695 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:00.24 reactor_1 00:33:33.628 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:33.628 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:33.628 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:33:33.628 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:33:33.628 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:33:33.628 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:33:33.628 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:33:33.628 10:11:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:33.628 10:11:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 334939 00:33:43.593 Initializing NVMe Controllers 00:33:43.593 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:43.593 Controller IO queue size 256, less than required. 00:33:43.593 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:43.593 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:33:43.593 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:33:43.593 Initialization complete. Launching workers. 
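The summary table that follows is internally consistent, which is a quick way to sanity-check a run. With 4096-byte I/Os, MiB/s is simply IOPS/256 (4096 B per I/O, 2^20 B per MiB), and the totals row is derived from the two per-core rows:

    Total IOPS : 16368.09 + 16652.99 = 33021.08   (reported 33021.09; the inputs are rounded)
    MiB/s      : 33021.09 / 256 = 128.99
    Average    : IOPS-weighted mean of the per-core latencies,
                 (16368.09 * 15648.30 + 16652.99 * 15377.29) / 33021.09 = 15511.6 us (approx.)
    min / max  : min of the mins and max of the maxes, 2893.66 / 31637.02 us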
00:33:43.593 ========================================================
00:33:43.593 Latency(us)
00:33:43.593 Device Information : IOPS MiB/s Average min max
00:33:43.593 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16368.09 63.94 15648.30 2893.66 31637.02
00:33:43.593 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16652.99 65.05 15377.29 7583.16 27113.73
00:33:43.593 ========================================================
00:33:43.593 Total : 33021.09 128.99 15511.62 2893.66 31637.02
00:33:43.593
00:33:43.593 10:11:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:33:43.593 10:11:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 334678 0
00:33:43.593 10:11:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 334678 0 idle
00:33:43.593 10:11:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=334678
00:33:43.593 10:11:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:33:43.593 10:11:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:33:43.593 10:11:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:33:43.593 10:11:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:33:43.593 10:11:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:33:43.593 10:11:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:33:43.593 10:11:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:33:43.593 10:11:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:33:43.593 10:11:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:33:43.593 10:11:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 334678 -w 256
00:33:43.593 10:11:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:33:43.593 10:11:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 334678 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:20.25 reactor_0'
00:33:43.593 10:11:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 334678 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:20.25 reactor_0
00:33:43.593 10:11:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:33:43.593 10:11:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:33:43.593 10:11:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:33:43.593 10:11:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:33:43.593 10:11:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:33:43.593 10:11:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:33:43.593 10:11:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:33:43.593 10:11:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:33:43.593 10:11:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:33:43.593 10:11:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 334678 1
00:33:43.593 10:11:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 334678 1 idle
00:33:43.593 10:11:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=334678
00:33:43.593 10:11:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- #
local idx=1 00:33:43.593 10:11:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:43.593 10:11:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:43.593 10:11:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:43.593 10:11:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:43.593 10:11:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:43.593 10:11:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:43.593 10:11:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:43.593 10:11:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:43.593 10:11:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 334678 -w 256 00:33:43.593 10:11:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:43.593 10:11:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 334695 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:10.00 reactor_1' 00:33:43.593 10:11:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 334695 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:10.00 reactor_1 00:33:43.593 10:11:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:43.593 10:11:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:43.593 10:11:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:43.593 10:11:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:43.593 10:11:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:43.593 10:11:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:43.593 10:11:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:43.593 10:11:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:43.593 10:11:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:44.162 10:11:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:33:44.162 10:11:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:33:44.162 10:11:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:33:44.162 10:11:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:33:44.162 10:11:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:33:46.233 10:11:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:33:46.233 10:11:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:33:46.233 10:11:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:33:46.233 10:11:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:33:46.233 10:11:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:33:46.233 10:11:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:33:46.233 10:11:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # 
for i in {0..1} 00:33:46.233 10:11:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 334678 0 00:33:46.233 10:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 334678 0 idle 00:33:46.233 10:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=334678 00:33:46.233 10:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:46.233 10:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:46.233 10:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:46.233 10:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:46.233 10:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:46.233 10:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:46.233 10:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:46.233 10:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:46.233 10:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:46.233 10:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 334678 -w 256 00:33:46.233 10:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:46.233 10:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 334678 root 20 0 128.2g 73728 34560 S 0.0 0.1 0:20.52 reactor_0' 00:33:46.233 10:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 334678 root 20 0 128.2g 73728 34560 S 0.0 0.1 0:20.52 reactor_0 00:33:46.233 10:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:46.233 10:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:46.233 10:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:46.233 10:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:46.233 10:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:46.233 10:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:46.233 10:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:46.233 10:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:46.233 10:11:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:33:46.233 10:11:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 334678 1 00:33:46.233 10:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 334678 1 idle 00:33:46.233 10:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=334678 00:33:46.233 10:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:33:46.233 10:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:46.233 10:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:46.233 10:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:46.233 10:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:46.233 10:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:46.233 10:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:46.233 10:11:55 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:46.233 10:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:46.233 10:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 334678 -w 256 00:33:46.233 10:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:46.492 10:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 334695 root 20 0 128.2g 73728 34560 S 0.0 0.1 0:10.12 reactor_1' 00:33:46.492 10:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 334695 root 20 0 128.2g 73728 34560 S 0.0 0.1 0:10.12 reactor_1 00:33:46.492 10:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:46.492 10:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:46.492 10:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:46.492 10:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:46.492 10:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:46.492 10:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:46.492 10:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:46.492 10:11:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:46.492 10:11:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:46.751 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:33:46.751 10:11:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:46.752 10:11:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:33:46.752 10:11:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:33:46.752 10:11:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:46.752 10:11:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:33:46.752 10:11:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:46.752 10:11:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:33:46.752 10:11:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:33:46.752 10:11:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:33:46.752 10:11:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:46.752 10:11:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:33:46.752 10:11:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:46.752 10:11:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:33:46.752 10:11:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:46.752 10:11:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:46.752 rmmod nvme_tcp 00:33:46.752 rmmod nvme_fabrics 00:33:46.752 rmmod nvme_keyring 00:33:46.752 10:11:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:46.752 10:11:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:33:46.752 10:11:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:33:46.752 10:11:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 334678 ']' 00:33:46.752 
10:11:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 334678 00:33:46.752 10:11:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 334678 ']' 00:33:46.752 10:11:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 334678 00:33:46.752 10:11:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:33:46.752 10:11:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:46.752 10:11:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 334678 00:33:46.752 10:11:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:46.752 10:11:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:46.752 10:11:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 334678' 00:33:46.752 killing process with pid 334678 00:33:46.752 10:11:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 334678 00:33:46.752 10:11:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 334678 00:33:47.011 10:11:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:47.011 10:11:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:47.011 10:11:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:47.011 10:11:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:33:47.011 10:11:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:33:47.011 10:11:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:47.011 10:11:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:33:47.011 10:11:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:47.011 10:11:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:47.011 10:11:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:47.011 10:11:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:47.011 10:11:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:49.547 10:11:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:49.547 00:33:49.547 real 0m24.276s 00:33:49.547 user 0m40.053s 00:33:49.547 sys 0m9.108s 00:33:49.547 10:11:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:49.547 10:11:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:49.547 ************************************ 00:33:49.547 END TEST nvmf_interrupt 00:33:49.547 ************************************ 00:33:49.547 00:33:49.547 real 28m36.318s 00:33:49.547 user 57m17.047s 00:33:49.547 sys 9m56.346s 00:33:49.547 10:11:58 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:49.547 10:11:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:49.547 ************************************ 00:33:49.547 END TEST nvmf_tcp 00:33:49.547 ************************************ 00:33:49.547 10:11:58 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:33:49.547 10:11:58 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:49.547 10:11:58 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 
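Before the next test starts, note the teardown that just closed out the interrupt test above: it follows the usual nvmftestfini pattern. A condensed sketch of the sequence as it appears in the trace, where $nvmfpid stands for the target PID (334678 in this run):

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1          # drop the initiator session first
    kill "$nvmfpid" && wait "$nvmfpid"                     # stop the nvmf_tgt reactor process
    modprobe -v -r nvme-tcp                                # unload the kernel initiator modules
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip only the SPDK test rules
    ip -4 addr flush cvl_0_1                               # clear the test address off the NIC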
00:33:49.547 10:11:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:49.547 10:11:58 -- common/autotest_common.sh@10 -- # set +x 00:33:49.547 ************************************ 00:33:49.547 START TEST spdkcli_nvmf_tcp 00:33:49.547 ************************************ 00:33:49.547 10:11:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:49.547 * Looking for test storage... 00:33:49.547 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:33:49.547 10:11:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:49.547 10:11:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:33:49.547 10:11:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:49.547 10:11:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:49.547 10:11:58 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:49.547 10:11:58 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:49.547 10:11:58 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:49.547 10:11:58 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:33:49.547 10:11:58 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:33:49.547 10:11:58 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:33:49.547 10:11:58 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:33:49.547 10:11:58 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:33:49.547 10:11:58 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:33:49.547 10:11:58 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:33:49.547 10:11:58 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:49.547 10:11:58 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:33:49.547 10:11:58 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:33:49.547 10:11:58 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:49.547 10:11:58 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:49.547 10:11:58 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:33:49.547 10:11:58 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:33:49.547 10:11:58 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:49.547 10:11:58 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:33:49.547 10:11:58 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:33:49.547 10:11:58 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:33:49.547 10:11:58 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:33:49.547 10:11:58 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:49.547 10:11:58 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:33:49.547 10:11:58 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:33:49.547 10:11:58 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:49.547 10:11:58 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:49.547 10:11:58 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:33:49.547 10:11:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:49.547 10:11:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:49.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:49.547 --rc genhtml_branch_coverage=1 00:33:49.547 --rc genhtml_function_coverage=1 00:33:49.547 --rc genhtml_legend=1 00:33:49.547 --rc geninfo_all_blocks=1 00:33:49.547 --rc geninfo_unexecuted_blocks=1 00:33:49.547 00:33:49.547 ' 00:33:49.547 10:11:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:49.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:49.547 --rc genhtml_branch_coverage=1 00:33:49.547 --rc genhtml_function_coverage=1 00:33:49.547 --rc genhtml_legend=1 00:33:49.547 --rc geninfo_all_blocks=1 00:33:49.547 --rc geninfo_unexecuted_blocks=1 00:33:49.547 00:33:49.547 ' 00:33:49.547 10:11:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:49.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:49.547 --rc genhtml_branch_coverage=1 00:33:49.547 --rc genhtml_function_coverage=1 00:33:49.547 --rc genhtml_legend=1 00:33:49.547 --rc geninfo_all_blocks=1 00:33:49.547 --rc geninfo_unexecuted_blocks=1 00:33:49.547 00:33:49.547 ' 00:33:49.547 10:11:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:49.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:49.547 --rc genhtml_branch_coverage=1 00:33:49.547 --rc genhtml_function_coverage=1 00:33:49.547 --rc genhtml_legend=1 00:33:49.547 --rc geninfo_all_blocks=1 00:33:49.547 --rc geninfo_unexecuted_blocks=1 00:33:49.547 00:33:49.547 ' 00:33:49.547 10:11:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:33:49.547 10:11:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:33:49.547 10:11:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:33:49.547 10:11:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:49.548 10:11:58 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:33:49.548 
10:11:58 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:49.548 10:11:58 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:49.548 10:11:58 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:49.548 10:11:58 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:49.548 10:11:58 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:49.548 10:11:58 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:49.548 10:11:58 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:49.548 10:11:58 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:49.548 10:11:58 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:49.548 10:11:58 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:49.548 10:11:58 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:33:49.548 10:11:58 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:33:49.548 10:11:58 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:49.548 10:11:58 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:49.548 10:11:58 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:49.548 10:11:58 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:49.548 10:11:58 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:49.548 10:11:58 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:33:49.548 10:11:58 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:49.548 10:11:58 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:49.548 10:11:58 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:49.548 10:11:58 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:49.548 10:11:58 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:49.548 10:11:58 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:49.548 10:11:58 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:33:49.548 10:11:58 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:49.548 10:11:58 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:33:49.548 10:11:58 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:49.548 10:11:58 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:49.548 10:11:58 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:49.548 10:11:58 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:49.548 10:11:58 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:49.548 10:11:58 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:49.548 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:49.548 10:11:58 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:49.548 10:11:58 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:49.548 10:11:58 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:49.548 10:11:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:33:49.548 10:11:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:33:49.548 10:11:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:33:49.548 10:11:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:33:49.548 10:11:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:49.548 10:11:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:49.548 10:11:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:33:49.548 10:11:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=337711 00:33:49.548 10:11:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 337711 00:33:49.548 10:11:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 337711 ']' 00:33:49.548 10:11:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:49.548 10:11:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:33:49.548 10:11:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:49.548 10:11:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:49.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:49.548 10:11:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:49.548 10:11:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:49.548 [2024-12-11 10:11:58.913718] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
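The target for the spdkcli test has just been launched above as nvmf_tgt -m 0x3 -p 0 (two reactors, mask 0x3, main core 0); its EAL parameter dump and reactor start messages follow. waitforlisten then blocks until the app's JSON-RPC socket answers. A rough sketch of that startup handshake, assuming the default /var/tmp/spdk.sock socket (the real helper in autotest_common.sh adds timeouts and error handling around this):

    build/bin/nvmf_tgt -m 0x3 -p 0 &
    nvmf_tgt_pid=$!
    # waitforlisten: poll the JSON-RPC socket until the app responds
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done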
00:33:49.548 [2024-12-11 10:11:58.913769] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid337711 ] 00:33:49.548 [2024-12-11 10:11:58.995227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:49.548 [2024-12-11 10:11:59.036716] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:33:49.548 [2024-12-11 10:11:59.036719] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:33:49.807 10:11:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:49.807 10:11:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:33:49.807 10:11:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:33:49.807 10:11:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:49.807 10:11:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:49.807 10:11:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:33:49.807 10:11:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:33:49.807 10:11:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:33:49.807 10:11:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:49.807 10:11:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:49.807 10:11:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:33:49.807 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:33:49.807 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:33:49.807 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:33:49.807 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:33:49.807 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:33:49.807 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:33:49.807 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:49.807 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:33:49.807 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:33:49.807 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:49.807 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:49.807 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:33:49.807 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:49.807 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:49.807 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:33:49.807 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:33:49.807 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:49.807 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:49.807 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:49.807 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:33:49.807 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:33:49.807 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:49.807 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:33:49.807 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:49.807 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:33:49.807 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:33:49.807 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:33:49.807 ' 00:33:52.337 [2024-12-11 10:12:01.868122] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:53.713 [2024-12-11 10:12:03.216635] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:33:56.250 [2024-12-11 10:12:05.708294] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:33:58.781 [2024-12-11 10:12:07.867054] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:34:00.157 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:34:00.157 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:34:00.157 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:34:00.157 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:34:00.157 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:34:00.157 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:34:00.157 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:34:00.157 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:00.157 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:34:00.157 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:34:00.157 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:00.157 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:00.157 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:34:00.157 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:00.157 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:00.157 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:34:00.157 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:00.157 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:00.157 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:00.157 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:00.157 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:34:00.157 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:34:00.157 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:00.157 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:34:00.157 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:00.157 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:34:00.157 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:34:00.157 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:34:00.157 10:12:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:34:00.157 10:12:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:00.157 10:12:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:00.157 10:12:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:34:00.157 10:12:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:00.157 10:12:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:00.157 10:12:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:34:00.157 10:12:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:34:00.725 10:12:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:34:00.725 10:12:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:34:00.725 10:12:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:34:00.725 10:12:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:00.725 10:12:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:00.725 
10:12:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:34:00.725 10:12:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:00.725 10:12:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:00.725 10:12:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:34:00.725 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:34:00.725 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:00.725 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:34:00.725 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:34:00.725 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:34:00.725 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:34:00.725 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:00.725 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:34:00.725 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:34:00.725 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:34:00.725 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:34:00.725 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:34:00.725 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:34:00.725 ' 00:34:07.287 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:34:07.287 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:34:07.287 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:07.287 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:34:07.287 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:34:07.287 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:34:07.287 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:34:07.287 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:07.287 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:34:07.287 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:34:07.287 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:34:07.287 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:34:07.287 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:34:07.287 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:34:07.287 10:12:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:34:07.287 10:12:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:07.287 10:12:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:07.287 
10:12:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 337711 00:34:07.287 10:12:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 337711 ']' 00:34:07.287 10:12:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 337711 00:34:07.287 10:12:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:34:07.287 10:12:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:07.287 10:12:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 337711 00:34:07.287 10:12:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:07.287 10:12:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:07.287 10:12:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 337711' 00:34:07.287 killing process with pid 337711 00:34:07.287 10:12:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 337711 00:34:07.287 10:12:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 337711 00:34:07.287 10:12:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:34:07.287 10:12:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:34:07.287 10:12:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 337711 ']' 00:34:07.287 10:12:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 337711 00:34:07.287 10:12:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 337711 ']' 00:34:07.287 10:12:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 337711 00:34:07.287 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (337711) - No such process 00:34:07.287 10:12:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 337711 is not found' 00:34:07.287 Process with pid 337711 is not found 00:34:07.287 10:12:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:34:07.287 10:12:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:34:07.287 10:12:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:34:07.287 00:34:07.287 real 0m17.360s 00:34:07.287 user 0m38.289s 00:34:07.287 sys 0m0.768s 00:34:07.287 10:12:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:07.287 10:12:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:07.287 ************************************ 00:34:07.287 END TEST spdkcli_nvmf_tcp 00:34:07.287 ************************************ 00:34:07.287 10:12:16 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:07.287 10:12:16 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:07.287 10:12:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:07.288 10:12:16 -- common/autotest_common.sh@10 -- # set +x 00:34:07.288 ************************************ 00:34:07.288 START TEST nvmf_identify_passthru 00:34:07.288 ************************************ 00:34:07.288 10:12:16 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:07.288 * Looking for test storage... 
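Stepping back to the spdkcli run that just ended above: its pass/fail verdict came from a match-file comparison. check_match dumps the live spdkcli configuration tree and diffs it against a stored template; a minimal sketch of the three steps traced earlier, where the redirection of the ll output into the .test file is inferred from the file the match tool consumes:

    scripts/spdkcli.py ll /nvmf > test/spdkcli/match_files/spdkcli_nvmf.test
    test/app/match/match test/spdkcli/match_files/spdkcli_nvmf.test.match
    rm -f test/spdkcli/match_files/spdkcli_nvmf.test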
00:34:07.288 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:07.288 10:12:16 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:07.288 10:12:16 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:34:07.288 10:12:16 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:07.288 10:12:16 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:07.288 10:12:16 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:07.288 10:12:16 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:07.288 10:12:16 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:07.288 10:12:16 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:34:07.288 10:12:16 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:34:07.288 10:12:16 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:34:07.288 10:12:16 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:34:07.288 10:12:16 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:34:07.288 10:12:16 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:34:07.288 10:12:16 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:34:07.288 10:12:16 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:07.288 10:12:16 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:34:07.288 10:12:16 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:34:07.288 10:12:16 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:07.288 10:12:16 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:07.288 10:12:16 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:34:07.288 10:12:16 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:34:07.288 10:12:16 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:07.288 10:12:16 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:34:07.288 10:12:16 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:34:07.288 10:12:16 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:34:07.288 10:12:16 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:34:07.288 10:12:16 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:07.288 10:12:16 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:34:07.288 10:12:16 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:34:07.288 10:12:16 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:07.288 10:12:16 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:07.288 10:12:16 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:34:07.288 10:12:16 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:07.288 10:12:16 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:07.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:07.288 --rc genhtml_branch_coverage=1 00:34:07.288 --rc genhtml_function_coverage=1 00:34:07.288 --rc genhtml_legend=1 00:34:07.288 --rc geninfo_all_blocks=1 00:34:07.288 --rc geninfo_unexecuted_blocks=1 00:34:07.288 00:34:07.288 ' 00:34:07.288 10:12:16 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:07.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:07.288 --rc genhtml_branch_coverage=1 00:34:07.288 --rc genhtml_function_coverage=1 00:34:07.288 --rc genhtml_legend=1 00:34:07.288 --rc geninfo_all_blocks=1 00:34:07.288 --rc geninfo_unexecuted_blocks=1 00:34:07.288 00:34:07.288 ' 00:34:07.288 10:12:16 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:07.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:07.288 --rc genhtml_branch_coverage=1 00:34:07.288 --rc genhtml_function_coverage=1 00:34:07.288 --rc genhtml_legend=1 00:34:07.288 --rc geninfo_all_blocks=1 00:34:07.288 --rc geninfo_unexecuted_blocks=1 00:34:07.288 00:34:07.288 ' 00:34:07.288 10:12:16 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:07.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:07.288 --rc genhtml_branch_coverage=1 00:34:07.288 --rc genhtml_function_coverage=1 00:34:07.288 --rc genhtml_legend=1 00:34:07.288 --rc geninfo_all_blocks=1 00:34:07.288 --rc geninfo_unexecuted_blocks=1 00:34:07.288 00:34:07.288 ' 00:34:07.288 10:12:16 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:07.288 10:12:16 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:34:07.288 10:12:16 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:07.288 10:12:16 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:07.288 10:12:16 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:07.288 10:12:16 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:34:07.288 10:12:16 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:07.288 10:12:16 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:07.288 10:12:16 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:07.288 10:12:16 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:07.288 10:12:16 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:07.288 10:12:16 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:07.288 10:12:16 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:34:07.288 10:12:16 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:34:07.288 10:12:16 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:07.288 10:12:16 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:07.288 10:12:16 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:07.288 10:12:16 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:07.288 10:12:16 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:07.288 10:12:16 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:34:07.288 10:12:16 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:07.288 10:12:16 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:07.288 10:12:16 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:07.288 10:12:16 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.288 10:12:16 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.288 10:12:16 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.288 10:12:16 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:07.288 10:12:16 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.288 10:12:16 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:34:07.288 10:12:16 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:07.288 10:12:16 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:07.288 10:12:16 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:07.288 10:12:16 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:07.288 10:12:16 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:07.288 10:12:16 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:07.288 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:07.288 10:12:16 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:07.288 10:12:16 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:07.288 10:12:16 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:07.288 10:12:16 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:07.288 10:12:16 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:34:07.288 10:12:16 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:07.288 10:12:16 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:07.288 10:12:16 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:07.288 10:12:16 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.288 10:12:16 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.289 10:12:16 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.289 10:12:16 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:07.289 10:12:16 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.289 10:12:16 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:34:07.289 10:12:16 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:07.289 10:12:16 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:07.289 10:12:16 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:07.289 10:12:16 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:07.289 10:12:16 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:07.289 10:12:16 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:07.289 10:12:16 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:07.289 10:12:16 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:07.289 10:12:16 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:07.289 10:12:16 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:07.289 10:12:16 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:34:07.289 10:12:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:13.860 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:13.860 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:34:13.860 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:13.860 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:13.860 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:13.860 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:13.860 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:13.860 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:34:13.860 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:13.860 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:34:13.860 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:34:13.860 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:34:13.860 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:34:13.860 10:12:22 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:34:13.860 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:34:13.860 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:13.860 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:13.860 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:13.860 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:13.860 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:13.860 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:13.860 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:13.860 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:13.860 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:13.860 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:13.860 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:13.860 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:13.860 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:13.860 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:13.860 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:13.860 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:13.860 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:13.860 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:13.860 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:13.860 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:13.860 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:13.860 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:13.860 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:13.860 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:13.860 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:13.860 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:13.860 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:13.860 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:13.860 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:13.860 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:13.860 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:13.860 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:13.860 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:13.860 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:34:13.860 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:13.860 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:13.860 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:13.860 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:13.860 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:13.860 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:13.860 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:13.860 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:13.860 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:13.860 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:13.861 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:13.861 Found net devices under 0000:af:00.0: cvl_0_0 00:34:13.861 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:13.861 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:13.861 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:13.861 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:13.861 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:13.861 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:13.861 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:13.861 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:13.861 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:13.861 Found net devices under 0000:af:00.1: cvl_0_1 00:34:13.861 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:13.861 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:13.861 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:34:13.861 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:13.861 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:13.861 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:13.861 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:13.861 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:13.861 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:13.861 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:13.861 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:13.861 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:13.861 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:13.861 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:13.861 10:12:22 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:13.861 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:13.861 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:13.861 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:13.861 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:13.861 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:13.861 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:13.861 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:13.861 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:13.861 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:13.861 10:12:22 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:13.861 10:12:23 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:13.861 10:12:23 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:13.861 10:12:23 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:13.861 10:12:23 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:13.861 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:13.861 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms 00:34:13.861 00:34:13.861 --- 10.0.0.2 ping statistics --- 00:34:13.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:13.861 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:34:13.861 10:12:23 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:13.861 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:13.861 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:34:13.861 00:34:13.861 --- 10.0.0.1 ping statistics --- 00:34:13.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:13.861 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:34:13.861 10:12:23 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:13.861 10:12:23 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:34:13.861 10:12:23 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:13.861 10:12:23 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:13.861 10:12:23 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:13.861 10:12:23 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:13.861 10:12:23 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:13.861 10:12:23 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:13.861 10:12:23 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp
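The nvmftestinit/nvmf_tcp_init trace above reduces to a small amount of iproute2 plumbing: one E810 port (cvl_0_0) is moved into a private network namespace and addressed as the target (10.0.0.2), its peer port (cvl_0_1) stays in the root namespace as the initiator (10.0.0.1), TCP/4420 is opened, reachability is proven with a ping in each direction, and nvme-tcp is loaded. A minimal standalone sketch of the same steps, using the interface names discovered in this run (they differ per rig):

    # Sketch of nvmf_tcp_init's namespace plumbing, as traced above.
    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"              # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator port stays in the root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # open the NVMe/TCP port; the comment tag lets teardown strip only this rule
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2                           # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1       # target -> initiator
    modprobe nvme-tcp

Splitting the two physical ports across a namespace is what makes the loopback-free "phy" topology work: packets must traverse the real cable between the two E810 functions rather than the kernel's loopback path.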
00:34:13.861 10:12:23 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:34:13.861 10:12:23 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:13.861 10:12:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:13.861 10:12:23 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:34:13.861 10:12:23 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:34:13.861 10:12:23 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:34:13.861 10:12:23 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:34:13.861 10:12:23 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:34:13.861 10:12:23 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:34:13.861 10:12:23 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:34:13.861 10:12:23 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:13.861 10:12:23 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:13.861 10:12:23 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:34:13.861 10:12:23 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:34:13.861 10:12:23 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:34:13.861 10:12:23 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:34:13.861 10:12:23 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:34:13.861 10:12:23 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:34:13.861 10:12:23 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:34:13.861 10:12:23 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:34:13.861 10:12:23 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:34:18.051 10:12:27 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ807001JM1P0FGN 00:34:18.051 10:12:27 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:34:18.051 10:12:27 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:34:18.051 10:12:27 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:34:22.238 10:12:31 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:34:22.238 10:12:31 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:34:22.238 10:12:31 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:22.238 10:12:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:22.238 10:12:31 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:34:22.238 10:12:31 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:22.238 10:12:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:22.238 10:12:31 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=345787 00:34:22.238 10:12:31 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:34:22.238 10:12:31 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:22.238 10:12:31 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 345787 00:34:22.238 10:12:31 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 345787 ']' 00:34:22.238 10:12:31 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:22.238 10:12:31 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:22.238 10:12:31 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:22.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:22.238 10:12:31 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:22.238 10:12:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:22.238 [2024-12-11 10:12:31.653640] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:34:22.238 [2024-12-11 10:12:31.653690] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:22.238 [2024-12-11 10:12:31.739595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:22.238 [2024-12-11 10:12:31.780366] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:22.238 [2024-12-11 10:12:31.780404] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
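nvmf_tgt is launched inside the namespace with --wait-for-rpc, so it brings up DPDK but holds subsystem initialization until told to proceed over JSON-RPC, and waitforlisten polls /var/tmp/spdk.sock (up to max_retries=100, per the trace) before any rpc_cmd is issued. A rough hand-rolled equivalent of that launch-and-poll, a sketch only: it uses spdk_get_version as a cheap liveness probe, whereas the harness' waitforlisten helper is more elaborate ($rootdir is the SPDK checkout, as in the traces above):

    # Start the paused target in the test namespace, then wait for its RPC socket.
    ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    nvmfpid=$!
    until "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        # bail out if the target died during startup instead of spinning forever
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
        sleep 0.5
    done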
00:34:22.238 [2024-12-11 10:12:31.780411] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:22.238 [2024-12-11 10:12:31.780417] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:22.238 [2024-12-11 10:12:31.780422] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:22.238 [2024-12-11 10:12:31.781932] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:34:22.238 [2024-12-11 10:12:31.782038] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:34:22.238 [2024-12-11 10:12:31.782145] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:34:22.238 [2024-12-11 10:12:31.782146] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:34:23.172 10:12:32 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:23.172 10:12:32 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:34:23.172 10:12:32 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:34:23.172 10:12:32 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.172 10:12:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:23.172 INFO: Log level set to 20 00:34:23.172 INFO: Requests: 00:34:23.172 { 00:34:23.172 "jsonrpc": "2.0", 00:34:23.172 "method": "nvmf_set_config", 00:34:23.172 "id": 1, 00:34:23.172 "params": { 00:34:23.172 "admin_cmd_passthru": { 00:34:23.172 "identify_ctrlr": true 00:34:23.172 } 00:34:23.172 } 00:34:23.172 } 00:34:23.172 00:34:23.172 INFO: response: 00:34:23.172 { 00:34:23.172 "jsonrpc": "2.0", 00:34:23.172 "id": 1, 00:34:23.172 "result": true 00:34:23.172 } 00:34:23.172 00:34:23.172 10:12:32 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.172 10:12:32 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:34:23.172 10:12:32 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.172 10:12:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:23.172 INFO: Setting log level to 20 00:34:23.172 INFO: Setting log level to 20 00:34:23.172 INFO: Log level set to 20 00:34:23.172 INFO: Log level set to 20 00:34:23.172 INFO: Requests: 00:34:23.172 { 00:34:23.172 "jsonrpc": "2.0", 00:34:23.172 "method": "framework_start_init", 00:34:23.172 "id": 1 00:34:23.172 } 00:34:23.172 00:34:23.172 INFO: Requests: 00:34:23.172 { 00:34:23.172 "jsonrpc": "2.0", 00:34:23.172 "method": "framework_start_init", 00:34:23.172 "id": 1 00:34:23.172 } 00:34:23.172 00:34:23.172 [2024-12-11 10:12:32.568396] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:34:23.172 INFO: response: 00:34:23.172 { 00:34:23.172 "jsonrpc": "2.0", 00:34:23.172 "id": 1, 00:34:23.172 "result": true 00:34:23.172 } 00:34:23.172 00:34:23.172 INFO: response: 00:34:23.172 { 00:34:23.172 "jsonrpc": "2.0", 00:34:23.172 "id": 1, 00:34:23.172 "result": true 00:34:23.172 } 00:34:23.172 00:34:23.172 10:12:32 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.172 10:12:32 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:23.172 10:12:32 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.172 10:12:32 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:34:23.172 INFO: Setting log level to 40 00:34:23.172 INFO: Setting log level to 40 00:34:23.172 INFO: Setting log level to 40 00:34:23.172 [2024-12-11 10:12:32.581669] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:23.172 10:12:32 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.172 10:12:32 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:34:23.172 10:12:32 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:23.172 10:12:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:23.172 10:12:32 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:34:23.172 10:12:32 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.172 10:12:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:26.455 Nvme0n1 00:34:26.455 10:12:35 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.455 10:12:35 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:34:26.455 10:12:35 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.455 10:12:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:26.455 10:12:35 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.455 10:12:35 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:34:26.455 10:12:35 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.455 10:12:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:26.455 10:12:35 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.455 10:12:35 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:26.455 10:12:35 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.455 10:12:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:26.455 [2024-12-11 10:12:35.500716] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:26.455 10:12:35 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.455 10:12:35 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:34:26.455 10:12:35 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.455 10:12:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:26.455 [ 00:34:26.455 { 00:34:26.455 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:26.455 "subtype": "Discovery", 00:34:26.455 "listen_addresses": [], 00:34:26.455 "allow_any_host": true, 00:34:26.455 "hosts": [] 00:34:26.455 }, 00:34:26.455 { 00:34:26.455 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:26.455 "subtype": "NVMe", 00:34:26.455 "listen_addresses": [ 00:34:26.455 { 00:34:26.455 "trtype": "TCP", 00:34:26.455 "adrfam": "IPv4", 00:34:26.455 "traddr": "10.0.0.2", 00:34:26.455 "trsvcid": "4420" 00:34:26.455 } 00:34:26.455 ], 00:34:26.455 "allow_any_host": true, 00:34:26.455 "hosts": [], 00:34:26.455 "serial_number": 
"SPDK00000000000001", 00:34:26.455 "model_number": "SPDK bdev Controller", 00:34:26.455 "max_namespaces": 1, 00:34:26.455 "min_cntlid": 1, 00:34:26.455 "max_cntlid": 65519, 00:34:26.455 "namespaces": [ 00:34:26.455 { 00:34:26.455 "nsid": 1, 00:34:26.455 "bdev_name": "Nvme0n1", 00:34:26.455 "name": "Nvme0n1", 00:34:26.455 "nguid": "C493F9D1F8594566A330DCFAE5AF828D", 00:34:26.455 "uuid": "c493f9d1-f859-4566-a330-dcfae5af828d" 00:34:26.455 } 00:34:26.455 ] 00:34:26.455 } 00:34:26.455 ] 00:34:26.455 10:12:35 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.455 10:12:35 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:26.455 10:12:35 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:34:26.455 10:12:35 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:34:26.455 10:12:35 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ807001JM1P0FGN 00:34:26.455 10:12:35 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:26.455 10:12:35 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:34:26.455 10:12:35 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:34:26.455 10:12:35 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:34:26.455 10:12:35 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ807001JM1P0FGN '!=' BTLJ807001JM1P0FGN ']' 00:34:26.455 10:12:35 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:34:26.455 10:12:35 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:26.455 10:12:35 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.455 10:12:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:26.455 10:12:35 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.455 10:12:35 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:34:26.455 10:12:35 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:34:26.455 10:12:35 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:26.455 10:12:35 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:34:26.455 10:12:35 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:26.455 10:12:35 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:34:26.455 10:12:35 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:26.455 10:12:35 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:26.455 rmmod nvme_tcp 00:34:26.455 rmmod nvme_fabrics 00:34:26.713 rmmod nvme_keyring 00:34:26.713 10:12:36 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:26.713 10:12:36 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:34:26.713 10:12:36 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:34:26.713 10:12:36 nvmf_identify_passthru -- nvmf/common.sh@517 -- # 
'[' -n 345787 ']' 00:34:26.713 10:12:36 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 345787 00:34:26.713 10:12:36 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 345787 ']' 00:34:26.713 10:12:36 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 345787 00:34:26.713 10:12:36 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:34:26.713 10:12:36 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:26.713 10:12:36 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 345787 00:34:26.713 10:12:36 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:26.713 10:12:36 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:26.713 10:12:36 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 345787' 00:34:26.713 killing process with pid 345787 00:34:26.713 10:12:36 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 345787 00:34:26.890 10:12:36 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 345787 00:34:28.087 10:12:37 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:28.087 10:12:37 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:28.087 10:12:37 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:28.087 10:12:37 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:34:28.087 10:12:37 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:34:28.087 10:12:37 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:28.087 10:12:37 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:34:28.087 10:12:37 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:28.087 10:12:37 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:28.087 10:12:37 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:28.087 10:12:37 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:28.087 10:12:37 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:30.624 10:12:39 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:30.624 00:34:30.624 real 0m23.586s 00:34:30.624 user 0m30.241s 00:34:30.624 sys 0m6.925s 00:34:30.624 10:12:39 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:30.624 10:12:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:30.624 ************************************ 00:34:30.624 END TEST nvmf_identify_passthru 00:34:30.624 ************************************
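That closes nvmf_identify_passthru. The target it exercised was assembled entirely over JSON-RPC, as the rpc_cmd traces above show: identify passthrough is enabled while the app is still paused, framework init completes, the TCP transport is created, the local PCIe controller is attached as bdev Nvme0, and it is exposed as cnode1 on 10.0.0.2:4420. Replayed by hand with SPDK's rpc.py client (a sketch; the harness' rpc_cmd helper issues the same methods with the socket preconfigured):

    rpc="$rootdir/scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc nvmf_set_config --passthru-identify-ctrlr      # only valid before framework init
    $rpc framework_start_init
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_get_subsystems                            # the JSON dump seen above

The pass/fail criterion is then just the two string comparisons in the trace: the serial (BTLJ807001JM1P0FGN) and model (INTEL) read back over the fabric must match what spdk_nvme_identify reported from the PCIe device directly. Teardown kills the target, restores iptables minus the SPDK_NVMF-tagged rule, and removes the namespace.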
00:34:30.624 10:12:39 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:30.624 10:12:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:30.624 10:12:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:30.624 10:12:39 -- common/autotest_common.sh@10 -- # set +x 00:34:30.624 ************************************ 00:34:30.624 START TEST nvmf_dif 00:34:30.624 ************************************ 00:34:30.624 10:12:39 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:30.624 * Looking for test storage... 00:34:30.624 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:30.624 10:12:39 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:30.624 10:12:39 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:34:30.624 10:12:39 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:30.624 10:12:39 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:30.624 10:12:39 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:30.624 10:12:39 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:30.624 10:12:39 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:30.624 10:12:39 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:34:30.624 10:12:39 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:34:30.624 10:12:39 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:34:30.624 10:12:39 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:34:30.624 10:12:39 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:34:30.624 10:12:39 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:34:30.624 10:12:39 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:34:30.624 10:12:39 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:30.624 10:12:39 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:34:30.624 10:12:39 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:34:30.624 10:12:39 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:30.624 10:12:39 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:30.624 10:12:39 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:34:30.624 10:12:39 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:34:30.624 10:12:39 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:30.624 10:12:39 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:34:30.624 10:12:39 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:34:30.624 10:12:39 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:34:30.624 10:12:39 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:34:30.624 10:12:39 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:30.624 10:12:39 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:34:30.624 10:12:39 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:34:30.624 10:12:39 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:30.624 10:12:39 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:30.624 10:12:39 nvmf_dif -- scripts/common.sh@368 -- # return 0
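This is the same lcov version probe that ran at the top of identify_passthru: lt 1.15 2 hands off to cmp_versions, which splits both strings on '.', '-' and ':' and walks the components numerically; 1 < 2 decides it on the first element, so the return 0 reports the installed lcov as pre-2.x and the coverage flags below get enabled. Condensed into a self-contained sketch (components are assumed purely numeric here; the real helper also routes each one through its decimal() guard):

    lt_sketch() {                              # usage: lt_sketch A B  ->  true if A < B
        local -a v1 v2; local i
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        for ((i = 0; i < (${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]}); i++)); do
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        done
        return 1                               # equal versions are not "less than"
    }
    lt_sketch 1.15 2 && echo "pre-2.x lcov"    # matches the return 0 traced above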
00:34:30.624 10:12:39 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:30.624 10:12:39 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:30.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:30.624 --rc genhtml_branch_coverage=1 00:34:30.624 --rc genhtml_function_coverage=1 00:34:30.624 --rc genhtml_legend=1 00:34:30.624 --rc geninfo_all_blocks=1 00:34:30.624 --rc geninfo_unexecuted_blocks=1 00:34:30.624 00:34:30.624 ' 00:34:30.624 10:12:39 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:30.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:30.624 --rc genhtml_branch_coverage=1 00:34:30.624 --rc genhtml_function_coverage=1 00:34:30.624 --rc genhtml_legend=1 00:34:30.624 --rc geninfo_all_blocks=1 00:34:30.624 --rc geninfo_unexecuted_blocks=1 00:34:30.624 00:34:30.624 ' 00:34:30.624 10:12:39 nvmf_dif -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:30.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:30.624 --rc genhtml_branch_coverage=1 00:34:30.624 --rc genhtml_function_coverage=1 00:34:30.624 --rc genhtml_legend=1 00:34:30.624 --rc geninfo_all_blocks=1 00:34:30.624 --rc geninfo_unexecuted_blocks=1 00:34:30.624 00:34:30.624 ' 00:34:30.624 10:12:39 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:30.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:30.624 --rc genhtml_branch_coverage=1 00:34:30.624 --rc genhtml_function_coverage=1 00:34:30.624 --rc genhtml_legend=1 00:34:30.624 --rc geninfo_all_blocks=1 00:34:30.624 --rc geninfo_unexecuted_blocks=1 00:34:30.624 00:34:30.624 ' 00:34:30.624 10:12:39 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:30.624 10:12:39 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:34:30.624 10:12:39 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:30.624 10:12:39 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:30.624 10:12:39 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:30.624 10:12:39 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:30.624 10:12:39 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:30.624 10:12:39 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:30.624 10:12:39 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:30.624 10:12:39 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:30.624 10:12:39 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:30.624 10:12:39 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:30.624 10:12:39 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:34:30.624 10:12:39 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:34:30.624 10:12:39 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:30.624 10:12:39 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:30.624 10:12:39 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:30.624 10:12:39 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:30.624 10:12:39 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:30.624 10:12:39 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:34:30.624 10:12:39 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:30.624 10:12:39 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:30.624 10:12:39 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:30.624 10:12:39 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:30.624 10:12:39 nvmf_dif -- paths/export.sh@3 -- #
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:30.624 10:12:39 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:30.624 10:12:39 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:34:30.624 10:12:39 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:30.624 10:12:39 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:34:30.624 10:12:39 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:30.624 10:12:39 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:30.624 10:12:39 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:30.624 10:12:39 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:30.624 10:12:39 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:30.624 10:12:39 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:30.624 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:30.624 10:12:39 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:30.624 10:12:39 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:30.624 10:12:39 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:30.624 10:12:39 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:34:30.624 10:12:39 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:34:30.624 10:12:39 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:34:30.624 10:12:39 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:34:30.624 10:12:39 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:34:30.624 10:12:39 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:30.624 10:12:39 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:30.624 10:12:39 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:30.624 10:12:39 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:30.624 10:12:39 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:30.625 10:12:39 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:30.625 10:12:39 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:30.625 10:12:39 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:30.625 10:12:39 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:30.625 10:12:39 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:30.625 10:12:39 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:34:30.625 10:12:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:37.195 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:37.195 
10:12:46 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:37.195 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:37.195 Found net devices under 0000:af:00.0: cvl_0_0 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:37.195 Found net devices under 0000:af:00.1: cvl_0_1 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:37.195 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:37.195 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:34:37.195 00:34:37.195 --- 10.0.0.2 ping statistics --- 00:34:37.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:37.195 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:37.195 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:37.195 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:34:37.195 00:34:37.195 --- 10.0.0.1 ping statistics --- 00:34:37.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:37.195 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:34:37.195 10:12:46 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:40.484 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:34:40.484 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:34:40.484 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:40.484 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:34:40.484 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:34:40.484 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:34:40.484 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:34:40.484 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:34:40.484 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:34:40.484 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:34:40.484 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:34:40.484 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:34:40.484 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:34:40.484 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:34:40.484 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:34:40.484 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:34:40.484 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:34:40.484 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:34:40.484 10:12:49 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:40.484 10:12:49 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:40.484 10:12:49 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:40.484 10:12:49 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:40.484 10:12:49 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:40.484 10:12:49 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:40.484 10:12:50 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:34:40.484 10:12:50 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:34:40.484 10:12:50 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:40.484 10:12:50 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:40.485 10:12:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:40.485 10:12:50 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=352058 00:34:40.485 10:12:50 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:34:40.485 10:12:50 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 352058 00:34:40.485 10:12:50 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 352058 ']' 00:34:40.485 10:12:50 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:40.485 10:12:50 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:40.485 10:12:50 nvmf_dif -- common/autotest_common.sh@842 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:40.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:40.485 10:12:50 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:40.485 10:12:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:40.744 [2024-12-11 10:12:50.068961] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:34:40.744 [2024-12-11 10:12:50.069010] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:40.744 [2024-12-11 10:12:50.140143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:40.744 [2024-12-11 10:12:50.180761] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:40.744 [2024-12-11 10:12:50.180797] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:40.744 [2024-12-11 10:12:50.180804] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:40.744 [2024-12-11 10:12:50.180811] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:40.744 [2024-12-11 10:12:50.180816] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:40.744 [2024-12-11 10:12:50.181357] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:34:40.744 10:12:50 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:40.744 10:12:50 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:34:40.744 10:12:50 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:40.744 10:12:50 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:40.744 10:12:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:41.003 10:12:50 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:41.003 10:12:50 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:34:41.003 10:12:50 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:34:41.003 10:12:50 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.003 10:12:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:41.003 [2024-12-11 10:12:50.325389] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:41.003 10:12:50 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.003 10:12:50 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:34:41.003 10:12:50 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:41.003 10:12:50 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:41.003 10:12:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:41.003 ************************************ 00:34:41.003 START TEST fio_dif_1_default 00:34:41.003 ************************************ 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in 
"$@" 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:41.003 bdev_null0 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:41.003 [2024-12-11 10:12:50.401731] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:41.003 { 00:34:41.003 "params": { 00:34:41.003 "name": "Nvme$subsystem", 00:34:41.003 "trtype": "$TEST_TRANSPORT", 00:34:41.003 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:41.003 "adrfam": "ipv4", 00:34:41.003 "trsvcid": "$NVMF_PORT", 00:34:41.003 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:34:41.003 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:41.003 "hdgst": ${hdgst:-false}, 00:34:41.003 "ddgst": ${ddgst:-false} 00:34:41.003 }, 00:34:41.003 "method": "bdev_nvme_attach_controller" 00:34:41.003 } 00:34:41.003 EOF 00:34:41.003 )") 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:41.003 "params": { 00:34:41.003 "name": "Nvme0", 00:34:41.003 "trtype": "tcp", 00:34:41.003 "traddr": "10.0.0.2", 00:34:41.003 "adrfam": "ipv4", 00:34:41.003 "trsvcid": "4420", 00:34:41.003 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:41.003 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:41.003 "hdgst": false, 00:34:41.003 "ddgst": false 00:34:41.003 }, 00:34:41.003 "method": "bdev_nvme_attach_controller" 00:34:41.003 }' 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:41.003 10:12:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:41.260 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:41.260 fio-3.35 00:34:41.260 Starting 1 thread 00:34:53.464 00:34:53.464 filename0: (groupid=0, jobs=1): err= 0: pid=352426: Wed Dec 11 10:13:01 2024 00:34:53.464 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10008msec) 00:34:53.464 slat (nsec): min=5881, max=30945, avg=6222.53, stdev=1277.30 00:34:53.464 clat (usec): min=40802, max=42352, avg=40996.79, stdev=135.12 00:34:53.464 lat (usec): min=40808, max=42383, avg=41003.01, stdev=135.49 00:34:53.464 clat percentiles (usec): 00:34:53.464 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:34:53.464 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:53.464 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:53.464 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:53.464 | 99.99th=[42206] 00:34:53.464 bw ( KiB/s): min= 384, max= 416, per=99.46%, avg=388.80, stdev=11.72, samples=20 00:34:53.464 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:34:53.464 lat (msec) : 50=100.00% 00:34:53.464 cpu : usr=92.03%, sys=7.73%, ctx=14, majf=0, minf=0 00:34:53.464 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:53.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.464 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.464 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:53.464 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:53.464 00:34:53.464 Run status group 0 (all jobs): 
00:34:53.464 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10008-10008msec 00:34:53.464 10:13:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:34:53.464 10:13:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:34:53.464 10:13:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:34:53.464 10:13:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:53.464 10:13:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:34:53.464 10:13:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:53.464 10:13:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.464 10:13:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:53.464 10:13:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.464 10:13:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:53.464 10:13:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.464 10:13:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:53.464 10:13:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.464 00:34:53.464 real 0m11.358s 00:34:53.464 user 0m15.679s 00:34:53.464 sys 0m1.149s 00:34:53.464 10:13:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:53.464 10:13:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:53.464 ************************************ 00:34:53.464 END TEST fio_dif_1_default 00:34:53.464 ************************************ 00:34:53.464 10:13:01 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:34:53.464 10:13:01 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:53.464 10:13:01 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:53.464 10:13:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:53.464 ************************************ 00:34:53.464 START TEST fio_dif_1_multi_subsystems 00:34:53.464 ************************************ 00:34:53.464 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:34:53.464 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:34:53.464 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:34:53.464 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:34:53.464 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:53.464 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:34:53.464 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:34:53.464 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:53.464 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.464 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:53.464 bdev_null0 00:34:53.464 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:34:53.464 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:53.464 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.464 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:53.464 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.464 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:53.464 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.464 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:53.464 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.464 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:53.464 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.464 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:53.464 [2024-12-11 10:13:01.829540] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:53.464 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.464 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:53.464 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:34:53.464 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:34:53.464 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:53.464 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.464 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:53.464 bdev_null1 00:34:53.465 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.465 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:53.465 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.465 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:53.465 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.465 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:53.465 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.465 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:53.465 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.465 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:34:53.465 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.465 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:53.465 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.465 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:34:53.465 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:34:53.465 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:53.465 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:34:53.465 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:53.465 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:34:53.465 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:53.465 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:34:53.465 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:53.465 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:53.465 { 00:34:53.465 "params": { 00:34:53.465 "name": "Nvme$subsystem", 00:34:53.465 "trtype": "$TEST_TRANSPORT", 00:34:53.465 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:53.465 "adrfam": "ipv4", 00:34:53.465 "trsvcid": "$NVMF_PORT", 00:34:53.465 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:53.465 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:53.465 "hdgst": ${hdgst:-false}, 00:34:53.465 "ddgst": ${ddgst:-false} 00:34:53.465 }, 00:34:53.465 "method": "bdev_nvme_attach_controller" 00:34:53.465 } 00:34:53.465 EOF 00:34:53.465 )") 00:34:53.465 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:53.465 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:34:53.465 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:53.465 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:34:53.465 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:53.465 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:53.465 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:34:53.465 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:53.465 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:53.465 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:34:53.465 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:34:53.465 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:53.465 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:53.465 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:34:53.465 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:34:53.465 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:53.465 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:53.465 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:53.465 { 00:34:53.465 "params": { 00:34:53.465 "name": "Nvme$subsystem", 00:34:53.465 "trtype": "$TEST_TRANSPORT", 00:34:53.465 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:53.465 "adrfam": "ipv4", 00:34:53.465 "trsvcid": "$NVMF_PORT", 00:34:53.465 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:53.465 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:53.465 "hdgst": ${hdgst:-false}, 00:34:53.465 "ddgst": ${ddgst:-false} 00:34:53.465 }, 00:34:53.465 "method": "bdev_nvme_attach_controller" 00:34:53.465 } 00:34:53.465 EOF 00:34:53.465 )") 00:34:53.465 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:34:53.465 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:53.465 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:34:53.465 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:34:53.465 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:34:53.465 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:53.465 "params": { 00:34:53.465 "name": "Nvme0", 00:34:53.465 "trtype": "tcp", 00:34:53.465 "traddr": "10.0.0.2", 00:34:53.465 "adrfam": "ipv4", 00:34:53.465 "trsvcid": "4420", 00:34:53.465 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:53.465 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:53.465 "hdgst": false, 00:34:53.465 "ddgst": false 00:34:53.465 }, 00:34:53.465 "method": "bdev_nvme_attach_controller" 00:34:53.465 },{ 00:34:53.465 "params": { 00:34:53.465 "name": "Nvme1", 00:34:53.465 "trtype": "tcp", 00:34:53.465 "traddr": "10.0.0.2", 00:34:53.465 "adrfam": "ipv4", 00:34:53.465 "trsvcid": "4420", 00:34:53.465 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:53.465 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:53.465 "hdgst": false, 00:34:53.465 "ddgst": false 00:34:53.465 }, 00:34:53.465 "method": "bdev_nvme_attach_controller" 00:34:53.465 }' 00:34:53.465 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:53.465 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:53.465 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:53.465 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:53.465 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:53.465 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:53.465 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:53.465 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:53.465 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:53.465 10:13:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:53.465 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:53.465 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:53.465 fio-3.35 00:34:53.465 Starting 2 threads 00:35:03.562 00:35:03.562 filename0: (groupid=0, jobs=1): err= 0: pid=354365: Wed Dec 11 10:13:12 2024 00:35:03.562 read: IOPS=202, BW=810KiB/s (830kB/s)(8128KiB/10029msec) 00:35:03.562 slat (nsec): min=6021, max=51514, avg=8528.99, stdev=4740.65 00:35:03.562 clat (usec): min=378, max=42536, avg=19715.18, stdev=20374.18 00:35:03.562 lat (usec): min=385, max=42545, avg=19723.71, stdev=20372.93 00:35:03.562 clat percentiles (usec): 00:35:03.562 | 1.00th=[ 396], 5.00th=[ 408], 10.00th=[ 412], 20.00th=[ 424], 00:35:03.562 | 30.00th=[ 433], 40.00th=[ 445], 50.00th=[ 537], 60.00th=[40633], 00:35:03.562 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:35:03.562 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:35:03.562 | 99.99th=[42730] 00:35:03.562 bw ( KiB/s): min= 672, max= 1088, per=48.14%, avg=811.20, stdev=93.02, samples=20 00:35:03.562 iops : min= 168, max= 272, avg=202.80, stdev=23.26, samples=20 00:35:03.562 lat (usec) : 500=48.28%, 750=4.48% 00:35:03.562 lat (msec) : 50=47.24% 00:35:03.562 cpu : usr=97.77%, sys=1.92%, ctx=21, majf=0, minf=0 00:35:03.562 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:03.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.562 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.562 issued rwts: total=2032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:03.562 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:03.562 filename1: (groupid=0, jobs=1): err= 0: pid=354366: Wed Dec 11 10:13:12 2024 00:35:03.562 read: IOPS=218, BW=876KiB/s (897kB/s)(8768KiB/10014msec) 00:35:03.562 slat (nsec): min=6081, max=51119, avg=8280.10, stdev=4841.19 00:35:03.562 clat (usec): min=379, max=42588, avg=18247.93, stdev=20380.62 00:35:03.562 lat (usec): min=385, max=42595, avg=18256.21, stdev=20379.42 00:35:03.562 clat percentiles (usec): 00:35:03.562 | 1.00th=[ 396], 5.00th=[ 408], 10.00th=[ 416], 20.00th=[ 433], 00:35:03.562 | 30.00th=[ 453], 40.00th=[ 474], 50.00th=[ 498], 60.00th=[40633], 00:35:03.562 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:35:03.562 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:35:03.562 | 99.99th=[42730] 00:35:03.562 bw ( KiB/s): min= 672, max= 1024, per=51.94%, avg=875.20, stdev=105.00, samples=20 00:35:03.562 iops : min= 168, max= 256, avg=218.80, stdev=26.25, samples=20 00:35:03.562 lat (usec) : 500=50.41%, 750=5.70%, 1000=0.50% 00:35:03.562 lat (msec) : 2=0.14%, 50=43.25% 00:35:03.562 cpu : usr=97.66%, sys=2.06%, ctx=8, majf=0, minf=9 00:35:03.562 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:03.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.562 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.562 issued rwts: total=2192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:03.562 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:03.562 00:35:03.562 Run status group 0 (all jobs): 00:35:03.562 READ: bw=1685KiB/s (1725kB/s), 810KiB/s-876KiB/s (830kB/s-897kB/s), io=16.5MiB (17.3MB), run=10014-10029msec 00:35:03.821 10:13:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:35:03.821 10:13:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:35:03.821 10:13:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:03.821 10:13:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:03.821 10:13:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:35:03.821 10:13:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:03.821 10:13:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.821 10:13:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:03.821 10:13:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.821 10:13:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:03.821 10:13:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.821 10:13:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:03.821 10:13:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.821 10:13:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:03.821 10:13:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:03.821 10:13:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:35:03.821 10:13:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:03.821 10:13:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.821 10:13:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:03.821 10:13:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.822 10:13:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:03.822 10:13:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.822 10:13:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:03.822 10:13:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.822 00:35:03.822 real 0m11.430s 00:35:03.822 user 0m26.252s 00:35:03.822 sys 0m0.803s 00:35:03.822 10:13:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:03.822 10:13:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:03.822 ************************************ 00:35:03.822 END TEST fio_dif_1_multi_subsystems 00:35:03.822 ************************************ 00:35:03.822 10:13:13 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 
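
[Editor's note: the multi-subsystems test that just finished drives the target through the rpc_cmd wrapper over /var/tmp/spdk.sock. The same target-side setup can be replayed with scripts/rpc.py directly; a sketch with arguments copied verbatim from the trace above, assuming rpc.py is run from the SPDK checkout:]

    # one DIF-type-1 null bdev + one subsystem + one TCP listener per leg
    for i in 0 1; do
        ./scripts/rpc.py bdev_null_create bdev_null$i 64 512 --md-size 16 --dif-type 1
        ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
            --serial-number 53313233-$i --allow-any-host
        ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i bdev_null$i
        ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
            -t tcp -a 10.0.0.2 -s 4420
    done

[fio then receives one job section per subsystem, which is why the run above lists filename0 and filename1 and reports "Starting 2 threads".]
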
00:35:03.822 10:13:13 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:03.822 10:13:13 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:03.822 10:13:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:03.822 ************************************ 00:35:03.822 START TEST fio_dif_rand_params 00:35:03.822 ************************************ 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:03.822 bdev_null0 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:03.822 [2024-12-11 10:13:13.334687] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:03.822 { 00:35:03.822 "params": { 00:35:03.822 "name": "Nvme$subsystem", 00:35:03.822 "trtype": "$TEST_TRANSPORT", 00:35:03.822 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:03.822 "adrfam": "ipv4", 00:35:03.822 "trsvcid": "$NVMF_PORT", 00:35:03.822 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:03.822 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:03.822 "hdgst": ${hdgst:-false}, 00:35:03.822 "ddgst": ${ddgst:-false} 00:35:03.822 }, 00:35:03.822 "method": "bdev_nvme_attach_controller" 00:35:03.822 } 00:35:03.822 EOF 00:35:03.822 )") 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
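
[Editor's note: fio_dif_rand_params reruns the same flow with NULL_DIF=3 and heavier job parameters (bs=128k, numjobs=3, iodepth=3, runtime=5, matching the job header below). The DIF handling itself was enabled once at transport creation earlier in this trace, where --dif-insert-or-strip makes the TCP transport insert protection information on writes and strip it on reads, so host-side fio does plain I/O while the backend keeps 16 bytes of PI per 512-byte block. A hedged standalone sketch, with a sanity check that the null bdev really carries per-block metadata; the jq field names are an assumption about current rpc.py output, not something shown in this trace:]

    # transport-level DIF insert/strip, then a DIF-type-3 null bdev
    ./scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
    ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    ./scripts/rpc.py bdev_get_bdevs -b bdev_null0 \
        | jq '.[0] | {block_size, md_size, dif_type}'
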
00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:03.822 "params": { 00:35:03.822 "name": "Nvme0", 00:35:03.822 "trtype": "tcp", 00:35:03.822 "traddr": "10.0.0.2", 00:35:03.822 "adrfam": "ipv4", 00:35:03.822 "trsvcid": "4420", 00:35:03.822 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:03.822 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:03.822 "hdgst": false, 00:35:03.822 "ddgst": false 00:35:03.822 }, 00:35:03.822 "method": "bdev_nvme_attach_controller" 00:35:03.822 }' 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:03.822 10:13:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:04.104 10:13:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:04.104 10:13:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:04.104 10:13:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:04.104 10:13:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:04.368 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:04.368 ... 
00:35:04.368 fio-3.35 00:35:04.368 Starting 3 threads 00:35:10.987 00:35:10.987 filename0: (groupid=0, jobs=1): err= 0: pid=356318: Wed Dec 11 10:13:19 2024 00:35:10.987 read: IOPS=330, BW=41.3MiB/s (43.3MB/s)(209MiB/5047msec) 00:35:10.987 slat (nsec): min=6283, max=31128, avg=10783.08, stdev=1959.38 00:35:10.987 clat (usec): min=3604, max=51360, avg=9038.80, stdev=4753.81 00:35:10.987 lat (usec): min=3610, max=51367, avg=9049.58, stdev=4753.86 00:35:10.987 clat percentiles (usec): 00:35:10.987 | 1.00th=[ 4015], 5.00th=[ 6063], 10.00th=[ 7046], 20.00th=[ 7767], 00:35:10.987 | 30.00th=[ 8029], 40.00th=[ 8291], 50.00th=[ 8586], 60.00th=[ 8848], 00:35:10.987 | 70.00th=[ 9110], 80.00th=[ 9634], 90.00th=[10159], 95.00th=[10814], 00:35:10.987 | 99.00th=[44827], 99.50th=[49021], 99.90th=[51119], 99.95th=[51119], 00:35:10.987 | 99.99th=[51119] 00:35:10.987 bw ( KiB/s): min=39168, max=47104, per=36.28%, avg=42624.00, stdev=2807.58, samples=10 00:35:10.987 iops : min= 306, max= 368, avg=333.00, stdev=21.93, samples=10 00:35:10.987 lat (msec) : 4=0.84%, 10=86.93%, 20=10.85%, 50=1.08%, 100=0.30% 00:35:10.987 cpu : usr=94.45%, sys=5.27%, ctx=7, majf=0, minf=11 00:35:10.987 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:10.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.987 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.987 issued rwts: total=1668,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.987 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:10.987 filename0: (groupid=0, jobs=1): err= 0: pid=356319: Wed Dec 11 10:13:19 2024 00:35:10.987 read: IOPS=308, BW=38.6MiB/s (40.5MB/s)(195MiB/5045msec) 00:35:10.987 slat (nsec): min=6271, max=27708, avg=10985.75, stdev=1866.89 00:35:10.987 clat (usec): min=3648, max=47496, avg=9679.84, stdev=3779.79 00:35:10.987 lat (usec): min=3654, max=47508, avg=9690.82, stdev=3779.88 00:35:10.987 clat percentiles (usec): 00:35:10.987 | 1.00th=[ 4080], 5.00th=[ 6390], 10.00th=[ 7242], 20.00th=[ 8291], 00:35:10.987 | 30.00th=[ 8717], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9765], 00:35:10.987 | 70.00th=[10290], 80.00th=[10814], 90.00th=[11469], 95.00th=[11863], 00:35:10.988 | 99.00th=[13042], 99.50th=[45351], 99.90th=[46924], 99.95th=[47449], 00:35:10.988 | 99.99th=[47449] 00:35:10.988 bw ( KiB/s): min=32256, max=44800, per=33.86%, avg=39782.40, stdev=3305.39, samples=10 00:35:10.988 iops : min= 252, max= 350, avg=310.80, stdev=25.82, samples=10 00:35:10.988 lat (msec) : 4=0.83%, 10=63.39%, 20=34.87%, 50=0.90% 00:35:10.988 cpu : usr=94.41%, sys=5.31%, ctx=6, majf=0, minf=9 00:35:10.988 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:10.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.988 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.988 issued rwts: total=1557,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.988 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:10.988 filename0: (groupid=0, jobs=1): err= 0: pid=356320: Wed Dec 11 10:13:19 2024 00:35:10.988 read: IOPS=281, BW=35.2MiB/s (36.9MB/s)(176MiB/5003msec) 00:35:10.988 slat (nsec): min=6250, max=29107, avg=11094.28, stdev=1787.06 00:35:10.988 clat (usec): min=3542, max=88835, avg=10646.14, stdev=5705.10 00:35:10.988 lat (usec): min=3548, max=88848, avg=10657.24, stdev=5705.09 00:35:10.988 clat percentiles (usec): 00:35:10.988 | 1.00th=[ 5669], 5.00th=[ 6915], 10.00th=[ 8029], 20.00th=[ 
8848],
00:35:10.988 | 30.00th=[ 9241], 40.00th=[ 9765], 50.00th=[10159], 60.00th=[10421],
00:35:10.988 | 70.00th=[10814], 80.00th=[11207], 90.00th=[11731], 95.00th=[12387],
00:35:10.988 | 99.00th=[48497], 99.50th=[50070], 99.90th=[50594], 99.95th=[88605],
00:35:10.988 | 99.99th=[88605]
00:35:10.988 bw ( KiB/s): min=27392, max=38912, per=30.63%, avg=35993.60, stdev=3124.18, samples=10
00:35:10.988 iops : min= 214, max= 304, avg=281.20, stdev=24.41, samples=10
00:35:10.988 lat (msec) : 4=0.85%, 10=45.45%, 20=51.85%, 50=1.35%, 100=0.50%
00:35:10.988 cpu : usr=94.20%, sys=5.52%, ctx=6, majf=0, minf=9
00:35:10.988 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:35:10.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:10.988 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:10.988 issued rwts: total=1408,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:10.988 latency : target=0, window=0, percentile=100.00%, depth=3
00:35:10.988
00:35:10.988 Run status group 0 (all jobs):
00:35:10.988 READ: bw=115MiB/s (120MB/s), 35.2MiB/s-41.3MiB/s (36.9MB/s-43.3MB/s), io=579MiB (607MB), run=5003-5047msec
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime=
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:35:10.988 bdev_null0
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:35:10.988 [2024-12-11 10:13:19.780707] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:35:10.988 bdev_null1
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:35:10.988 bdev_null2
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:35:10.988 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
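For readers following the trace: each create_subsystem iteration above boils down to four SPDK RPCs. A minimal standalone sketch of one iteration, assuming a running nvmf_tgt whose TCP transport was created earlier in this run and SPDK's scripts/rpc.py on PATH (names, sizes and the 10.0.0.2:4420 listener mirror the trace; the loop variable is illustrative):

#!/usr/bin/env bash
# One create_subsystem pass with NULL_DIF=2, written out as plain rpc.py calls.
sub=0
# 64 MiB null bdev, 512 B blocks plus 16 B metadata, end-to-end DIF type 2
rpc.py bdev_null_create "bdev_null${sub}" 64 512 --md-size 16 --dif-type 2
# NVMe-oF subsystem that accepts any host NQN
rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode${sub}" --serial-number "53313233-${sub}" --allow-any-host
# Expose the null bdev as a namespace of that subsystem
rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode${sub}" "bdev_null${sub}"
# All three subsystems in this test share one TCP listener address/port
rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode${sub}" -t tcp -a 10.0.0.2 -s 4420

The *** NVMe/TCP Target Listening *** notice is printed only once because cnode1 and cnode2 reuse the socket already opened for cnode0.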
00:35:10.989 10:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62
00:35:10.989 10:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2
00:35:10.989 10:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2
00:35:10.989 10:13:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=()
00:35:10.989 10:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:35:10.989 10:13:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config
00:35:10.989 10:13:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:35:10.989 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:35:10.989 10:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf
00:35:10.989 10:13:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:35:10.989 {
00:35:10.989 "params": {
00:35:10.989 "name": "Nvme$subsystem",
00:35:10.989 "trtype": "$TEST_TRANSPORT",
00:35:10.989 "traddr": "$NVMF_FIRST_TARGET_IP",
00:35:10.989 "adrfam": "ipv4",
00:35:10.989 "trsvcid": "$NVMF_PORT",
00:35:10.989 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:35:10.989 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:35:10.989 "hdgst": ${hdgst:-false},
00:35:10.989 "ddgst": ${ddgst:-false}
00:35:10.989 },
00:35:10.989 "method": "bdev_nvme_attach_controller"
00:35:10.989 }
00:35:10.989 EOF
00:35:10.989 )")
00:35:10.989 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:35:10.989 10:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file
00:35:10.989 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:35:10.989 10:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat
00:35:10.989 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers
00:35:10.989 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:35:10.989 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift
00:35:10.989 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib=
00:35:10.989 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:35:10.989 10:13:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:35:10.989 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:35:10.989 10:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 ))
00:35:10.989 10:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:35:10.989 10:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat
00:35:10.989 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan
00:35:10.989 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:35:10.989 10:13:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:35:10.989 10:13:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:35:10.989 {
00:35:10.989 "params": {
00:35:10.989 "name": "Nvme$subsystem",
00:35:10.989 "trtype": "$TEST_TRANSPORT",
00:35:10.989 "traddr": "$NVMF_FIRST_TARGET_IP",
00:35:10.989 "adrfam": "ipv4",
00:35:10.989 "trsvcid": "$NVMF_PORT",
00:35:10.989 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:35:10.989 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:35:10.989 "hdgst": ${hdgst:-false},
00:35:10.989 "ddgst": ${ddgst:-false}
00:35:10.989 },
00:35:10.989 "method": "bdev_nvme_attach_controller"
00:35:10.989 }
00:35:10.989 EOF
00:35:10.989 )")
00:35:10.989 10:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ ))
00:35:10.989 10:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:35:10.989 10:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat
00:35:10.989 10:13:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:35:10.989 10:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ ))
00:35:10.989 10:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:35:10.989 10:13:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:35:10.989 10:13:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:35:10.989 {
00:35:10.989 "params": {
00:35:10.989 "name": "Nvme$subsystem",
00:35:10.989 "trtype": "$TEST_TRANSPORT",
00:35:10.989 "traddr": "$NVMF_FIRST_TARGET_IP",
00:35:10.989 "adrfam": "ipv4",
00:35:10.989 "trsvcid": "$NVMF_PORT",
00:35:10.989 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:35:10.989 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:35:10.989 "hdgst": ${hdgst:-false},
00:35:10.989 "ddgst": ${ddgst:-false}
00:35:10.989 },
00:35:10.989 "method": "bdev_nvme_attach_controller"
00:35:10.989 }
00:35:10.989 EOF
00:35:10.989 )")
00:35:10.989 10:13:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:35:10.989 10:13:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq .
00:35:10.989 10:13:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=,
00:35:10.989 10:13:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:35:10.989 "params": {
00:35:10.989 "name": "Nvme0",
00:35:10.989 "trtype": "tcp",
00:35:10.989 "traddr": "10.0.0.2",
00:35:10.989 "adrfam": "ipv4",
00:35:10.989 "trsvcid": "4420",
00:35:10.989 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:35:10.989 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:35:10.989 "hdgst": false,
00:35:10.989 "ddgst": false
00:35:10.989 },
00:35:10.989 "method": "bdev_nvme_attach_controller"
00:35:10.989 },{
00:35:10.989 "params": {
00:35:10.989 "name": "Nvme1",
00:35:10.989 "trtype": "tcp",
00:35:10.989 "traddr": "10.0.0.2",
00:35:10.989 "adrfam": "ipv4",
00:35:10.989 "trsvcid": "4420",
00:35:10.989 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:35:10.989 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:35:10.989 "hdgst": false,
00:35:10.989 "ddgst": false
00:35:10.989 },
00:35:10.989 "method": "bdev_nvme_attach_controller"
00:35:10.989 },{
00:35:10.989 "params": {
00:35:10.989 "name": "Nvme2",
00:35:10.989 "trtype": "tcp",
00:35:10.989 "traddr": "10.0.0.2",
00:35:10.989 "adrfam": "ipv4",
00:35:10.989 "trsvcid": "4420",
00:35:10.989 "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:35:10.989 "hostnqn": "nqn.2016-06.io.spdk:host2",
00:35:10.989 "hdgst": false,
00:35:10.989 "ddgst": false
00:35:10.989 },
00:35:10.989 "method": "bdev_nvme_attach_controller"
00:35:10.989 }'
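The JSON printed above is the list of per-controller fragments; gen_nvmf_target_json wraps them into a standard SPDK JSON configuration that the fio spdk_bdev plugin loads via --spdk_json_conf (passed here as /dev/fd/62 to avoid temp files). A hedged reconstruction of the wrapped form for a single controller, using an on-disk file instead of the fd indirection (the "subsystems"/"bdev"/"config" nesting is the usual SPDK config layout, not shown verbatim in this log; /tmp/bdev.json and job.fio are illustrative paths):

#!/usr/bin/env bash
# Hypothetical standalone equivalent of the fio_bdev invocation traced above.
cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false, "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Preloading the plugin registers the spdk_bdev ioengine with fio; each attached
# controller's namespace then appears as a bdev (e.g. Nvme0n1) that the job
# file references via filename=.
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev.json job.fio

With three controllers in the config and numjobs=8 per filename section, fio starts the 24 threads reported below.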
00:35:10.989 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:35:10.989 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:35:10.989 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:35:10.989 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:35:10.989 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:35:10.989 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:35:10.989 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:35:10.989 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:35:10.989 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:35:10.989 10:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:35:10.989 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:35:10.989 ...
00:35:10.989 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:35:10.989 ...
00:35:10.989 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:35:10.989 ...
00:35:10.989 fio-3.35
00:35:10.989 Starting 24 threads
00:35:23.190
00:35:23.190 filename0: (groupid=0, jobs=1): err= 0: pid=357408: Wed Dec 11 10:13:31 2024
00:35:23.190 read: IOPS=62, BW=250KiB/s (256kB/s)(2536KiB/10138msec)
00:35:23.190 slat (nsec): min=7483, max=32682, avg=9458.45, stdev=3022.61
00:35:23.190 clat (msec): min=62, max=427, avg=254.89, stdev=61.94
00:35:23.190 lat (msec): min=62, max=427, avg=254.90, stdev=61.94
00:35:23.190 clat percentiles (msec):
00:35:23.190 | 1.00th=[ 63], 5.00th=[ 112], 10.00th=[ 180], 20.00th=[ 243],
00:35:23.190 | 30.00th=[ 257], 40.00th=[ 266], 50.00th=[ 268], 60.00th=[ 271],
00:35:23.190 | 70.00th=[ 271], 80.00th=[ 271], 90.00th=[ 321], 95.00th=[ 334],
00:35:23.190 | 99.00th=[ 409], 99.50th=[ 426], 99.90th=[ 426], 99.95th=[ 426],
00:35:23.190 | 99.99th=[ 426]
00:35:23.190 bw ( KiB/s): min= 175, max= 432, per=4.53%, avg=246.70, stdev=55.25, samples=20
00:35:23.190 iops : min= 43, max= 108, avg=61.30, stdev=13.96, samples=20
00:35:23.190 lat (msec) : 100=4.73%, 250=15.77%, 500=79.50%
00:35:23.190 cpu : usr=98.68%, sys=0.93%, ctx=12, majf=0, minf=20
00:35:23.190 IO depths : 1=0.2%, 2=0.8%, 4=7.7%, 8=78.7%, 16=12.6%, 32=0.0%, >=64=0.0%
00:35:23.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:23.190 complete : 0=0.0%, 4=89.1%, 8=5.7%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:23.190 issued rwts: total=634,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:23.190 latency : target=0, window=0, percentile=100.00%, depth=16
00:35:23.190 filename0: (groupid=0, jobs=1): err= 0: pid=357410: Wed Dec 11 10:13:31 2024
00:35:23.190 read: IOPS=59, BW=239KiB/s (245kB/s)(2424KiB/10139msec)
00:35:23.190 slat (nsec): min=7514, max=90348, avg=11789.84, stdev=11850.15
00:35:23.190 clat (msec): min=88, max=433, avg=266.68, stdev=59.99
00:35:23.190 lat (msec): min=88, max=433, avg=266.69, stdev=59.98
00:35:23.190 clat percentiles (msec):
00:35:23.190 | 1.00th=[ 89], 5.00th=[ 157], 10.00th=[ 218], 20.00th=[ 232],
00:35:23.190 | 30.00th=[ 257], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 271],
00:35:23.190 | 70.00th=[ 275], 80.00th=[ 300], 90.00th=[ 313], 95.00th=[ 388],
00:35:23.190 | 99.00th=[ 422], 99.50th=[ 435], 99.90th=[ 435], 99.95th=[ 435],
00:35:23.190 | 99.99th=[ 435]
00:35:23.190 bw ( KiB/s): min= 175, max= 336, per=4.32%, avg=235.50, stdev=44.14, samples=20
00:35:23.190 iops : min= 43, max= 84, avg=58.50, stdev=11.14, samples=20
00:35:23.190 lat (msec) : 100=2.31%, 250=23.43%, 500=74.26%
00:35:23.190 cpu : usr=98.63%, sys=0.98%, ctx=13, majf=0, minf=21
00:35:23.190 IO depths : 1=0.2%, 2=0.7%, 4=6.9%, 8=79.4%, 16=12.9%, 32=0.0%, >=64=0.0%
00:35:23.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:23.190 complete : 0=0.0%, 4=88.8%, 8=6.4%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:23.190 issued rwts: total=606,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:23.190 latency : target=0, window=0, percentile=100.00%, depth=16
00:35:23.190 filename0: (groupid=0, jobs=1): err= 0: pid=357411: Wed Dec 11 10:13:31 2024
00:35:23.190 read: IOPS=63, BW=252KiB/s (258kB/s)(2560KiB/10152msec)
00:35:23.190 slat (nsec): min=7530, max=32618, avg=9523.23, stdev=3148.85
00:35:23.190 clat (msec): min=61, max=360, avg=253.41, stdev=48.43
00:35:23.190 lat (msec): min=61, max=360, avg=253.42, stdev=48.43
00:35:23.190 clat percentiles (msec):
00:35:23.190 | 1.00th=[ 63], 5.00th=[ 88], 10.00th=[ 203], 20.00th=[ 247],
00:35:23.190 | 30.00th=[ 262], 40.00th=[ 266], 50.00th=[ 268], 60.00th=[ 271],
00:35:23.190 | 70.00th=[ 271], 80.00th=[ 271], 90.00th=[ 275], 95.00th=[ 279],
00:35:23.190 | 99.00th=[ 359], 99.50th=[ 359], 99.90th=[ 359], 99.95th=[ 359],
00:35:23.190 | 99.99th=[ 359]
00:35:23.190 bw ( KiB/s): min= 128, max= 384, per=4.58%, avg=249.10, stdev=60.77, samples=20
00:35:23.190 iops : min= 32, max= 96, avg=61.90, stdev=15.30, samples=20
00:35:23.190 lat (msec) : 100=5.00%, 250=16.88%, 500=78.12%
00:35:23.190 cpu : usr=98.71%, sys=0.89%, ctx=13, majf=0, minf=21
00:35:23.190 IO depths : 1=3.8%, 2=10.0%, 4=25.0%, 8=52.5%, 16=8.8%, 32=0.0%, >=64=0.0%
00:35:23.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:23.190 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:23.190 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:23.190 latency : target=0, window=0, percentile=100.00%, depth=16
00:35:23.190 filename0: (groupid=0, jobs=1): err= 0: pid=357412: Wed Dec 11 10:13:31 2024
00:35:23.190 read: IOPS=58, BW=234KiB/s (239kB/s)(2368KiB/10132msec)
00:35:23.190 slat (nsec): min=6448, max=62038, avg=20381.91, stdev=14384.01
00:35:23.190 clat (msec): min=154, max=385, avg=273.38, stdev=37.40
00:35:23.190 lat (msec): min=154, max=385, avg=273.40, stdev=37.41
00:35:23.190 clat percentiles (msec):
00:35:23.190 | 1.00th=[ 186], 5.00th=[ 215], 10.00th=[ 243], 20.00th=[ 259],
00:35:23.190 | 30.00th=[ 268], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 271],
00:35:23.190 | 70.00th=[ 271], 80.00th=[ 275], 90.00th=[ 334], 95.00th=[ 376],
00:35:23.190 | 99.00th=[ 384], 99.50th=[ 384], 99.90th=[ 384], 99.95th=[ 384],
00:35:23.190 | 99.99th=[ 384]
00:35:23.190 bw ( KiB/s): min= 128, max= 368, per=4.21%, avg=229.95, stdev=59.29, samples=20
00:35:23.190 iops : min= 32, max= 92, avg=57.15, stdev=14.84, samples=20
00:35:23.190 lat (msec) : 250=12.50%, 500=87.50%
00:35:23.190 cpu : usr=98.93%, sys=0.64%, ctx=27, majf=0, minf=17
00:35:23.190 IO depths : 1=1.0%, 2=7.3%, 4=25.0%, 8=55.2%, 16=11.5%, 32=0.0%, >=64=0.0%
00:35:23.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:23.190 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:23.190 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:23.190 latency : target=0, window=0, percentile=100.00%, depth=16
00:35:23.190 filename0: (groupid=0, jobs=1): err= 0: pid=357413: Wed Dec 11 10:13:31 2024
00:35:23.190 read: IOPS=59, BW=238KiB/s (244kB/s)(2416KiB/10131msec)
00:35:23.190 slat (nsec): min=7540, max=30511, avg=9856.14, stdev=3244.04
00:35:23.190 clat (msec): min=214, max=431, avg=267.12, stdev=23.98
00:35:23.190 lat (msec): min=214, max=431, avg=267.13, stdev=23.98
00:35:23.190 clat percentiles (msec):
00:35:23.190 | 1.00th=[ 215], 5.00th=[ 239], 10.00th=[ 251], 20.00th=[ 257],
00:35:23.190 | 30.00th=[ 266], 40.00th=[ 268], 50.00th=[ 268], 60.00th=[ 271],
00:35:23.190 | 70.00th=[ 271], 80.00th=[ 271], 90.00th=[ 275], 95.00th=[ 288],
00:35:23.190 | 99.00th=[ 380], 99.50th=[ 430], 99.90th=[ 430], 99.95th=[ 430],
00:35:23.190 | 99.99th=[ 430]
00:35:23.190 bw ( KiB/s): min= 143, max= 256, per=4.36%, avg=237.10, stdev=34.24, samples=20
00:35:23.190 iops : min= 35, max= 64, avg=58.90, stdev= 8.64, samples=20
00:35:23.190 lat (msec) : 250=11.09%, 500=88.91%
00:35:23.190 cpu : usr=98.67%, sys=0.94%, ctx=13, majf=0, minf=20
00:35:23.190 IO depths : 1=0.7%, 2=1.5%, 4=8.6%, 8=77.3%, 16=11.9%, 32=0.0%, >=64=0.0%
00:35:23.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:23.190 complete : 0=0.0%, 4=89.4%, 8=5.2%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:23.190 issued rwts: total=604,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:23.190 latency : target=0, window=0, percentile=100.00%, depth=16
00:35:23.191 filename0: (groupid=0, jobs=1): err= 0: pid=357414: Wed Dec 11 10:13:31 2024
00:35:23.191 read: IOPS=58, BW=233KiB/s (239kB/s)(2360KiB/10119msec)
00:35:23.191 slat (nsec): min=5803, max=75437, avg=15608.86, stdev=14864.77
00:35:23.191 clat (msec): min=199, max=397, avg=273.91, stdev=34.89
00:35:23.191 lat (msec): min=199, max=397, avg=273.92, stdev=34.90
00:35:23.191 clat percentiles (msec):
00:35:23.191 | 1.00th=[ 199], 5.00th=[ 224], 10.00th=[ 247], 20.00th=[ 259],
00:35:23.191 | 30.00th=[ 266], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 271],
00:35:23.191 | 70.00th=[ 271], 80.00th=[ 275], 90.00th=[ 321], 95.00th=[ 380],
00:35:23.191 | 99.00th=[ 397], 99.50th=[ 397], 99.90th=[ 397], 99.95th=[ 397],
00:35:23.191 | 99.99th=[ 397]
00:35:23.191 bw ( KiB/s): min= 128, max= 256, per=4.23%, avg=230.00, stdev=47.00, samples=20
00:35:23.191 iops : min= 32, max= 64, avg=57.20, stdev=11.88, samples=20
00:35:23.191 lat (msec) : 250=10.17%, 500=89.83%
00:35:23.191 cpu : usr=98.65%, sys=0.96%, ctx=13, majf=0, minf=25
00:35:23.191 IO depths : 1=1.0%, 2=7.3%, 4=25.1%, 8=55.3%, 16=11.4%, 32=0.0%, >=64=0.0%
00:35:23.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:23.191 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:23.191 issued rwts: total=590,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:23.191 latency : target=0, window=0, percentile=100.00%, depth=16
00:35:23.191 filename0: (groupid=0, jobs=1): err= 0: pid=357415: Wed Dec 11 10:13:31 2024
00:35:23.191 read: IOPS=58, BW=235KiB/s (240kB/s)(2376KiB/10119msec)
00:35:23.191 slat (nsec): min=4665, max=18846, avg=9113.19, stdev=2104.75
00:35:23.191 clat (msec): min=133, max=631, avg=272.15, stdev=52.32
00:35:23.191 lat (msec): min=133, max=631, avg=272.15, stdev=52.32
00:35:23.191 clat percentiles (msec):
00:35:23.191 | 1.00th=[ 134], 5.00th=[ 209], 10.00th=[ 245], 20.00th=[ 259],
00:35:23.191 | 30.00th=[ 266], 40.00th=[ 268], 50.00th=[ 268], 60.00th=[ 271],
00:35:23.191 | 70.00th=[ 271], 80.00th=[ 271], 90.00th=[ 275], 95.00th=[ 351],
00:35:23.191 | 99.00th=[ 506], 99.50th=[ 506], 99.90th=[ 634], 99.95th=[ 634],
00:35:23.191 | 99.99th=[ 634]
00:35:23.191 bw ( KiB/s): min= 175, max= 304, per=4.45%, avg=242.95, stdev=31.41, samples=19
00:35:23.191 iops : min= 43, max= 76, avg=60.42, stdev= 7.90, samples=19
00:35:23.191 lat (msec) : 250=11.78%, 500=85.52%, 750=2.69%
00:35:23.191 cpu : usr=98.63%, sys=0.99%, ctx=11, majf=0, minf=20
00:35:23.191 IO depths : 1=0.2%, 2=0.5%, 4=7.1%, 8=79.8%, 16=12.5%, 32=0.0%, >=64=0.0%
00:35:23.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:23.191 complete : 0=0.0%, 4=88.9%, 8=5.7%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:23.191 issued rwts: total=594,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:23.191 latency : target=0, window=0, percentile=100.00%, depth=16
00:35:23.191 filename0: (groupid=0, jobs=1): err= 0: pid=357416: Wed Dec 11 10:13:31 2024
00:35:23.191 read: IOPS=54, BW=220KiB/s (225kB/s)(2224KiB/10111msec)
00:35:23.191 slat (nsec): min=4839, max=70585, avg=12277.39, stdev=10790.77
00:35:23.191 clat (msec): min=214, max=512, avg=288.92, stdev=59.72
00:35:23.191 lat (msec): min=214, max=512, avg=288.93, stdev=59.72
00:35:23.191 clat percentiles (msec):
00:35:23.191 | 1.00th=[ 215], 5.00th=[ 239], 10.00th=[ 253], 20.00th=[ 264],
00:35:23.191 | 30.00th=[ 266], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 271],
00:35:23.191 | 70.00th=[ 275], 80.00th=[ 275], 90.00th=[ 388], 95.00th=[ 430],
00:35:23.191 | 99.00th=[ 510], 99.50th=[ 510], 99.90th=[ 514], 99.95th=[ 514],
00:35:23.191 | 99.99th=[ 514]
00:35:23.191 bw ( KiB/s): min= 127, max= 368, per=4.25%, avg=231.16, stdev=57.07, samples=19
00:35:23.191 iops : min= 31, max= 92, avg=57.47, stdev=14.48, samples=19
00:35:23.191 lat (msec) : 250=9.35%, 500=87.41%, 750=3.24%
00:35:23.191 cpu : usr=98.73%, sys=0.87%, ctx=13, majf=0, minf=22
00:35:23.191 IO depths : 1=0.7%, 2=3.1%, 4=13.1%, 8=71.2%, 16=11.9%, 32=0.0%, >=64=0.0%
00:35:23.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:23.191 complete : 0=0.0%, 4=90.7%, 8=3.9%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:23.191 issued rwts: total=556,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:23.191 latency : target=0, window=0, percentile=100.00%, depth=16
00:35:23.191 filename1: (groupid=0, jobs=1): err= 0: pid=357417: Wed Dec 11 10:13:31 2024
00:35:23.191 read: IOPS=61, BW=247KiB/s (253kB/s)(2504KiB/10139msec)
00:35:23.191 slat (nsec): min=7515, max=32688, avg=9379.79, stdev=3054.48
00:35:23.191 clat (msec): min=87, max=438, avg=257.81, stdev=48.69
00:35:23.191 lat (msec): min=87, max=438, avg=257.82, stdev=48.69
00:35:23.191 clat percentiles (msec):
00:35:23.191 | 1.00th=[ 88], 5.00th=[ 110], 10.00th=[ 199], 20.00th=[ 251],
00:35:23.191 | 30.00th=[ 262], 40.00th=[ 266], 50.00th=[ 268], 60.00th=[ 271],
00:35:23.191 | 70.00th=[ 271], 80.00th=[ 271], 90.00th=[ 284], 95.00th=[ 313],
00:35:23.191 | 99.00th=[ 414], 99.50th=[ 439], 99.90th=[ 439], 99.95th=[ 439],
00:35:23.191 | 99.99th=[ 439]
00:35:23.191 bw ( KiB/s): min= 175, max= 384, per=4.51%, avg=245.90, stdev=41.11, samples=20
00:35:23.191 iops : min= 43, max= 96, avg=61.10, stdev=10.37, samples=20
00:35:23.191 lat (msec) : 100=2.56%, 250=17.89%, 500=79.55%
00:35:23.191 cpu : usr=98.52%, sys=1.08%, ctx=12, majf=0, minf=23
00:35:23.191 IO depths : 1=1.6%, 2=4.2%, 4=13.7%, 8=69.5%, 16=11.0%, 32=0.0%, >=64=0.0%
00:35:23.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:23.191 complete : 0=0.0%, 4=90.8%, 8=3.7%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:23.191 issued rwts: total=626,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:23.191 latency : target=0, window=0, percentile=100.00%, depth=16
00:35:23.191 filename1: (groupid=0, jobs=1): err= 0: pid=357418: Wed Dec 11 10:13:31 2024
00:35:23.191 read: IOPS=41, BW=165KiB/s (169kB/s)(1664KiB/10103msec)
00:35:23.191 slat (nsec): min=7550, max=63846, avg=12597.70, stdev=8828.65
clat (msec): min=223, max=649, avg=388.46, stdev=65.51
00:35:23.191 lat (msec): min=224, max=649, avg=388.47, stdev=65.51
00:35:23.191 clat percentiles (msec):
00:35:23.191 | 1.00th=[ 253], 5.00th=[ 266], 10.00th=[ 268], 20.00th=[ 359],
00:35:23.191 | 30.00th=[ 372], 40.00th=[ 380], 50.00th=[ 393], 60.00th=[ 401],
00:35:23.191 | 70.00th=[ 409], 80.00th=[ 418], 90.00th=[ 451], 95.00th=[ 502],
00:35:23.191 | 99.00th=[ 550], 99.50th=[ 550], 99.90th=[ 651], 99.95th=[ 651],
00:35:23.191 | 99.99th=[ 651]
00:35:23.191 bw ( KiB/s): min= 111, max= 256, per=3.07%, avg=167.74, stdev=58.06, samples=19
00:35:23.191 iops : min= 27, max= 64, avg=41.47, stdev=14.50, samples=19
00:35:23.191 lat (msec) : 250=0.96%, 500=90.87%, 750=8.17%
00:35:23.191 cpu : usr=98.68%, sys=0.94%, ctx=12, majf=0, minf=22
00:35:23.191 IO depths : 1=3.1%, 2=9.4%, 4=25.0%, 8=53.1%, 16=9.4%, 32=0.0%, >=64=0.0%
00:35:23.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:23.191 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:23.191 issued rwts: total=416,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:23.191 latency : target=0, window=0, percentile=100.00%, depth=16
00:35:23.191 filename1: (groupid=0, jobs=1): err= 0: pid=357419: Wed Dec 11 10:13:31 2024
00:35:23.191 read: IOPS=57, BW=230KiB/s (236kB/s)(2328KiB/10116msec)
00:35:23.191 slat (nsec): min=5473, max=74697, avg=12769.57, stdev=11563.60
00:35:23.191 clat (msec): min=124, max=453, avg=277.74, stdev=48.45
00:35:23.191 lat (msec): min=124, max=453, avg=277.75, stdev=48.45
00:35:23.191 clat percentiles (msec):
00:35:23.191 | 1.00th=[ 125], 5.00th=[ 245], 10.00th=[ 253], 20.00th=[ 262],
00:35:23.191 | 30.00th=[ 266], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 271],
00:35:23.191 | 70.00th=[ 271], 80.00th=[ 275], 90.00th=[ 334], 95.00th=[ 393],
00:35:23.191 | 99.00th=[ 456], 99.50th=[ 456], 99.90th=[ 456], 99.95th=[ 456],
00:35:23.191 | 99.99th=[ 456]
00:35:23.191 bw ( KiB/s): min= 128, max= 256, per=4.16%, avg=226.10, stdev=43.11, samples=20
00:35:23.191 iops : min= 32, max= 64, avg=56.30, stdev=10.71, samples=20
00:35:23.191 lat (msec) : 250=6.87%, 500=93.13%
00:35:23.191 cpu : usr=98.48%, sys=1.14%, ctx=14, majf=0, minf=37
00:35:23.191 IO depths : 1=1.0%, 2=3.3%, 4=12.9%, 8=71.3%, 16=11.5%, 32=0.0%, >=64=0.0%
00:35:23.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:23.191 complete : 0=0.0%, 4=90.6%, 8=3.8%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:23.191 issued rwts: total=582,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:23.191 latency : target=0, window=0, percentile=100.00%, depth=16
00:35:23.191 filename1: (groupid=0, jobs=1): err= 0: pid=357420: Wed Dec 11 10:13:31 2024
00:35:23.191 read: IOPS=56, BW=228KiB/s (233kB/s)(2304KiB/10125msec)
00:35:23.191 slat (nsec): min=6222, max=75251, avg=19199.39, stdev=16724.27
00:35:23.191 clat (msec): min=180, max=517, avg=280.78, stdev=48.11
00:35:23.191 lat (msec): min=180, max=517, avg=280.80, stdev=48.11
00:35:23.191 clat percentiles (msec):
00:35:23.191 | 1.00th=[ 186], 5.00th=[ 215], 10.00th=[ 243], 20.00th=[ 264],
00:35:23.191 | 30.00th=[ 268], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 271],
00:35:23.191 | 70.00th=[ 271], 80.00th=[ 275], 90.00th=[ 376], 95.00th=[ 388],
00:35:23.191 | 99.00th=[ 418], 99.50th=[ 418], 99.90th=[ 518], 99.95th=[ 518],
00:35:23.191 | 99.99th=[ 518]
00:35:23.191 bw ( KiB/s): min= 127, max= 368, per=4.10%, avg=223.50, stdev=63.39, samples=20
00:35:23.191 iops : min= 31, max= 92, avg=55.50, stdev=16.03, samples=20
00:35:23.192 lat (msec) : 250=11.11%, 500=88.54%, 750=0.35%
00:35:23.192 cpu : usr=98.49%, sys=1.13%, ctx=13, majf=0, minf=23
00:35:23.192 IO depths : 1=1.7%, 2=8.0%, 4=25.0%, 8=54.5%, 16=10.8%, 32=0.0%, >=64=0.0%
00:35:23.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:23.192 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:23.192 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:23.192 latency : target=0, window=0, percentile=100.00%, depth=16
00:35:23.192 filename1: (groupid=0, jobs=1): err= 0: pid=357421: Wed Dec 11 10:13:31 2024
00:35:23.192 read: IOPS=56, BW=227KiB/s (233kB/s)(2304KiB/10143msec)
00:35:23.192 slat (nsec): min=7502, max=31614, avg=9725.32, stdev=3150.39
00:35:23.192 clat (msec): min=183, max=443, avg=280.36, stdev=46.87
00:35:23.192 lat (msec): min=183, max=443, avg=280.37, stdev=46.87
00:35:23.192 clat percentiles (msec):
00:35:23.192 | 1.00th=[ 215], 5.00th=[ 226], 10.00th=[ 245], 20.00th=[ 255],
00:35:23.192 | 30.00th=[ 262], 40.00th=[ 266], 50.00th=[ 268], 60.00th=[ 271],
00:35:23.192 | 70.00th=[ 275], 80.00th=[ 288], 90.00th=[ 372], 95.00th=[ 409],
00:35:23.192 | 99.00th=[ 435], 99.50th=[ 443], 99.90th=[ 443], 99.95th=[ 443],
00:35:23.192 | 99.99th=[ 443]
00:35:23.192 bw ( KiB/s): min= 175, max= 256, per=4.10%, avg=223.50, stdev=32.40, samples=20
00:35:23.192 iops : min= 43, max= 64, avg=55.50, stdev= 8.09, samples=20
00:35:23.192 lat (msec) : 250=11.46%, 500=88.54%
00:35:23.192 cpu : usr=98.73%, sys=0.89%, ctx=13, majf=0, minf=34
00:35:23.192 IO depths : 1=0.5%, 2=1.2%, 4=7.3%, 8=78.3%, 16=12.7%, 32=0.0%, >=64=0.0%
00:35:23.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:23.192 complete : 0=0.0%, 4=88.8%, 8=6.6%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:23.192 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:23.192 latency : target=0, window=0, percentile=100.00%, depth=16
00:35:23.192 filename1: (groupid=0, jobs=1): err= 0: pid=357423: Wed Dec 11 10:13:31 2024
00:35:23.192 read: IOPS=58, BW=235KiB/s (240kB/s)(2376KiB/10121msec)
00:35:23.192 slat (nsec): min=4511, max=30081, avg=9109.10, stdev=2360.24
00:35:23.192 clat (msec): min=136, max=628, avg=272.16, stdev=52.48
00:35:23.192 lat (msec): min=136, max=628, avg=272.17, stdev=52.48
00:35:23.192 clat percentiles (msec):
00:35:23.192 | 1.00th=[ 136], 5.00th=[ 226], 10.00th=[ 245], 20.00th=[ 259],
00:35:23.192 | 30.00th=[ 266], 40.00th=[ 268], 50.00th=[ 268], 60.00th=[ 271],
00:35:23.192 | 70.00th=[ 271], 80.00th=[ 271], 90.00th=[ 275], 95.00th=[ 351],
00:35:23.192 | 99.00th=[ 506], 99.50th=[ 506], 99.90th=[ 625], 99.95th=[ 625],
00:35:23.192 | 99.99th=[ 625]
00:35:23.192 bw ( KiB/s): min= 175, max= 304, per=4.45%, avg=242.95, stdev=34.02, samples=19
00:35:23.192 iops : min= 43, max= 76, avg=60.42, stdev= 8.55, samples=19
00:35:23.192 lat (msec) : 250=10.77%, 500=86.53%, 750=2.69%
00:35:23.192 cpu : usr=98.52%, sys=1.10%, ctx=11, majf=0, minf=23
00:35:23.192 IO depths : 1=0.2%, 2=0.5%, 4=7.1%, 8=79.8%, 16=12.5%, 32=0.0%, >=64=0.0%
00:35:23.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:23.192 complete : 0=0.0%, 4=88.9%, 8=5.7%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:23.192 issued rwts: total=594,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:23.192 latency : target=0, window=0, percentile=100.00%, depth=16
00:35:23.192 filename1: (groupid=0, jobs=1): err= 0: pid=357424: Wed Dec 11 10:13:31 2024
00:35:23.192 read: IOPS=58, BW=235KiB/s (240kB/s)(2376KiB/10132msec)
00:35:23.192 slat (nsec): min=7526, max=75374, avg=13100.55, stdev=9717.98
00:35:23.192 clat (msec): min=160, max=424, avg=272.53, stdev=38.75
00:35:23.192 lat (msec): min=160, max=424, avg=272.54, stdev=38.75
00:35:23.192 clat percentiles (msec):
00:35:23.192 | 1.00th=[ 186], 5.00th=[ 213], 10.00th=[ 239], 20.00th=[ 262],
00:35:23.192 | 30.00th=[ 266], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 271],
00:35:23.192 | 70.00th=[ 271], 80.00th=[ 275], 90.00th=[ 334], 95.00th=[ 359],
00:35:23.192 | 99.00th=[ 418], 99.50th=[ 426], 99.90th=[ 426], 99.95th=[ 426],
00:35:23.192 | 99.99th=[ 426]
00:35:23.192 bw ( KiB/s): min= 127, max= 368, per=4.23%, avg=230.75, stdev=60.95, samples=20
00:35:23.192 iops : min= 31, max= 92, avg=57.35, stdev=15.31, samples=20
00:35:23.192 lat (msec) : 250=13.47%, 500=86.53%
00:35:23.192 cpu : usr=98.48%, sys=1.14%, ctx=17, majf=0, minf=21
00:35:23.192 IO depths : 1=1.3%, 2=5.1%, 4=17.2%, 8=65.2%, 16=11.3%, 32=0.0%, >=64=0.0%
00:35:23.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:23.192 complete : 0=0.0%, 4=92.0%, 8=2.6%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:23.192 issued rwts: total=594,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:23.192 latency : target=0, window=0, percentile=100.00%, depth=16
00:35:23.192 filename1: (groupid=0, jobs=1): err= 0: pid=357425: Wed Dec 11 10:13:31 2024
00:35:23.192 read: IOPS=64, BW=258KiB/s (265kB/s)(2624KiB/10152msec)
00:35:23.192 slat (nsec): min=7536, max=33742, avg=9556.54, stdev=3036.25
00:35:23.192 clat (msec): min=61, max=275, avg=247.23, stdev=48.70
00:35:23.192 lat (msec): min=61, max=275, avg=247.24, stdev=48.70
00:35:23.192 clat percentiles (msec):
00:35:23.192 | 1.00th=[ 63], 5.00th=[ 116], 10.00th=[ 199], 20.00th=[ 245],
00:35:23.192 | 30.00th=[ 255], 40.00th=[ 266], 50.00th=[ 268], 60.00th=[ 271],
00:35:23.192 | 70.00th=[ 271], 80.00th=[ 271], 90.00th=[ 275], 95.00th=[ 275],
00:35:23.192 | 99.00th=[ 275], 99.50th=[ 275], 99.90th=[ 275], 99.95th=[ 275],
00:35:23.192 | 99.99th=[ 275]
00:35:23.192 bw ( KiB/s): min= 143, max= 512, per=4.69%, avg=255.50, stdev=69.37, samples=20
00:35:23.192 iops : min= 35, max= 128, avg=63.50, stdev=17.43, samples=20
00:35:23.192 lat (msec) : 100=4.88%, 250=18.60%, 500=76.52%
00:35:23.192 cpu : usr=98.38%, sys=1.20%, ctx=62, majf=0, minf=20
00:35:23.192 IO depths : 1=0.9%, 2=7.2%, 4=25.0%, 8=55.3%, 16=11.6%, 32=0.0%, >=64=0.0%
00:35:23.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:23.192 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:23.192 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:23.192 latency : target=0, window=0, percentile=100.00%, depth=16
00:35:23.192 filename2: (groupid=0, jobs=1): err= 0: pid=357426: Wed Dec 11 10:13:31 2024
00:35:23.192 read: IOPS=58, BW=233KiB/s (239kB/s)(2360KiB/10121msec)
00:35:23.192 slat (nsec): min=6091, max=75572, avg=17506.98, stdev=15881.02
00:35:23.192 clat (msec): min=184, max=397, avg=273.95, stdev=38.58
00:35:23.192 lat (msec): min=184, max=397, avg=273.97, stdev=38.59
00:35:23.192 clat percentiles (msec):
00:35:23.192 | 1.00th=[ 186], 5.00th=[ 228], 10.00th=[ 245], 20.00th=[ 255],
00:35:23.192 | 30.00th=[ 266], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 271],
00:35:23.192 | 70.00th=[ 271], 80.00th=[ 275], 90.00th=[ 321], 95.00th=[ 380],
00:35:23.192 | 99.00th=[ 397], 99.50th=[ 397], 99.90th=[ 397], 99.95th=[ 397],
00:35:23.192 | 99.99th=[ 397]
00:35:23.192 bw ( KiB/s): min= 128, max= 368, per=4.21%, avg=229.20, stdev=59.25, samples=20
00:35:23.192 iops : min= 32, max= 92, avg=57.00, stdev=14.99, samples=20
00:35:23.192 lat (msec) : 250=12.88%, 500=87.12%
00:35:23.192 cpu : usr=98.51%, sys=1.11%, ctx=11, majf=0, minf=25
00:35:23.192 IO depths : 1=1.2%, 2=7.5%, 4=25.1%, 8=55.1%, 16=11.2%, 32=0.0%, >=64=0.0%
00:35:23.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:23.192 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:23.192 issued rwts: total=590,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:23.192 latency : target=0, window=0, percentile=100.00%, depth=16
00:35:23.192 filename2: (groupid=0, jobs=1): err= 0: pid=357427: Wed Dec 11 10:13:31 2024
00:35:23.192 read: IOPS=58, BW=232KiB/s (238kB/s)(2352KiB/10117msec)
00:35:23.192 slat (nsec): min=4260, max=71304, avg=9953.83, stdev=6354.58
00:35:23.192 clat (msec): min=138, max=502, avg=274.77, stdev=51.68
00:35:23.192 lat (msec): min=138, max=502, avg=274.78, stdev=51.68
00:35:23.192 clat percentiles (msec):
00:35:23.192 | 1.00th=[ 140], 5.00th=[ 215], 10.00th=[ 243], 20.00th=[ 259],
00:35:23.192 | 30.00th=[ 266], 40.00th=[ 268], 50.00th=[ 268], 60.00th=[ 271],
00:35:23.192 | 70.00th=[ 271], 80.00th=[ 271], 90.00th=[ 300], 95.00th=[ 380],
00:35:23.192 | 99.00th=[ 502], 99.50th=[ 502], 99.90th=[ 502], 99.95th=[ 502],
00:35:23.192 | 99.99th=[ 502]
00:35:23.192 bw ( KiB/s): min= 143, max= 368, per=4.42%, avg=240.53, stdev=46.82, samples=19
00:35:23.192 iops : min= 35, max= 92, avg=59.89, stdev=11.85, samples=19
00:35:23.192 lat (msec) : 250=12.93%, 500=84.35%, 750=2.72%
00:35:23.192 cpu : usr=98.48%, sys=1.13%, ctx=11, majf=0, minf=23
00:35:23.192 IO depths : 1=0.7%, 2=1.7%, 4=9.2%, 8=76.5%, 16=11.9%, 32=0.0%, >=64=0.0%
00:35:23.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:23.192 complete : 0=0.0%, 4=89.5%, 8=5.0%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:23.192 issued rwts: total=588,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:23.192 latency : target=0, window=0, percentile=100.00%, depth=16
00:35:23.192 filename2: (groupid=0, jobs=1): err= 0: pid=357428: Wed Dec 11 10:13:31 2024
00:35:23.192 read: IOPS=58, BW=236KiB/s (241kB/s)(2392KiB/10152msec)
00:35:23.192 slat (nsec): min=7770, max=40766, avg=17656.94, stdev=4385.15
00:35:23.192 clat (msec): min=88, max=441, avg=271.21, stdev=57.83
00:35:23.192 lat (msec): min=88, max=441, avg=271.22, stdev=57.83
00:35:23.192 clat percentiles (msec):
00:35:23.192 | 1.00th=[ 89], 5.00th=[ 110], 10.00th=[ 243], 20.00th=[ 262],
00:35:23.192 | 30.00th=[ 264], 40.00th=[ 266], 50.00th=[ 271], 60.00th=[ 271],
00:35:23.192 | 70.00th=[ 275], 80.00th=[ 279], 90.00th=[ 359], 95.00th=[ 384],
00:35:23.192 | 99.00th=[ 439], 99.50th=[ 443], 99.90th=[ 443], 99.95th=[ 443],
00:35:23.192 | 99.99th=[ 443]
00:35:23.193 bw ( KiB/s): min= 143, max= 384, per=4.27%, avg=232.30, stdev=50.00, samples=20
00:35:23.193 iops : min= 35, max= 96, avg=57.70, stdev=12.57, samples=20
00:35:23.193 lat (msec) : 100=2.68%, 250=10.03%, 500=87.29%
00:35:23.193 cpu : usr=98.53%, sys=1.06%, ctx=15, majf=0, minf=19
00:35:23.193 IO depths : 1=1.0%, 2=3.5%, 4=13.2%, 8=70.4%, 16=11.9%, 32=0.0%, >=64=0.0%
00:35:23.193 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:23.193 complete : 0=0.0%, 4=90.7%, 8=4.2%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:23.193 issued rwts: total=598,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:23.193 latency : target=0, window=0, percentile=100.00%, depth=16
00:35:23.193 filename2: (groupid=0, jobs=1): err= 0: pid=357429: Wed Dec 11 10:13:31 2024
00:35:23.193 read: IOPS=62, BW=252KiB/s (258kB/s)(2560KiB/10167msec)
00:35:23.193 slat (nsec): min=7538, max=32809, avg=9538.14, stdev=2452.74
00:35:23.193 clat (msec): min=49, max=407, avg=253.60, stdev=62.68
00:35:23.193 lat (msec): min=49, max=407, avg=253.61, stdev=62.68
00:35:23.193 clat percentiles (msec):
00:35:23.193 | 1.00th=[ 50], 5.00th=[ 101], 10.00th=[ 169], 20.00th=[ 230],
00:35:23.193 | 30.00th=[ 257], 40.00th=[ 266], 50.00th=[ 268], 60.00th=[ 271],
00:35:23.193 | 70.00th=[ 271], 80.00th=[ 271], 90.00th=[ 300], 95.00th=[ 355],
00:35:23.193 | 99.00th=[ 405], 99.50th=[ 409], 99.90th=[ 409], 99.95th=[ 409],
00:35:23.193 | 99.99th=[ 409]
00:35:23.193 bw ( KiB/s): min= 175, max= 384, per=4.58%, avg=249.20, stdev=50.64, samples=20
00:35:23.193 iops : min= 43, max= 96, avg=62.00, stdev=12.81, samples=20
00:35:23.193 lat (msec) : 50=2.19%, 100=2.19%, 250=20.31%, 500=75.31%
00:35:23.193 cpu : usr=98.68%, sys=0.90%, ctx=12, majf=0, minf=32
00:35:23.193 IO depths : 1=0.2%, 2=0.8%, 4=7.7%, 8=78.8%, 16=12.7%, 32=0.0%, >=64=0.0%
00:35:23.193 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:23.193 complete : 0=0.0%, 4=89.1%, 8=5.8%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:23.193 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:23.193 latency : target=0, window=0, percentile=100.00%, depth=16
00:35:23.193 filename2: (groupid=0, jobs=1): err= 0: pid=357430: Wed Dec 11 10:13:31 2024
00:35:23.193 read: IOPS=41, BW=164KiB/s (168kB/s)(1664KiB/10116msec)
00:35:23.193 slat (nsec): min=5022, max=64255, avg=11334.29, stdev=8648.91
00:35:23.193 clat (msec): min=254, max=546, avg=388.95, stdev=61.50
00:35:23.193 lat (msec): min=254, max=546, avg=388.96, stdev=61.50
00:35:23.193 clat percentiles (msec):
00:35:23.193 | 1.00th=[ 255], 5.00th=[ 266], 10.00th=[ 271], 20.00th=[ 363],
00:35:23.193 | 30.00th=[ 372], 40.00th=[ 376], 50.00th=[ 388], 60.00th=[ 397],
00:35:23.193 | 70.00th=[ 418], 80.00th=[ 430], 90.00th=[ 451], 95.00th=[ 502],
00:35:23.193 | 99.00th=[ 542], 99.50th=[ 542], 99.90th=[ 550], 99.95th=[ 550],
00:35:23.193 | 99.99th=[ 550]
00:35:23.193 bw ( KiB/s): min= 127, max= 256, per=3.09%, avg=168.11, stdev=57.89, samples=19
00:35:23.193 iops : min= 31, max= 64, avg=41.79, stdev=14.56, samples=19
00:35:23.193 lat (msec) : 500=92.31%, 750=7.69%
00:35:23.193 cpu : usr=98.46%, sys=1.15%, ctx=10, majf=0, minf=24
00:35:23.193 IO depths : 1=3.8%, 2=10.1%, 4=25.0%, 8=52.4%, 16=8.7%, 32=0.0%, >=64=0.0%
00:35:23.193 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:23.193 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:23.193 issued rwts: total=416,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:23.193 latency : target=0, window=0, percentile=100.00%, depth=16
00:35:23.193 filename2: (groupid=0, jobs=1): err= 0: pid=357431: Wed Dec 11 10:13:31 2024
00:35:23.193 read: IOPS=39, BW=160KiB/s (164kB/s)(1600KiB/10009msec)
00:35:23.193 slat (nsec): min=7497, max=28257, avg=9903.74, stdev=3263.99
00:35:23.193 clat (msec): min=247, max=540, avg=400.26, stdev=55.28
00:35:23.193 lat (msec): min=247, max=540, avg=400.27, stdev=55.28
00:35:23.193 clat percentiles (msec):
00:35:23.193 | 1.00th=[ 257], 5.00th=[ 271], 10.00th=[ 359], 20.00th=[ 368],
00:35:23.193 | 30.00th=[ 372], 40.00th=[ 384], 50.00th=[ 397], 60.00th=[ 409],
00:35:23.193 | 70.00th=[ 422], 80.00th=[ 426], 90.00th=[ 443], 95.00th=[ 502],
00:35:23.193 | 99.00th=[ 542], 99.50th=[ 542], 99.90th=[ 542], 99.95th=[ 542],
00:35:23.193 | 99.99th=[ 542]
00:35:23.193 bw ( KiB/s): min= 112, max= 256, per=3.00%, avg=163.22, stdev=53.72, samples=18
00:35:23.193 iops : min= 28, max= 64, avg=40.56, stdev=13.50, samples=18
00:35:23.193 lat (msec) : 250=0.50%, 500=90.00%, 750=9.50%
00:35:23.193 cpu : usr=98.74%, sys=0.87%, ctx=13, majf=0, minf=24
00:35:23.193 IO depths : 1=3.5%, 2=9.8%, 4=25.0%, 8=52.8%, 16=9.0%, 32=0.0%, >=64=0.0%
00:35:23.193 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:23.193 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:23.193 issued rwts: total=400,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:23.193 latency : target=0, window=0, percentile=100.00%, depth=16
00:35:23.193 filename2: (groupid=0, jobs=1): err= 0: pid=357433: Wed Dec 11 10:13:31 2024
00:35:23.193 read: IOPS=58, BW=234KiB/s (240kB/s)(2368KiB/10122msec)
00:35:23.193 slat (nsec): min=7558, max=75486, avg=19446.81, stdev=15689.84
00:35:23.193 clat (msec): min=149, max=385, avg=273.11, stdev=37.91
00:35:23.193 lat (msec): min=149, max=385, avg=273.13, stdev=37.91
00:35:23.193 clat percentiles (msec):
00:35:23.193 | 1.00th=[ 186], 5.00th=[ 215], 10.00th=[ 243], 20.00th=[ 259],
00:35:23.193 | 30.00th=[ 268], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 271],
00:35:23.193 | 70.00th=[ 271], 80.00th=[ 275], 90.00th=[ 321], 95.00th=[ 376],
00:35:23.193 | 99.00th=[ 384], 99.50th=[ 384], 99.90th=[ 384], 99.95th=[ 384],
00:35:23.193 | 99.99th=[ 384]
00:35:23.193 bw ( KiB/s): min= 128, max= 368, per=4.23%, avg=230.00, stdev=59.41, samples=20
00:35:23.193 iops : min= 32, max= 92, avg=57.20, stdev=14.96, samples=20
00:35:23.193 lat (msec) : 250=13.18%, 500=86.82%
00:35:23.193 cpu : usr=98.48%, sys=1.13%, ctx=14, majf=0, minf=21
00:35:23.193 IO depths : 1=0.8%, 2=7.1%, 4=25.0%, 8=55.4%, 16=11.7%, 32=0.0%, >=64=0.0%
00:35:23.193 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:23.193 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:23.193 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:23.193 latency : target=0, window=0, percentile=100.00%, depth=16
00:35:23.193 filename2: (groupid=0, jobs=1): err= 0: pid=357434: Wed Dec 11 10:13:31 2024
00:35:23.193 read: IOPS=54, BW=219KiB/s (224kB/s)(2216KiB/10116msec)
00:35:23.193 slat (nsec): min=7520, max=52925, avg=11327.07, stdev=5970.05
00:35:23.193 clat (msec): min=121, max=502, avg=291.84, stdev=63.79
00:35:23.193 lat (msec): min=121, max=502, avg=291.85, stdev=63.79
00:35:23.193 clat percentiles (msec):
00:35:23.193 | 1.00th=[ 123], 5.00th=[ 247], 10.00th=[ 255], 20.00th=[ 262],
00:35:23.193 | 30.00th=[ 266], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 275],
00:35:23.193 | 70.00th=[ 275], 80.00th=[ 359], 90.00th=[ 388], 95.00th=[ 426],
00:35:23.193 | 99.00th=[ 502], 99.50th=[ 502], 99.90th=[ 502], 99.95th=[ 502],
00:35:23.193 | 99.99th=[ 502]
00:35:23.193 bw ( KiB/s): min= 143, max= 256, per=4.16%, avg=226.21, stdev=39.76, samples=19
00:35:23.193 iops : min= 35, max= 64, avg=56.32, stdev=10.03, samples=19
00:35:23.193 lat (msec) : 250=5.78%, 500=91.34%, 750=2.89%
00:35:23.193 cpu : usr=98.42%, sys=1.19%, ctx=15, majf=0, minf=29
00:35:23.193 IO depths : 1=0.9%, 2=2.9%, 4=11.7%, 8=72.6%, 16=11.9%, 32=0.0%, >=64=0.0%
00:35:23.193 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:23.193 complete : 0=0.0%, 4=90.2%, 8=4.7%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:23.193 issued rwts: total=554,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:23.193 latency : target=0, window=0, percentile=100.00%, depth=16
00:35:23.193
00:35:23.193 Run status group 0 (all jobs):
00:35:23.193 READ: bw=5435KiB/s (5565kB/s), 160KiB/s-258KiB/s (164kB/s-265kB/s), io=54.0MiB (56.6MB), run=10009-10167msec
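Not part of the test itself: when eyeballing runs like the one above, the per-job averages can be pulled out of a saved console log with a small awk sketch (build.log is an assumed filename for a saved copy of this output):

# Print each per-file "iops : ... avg=" value from a saved fio log.
awk '/iops[[:space:]]*:/ { for (i = 1; i <= NF; i++) if ($i ~ /^avg=/) { sub(/,$/, "", $i); print $i } }' build.log

Summing those averages should land close to the aggregate READ bandwidth reported in the Run status group line, modulo the slightly different per-job runtimes (10009-10167 msec here).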
00:35:23.193 10:13:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2
00:35:23.193 10:13:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub
00:35:23.193 10:13:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:35:23.193 10:13:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0
00:35:23.193 10:13:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0
00:35:23.193 10:13:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:35:23.193 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:23.193 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:35:23.193 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:23.193 10:13:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:35:23.193 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:23.193 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:35:23.193 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:23.193 10:13:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:35:23.193 10:13:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:35:23.194 bdev_null0
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:35:23.194 [2024-12-11 10:13:31.732478] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1
10:13:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:35:23.194 bdev_null1
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
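This round switches to NULL_DIF=1 with mixed block sizes. The job file that gen_fio_conf pipes to fio on /dev/fd/61 is not echoed in the log; the following is a hedged reconstruction consistent with the @115 parameters above and with the banner fio prints below (the section names, the Nvme0n1/Nvme1n1 filenames, and time_based are assumptions; thread=1 is required by the SPDK fio plugin):

#!/usr/bin/env bash
# Hypothetical gen_fio_conf output for bs=8k,16k,128k numjobs=2 iodepth=8 runtime=5 files=1.
cat > job.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=8k,16k,128k
numjobs=2
iodepth=8
runtime=5
time_based=1

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF

The bs=8k,16k,128k triple sets read, write, and trim block sizes respectively, which is why fio's banner below reports (R) 8192B, (W) 16.0KiB, (T) 128KiB; two sections times numjobs=2 accounts for the 4 threads it starts.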
)") 00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:23.194 10:13:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:23.194 { 00:35:23.194 "params": { 00:35:23.194 "name": "Nvme$subsystem", 00:35:23.194 "trtype": "$TEST_TRANSPORT", 00:35:23.194 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:23.194 "adrfam": "ipv4", 00:35:23.195 "trsvcid": "$NVMF_PORT", 00:35:23.195 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:23.195 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:23.195 "hdgst": ${hdgst:-false}, 00:35:23.195 "ddgst": ${ddgst:-false} 00:35:23.195 }, 00:35:23.195 "method": "bdev_nvme_attach_controller" 00:35:23.195 } 00:35:23.195 EOF 00:35:23.195 )") 00:35:23.195 10:13:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:23.195 10:13:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:23.195 10:13:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:23.195 10:13:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:35:23.195 10:13:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:35:23.195 10:13:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:23.195 "params": { 00:35:23.195 "name": "Nvme0", 00:35:23.195 "trtype": "tcp", 00:35:23.195 "traddr": "10.0.0.2", 00:35:23.195 "adrfam": "ipv4", 00:35:23.195 "trsvcid": "4420", 00:35:23.195 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:23.195 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:23.195 "hdgst": false, 00:35:23.195 "ddgst": false 00:35:23.195 }, 00:35:23.195 "method": "bdev_nvme_attach_controller" 00:35:23.195 },{ 00:35:23.195 "params": { 00:35:23.195 "name": "Nvme1", 00:35:23.195 "trtype": "tcp", 00:35:23.195 "traddr": "10.0.0.2", 00:35:23.195 "adrfam": "ipv4", 00:35:23.195 "trsvcid": "4420", 00:35:23.195 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:23.195 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:23.195 "hdgst": false, 00:35:23.195 "ddgst": false 00:35:23.195 }, 00:35:23.195 "method": "bdev_nvme_attach_controller" 00:35:23.195 }' 00:35:23.195 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:23.195 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:23.195 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:23.195 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:23.195 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:23.195 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:23.195 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:23.195 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:23.195 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:23.195 10:13:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:23.195 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:23.195 ... 00:35:23.195 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:23.195 ... 
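The trace above shows how gen_nvmf_target_json assembles the input for fio's --spdk_json_conf: one bdev_nvme_attach_controller params fragment per subsystem is appended to the config array via a heredoc, then the fragments are comma-joined with IFS=, and printed. A minimal standalone sketch of that pattern follows (assumptions: two subsystems with the fixed address/port seen in the trace; the jq wrapper that embeds these fragments into a full SPDK JSON config is not visible in this excerpt):

config=()
for subsystem in 0 1; do
	config+=("$(cat <<-EOF
	{
	  "params": {
	    "name": "Nvme$subsystem",
	    "trtype": "tcp",
	    "traddr": "10.0.0.2",
	    "adrfam": "ipv4",
	    "trsvcid": "4420",
	    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
	    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
	    "hdgst": ${hdgst:-false},
	    "ddgst": ${ddgst:-false}
	  },
	  "method": "bdev_nvme_attach_controller"
	}
	EOF
	)")
done
# Comma-join the fragments, matching the printed output above.
IFS=,
printf '%s\n' "${config[*]}"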
00:35:23.195 fio-3.35 00:35:23.195 Starting 4 threads 00:35:28.468 00:35:28.468 filename0: (groupid=0, jobs=1): err= 0: pid=359430: Wed Dec 11 10:13:37 2024 00:35:28.468 read: IOPS=2767, BW=21.6MiB/s (22.7MB/s)(108MiB/5005msec) 00:35:28.468 slat (nsec): min=6084, max=44725, avg=8833.33, stdev=3267.41 00:35:28.468 clat (usec): min=593, max=5552, avg=2862.53, stdev=405.95 00:35:28.468 lat (usec): min=605, max=5558, avg=2871.36, stdev=405.79 00:35:28.468 clat percentiles (usec): 00:35:28.468 | 1.00th=[ 1778], 5.00th=[ 2245], 10.00th=[ 2409], 20.00th=[ 2540], 00:35:28.468 | 30.00th=[ 2704], 40.00th=[ 2835], 50.00th=[ 2933], 60.00th=[ 2966], 00:35:28.468 | 70.00th=[ 2999], 80.00th=[ 3064], 90.00th=[ 3261], 95.00th=[ 3490], 00:35:28.468 | 99.00th=[ 4080], 99.50th=[ 4359], 99.90th=[ 4883], 99.95th=[ 5211], 00:35:28.468 | 99.99th=[ 5538] 00:35:28.468 bw ( KiB/s): min=21520, max=23408, per=26.05%, avg=22150.40, stdev=630.97, samples=10 00:35:28.468 iops : min= 2690, max= 2926, avg=2768.80, stdev=78.87, samples=10 00:35:28.468 lat (usec) : 750=0.01%, 1000=0.01% 00:35:28.468 lat (msec) : 2=2.32%, 4=96.37%, 10=1.29% 00:35:28.468 cpu : usr=96.16%, sys=3.52%, ctx=8, majf=0, minf=9 00:35:28.468 IO depths : 1=0.6%, 2=5.8%, 4=66.7%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:28.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:28.468 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:28.468 issued rwts: total=13852,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:28.468 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:28.468 filename0: (groupid=0, jobs=1): err= 0: pid=359431: Wed Dec 11 10:13:37 2024 00:35:28.468 read: IOPS=2617, BW=20.5MiB/s (21.4MB/s)(102MiB/5001msec) 00:35:28.468 slat (nsec): min=6164, max=43590, avg=8843.19, stdev=3031.69 00:35:28.468 clat (usec): min=718, max=5471, avg=3030.36, stdev=452.12 00:35:28.468 lat (usec): min=724, max=5477, avg=3039.21, stdev=452.05 00:35:28.468 clat percentiles (usec): 00:35:28.468 | 1.00th=[ 2057], 5.00th=[ 2409], 10.00th=[ 2540], 20.00th=[ 2769], 00:35:28.468 | 30.00th=[ 2900], 40.00th=[ 2966], 50.00th=[ 2966], 60.00th=[ 2999], 00:35:28.468 | 70.00th=[ 3064], 80.00th=[ 3228], 90.00th=[ 3556], 95.00th=[ 3884], 00:35:28.468 | 99.00th=[ 4686], 99.50th=[ 4948], 99.90th=[ 5407], 99.95th=[ 5407], 00:35:28.468 | 99.99th=[ 5473] 00:35:28.468 bw ( KiB/s): min=19616, max=21568, per=24.63%, avg=20935.11, stdev=603.94, samples=9 00:35:28.468 iops : min= 2452, max= 2696, avg=2616.89, stdev=75.49, samples=9 00:35:28.468 lat (usec) : 750=0.02%, 1000=0.01% 00:35:28.468 lat (msec) : 2=0.73%, 4=95.23%, 10=4.03% 00:35:28.468 cpu : usr=95.76%, sys=3.90%, ctx=6, majf=0, minf=9 00:35:28.468 IO depths : 1=0.1%, 2=2.9%, 4=68.6%, 8=28.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:28.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:28.468 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:28.468 issued rwts: total=13092,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:28.468 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:28.468 filename1: (groupid=0, jobs=1): err= 0: pid=359432: Wed Dec 11 10:13:37 2024 00:35:28.468 read: IOPS=2641, BW=20.6MiB/s (21.6MB/s)(103MiB/5003msec) 00:35:28.468 slat (nsec): min=6169, max=36945, avg=8759.34, stdev=2950.63 00:35:28.468 clat (usec): min=841, max=5497, avg=3003.15, stdev=413.46 00:35:28.468 lat (usec): min=854, max=5503, avg=3011.91, stdev=413.30 00:35:28.468 clat percentiles (usec): 00:35:28.468 | 1.00th=[ 2040], 
5.00th=[ 2376], 10.00th=[ 2540], 20.00th=[ 2769], 00:35:28.468 | 30.00th=[ 2900], 40.00th=[ 2966], 50.00th=[ 2966], 60.00th=[ 2999], 00:35:28.468 | 70.00th=[ 3064], 80.00th=[ 3228], 90.00th=[ 3458], 95.00th=[ 3752], 00:35:28.468 | 99.00th=[ 4359], 99.50th=[ 4621], 99.90th=[ 5080], 99.95th=[ 5211], 00:35:28.468 | 99.99th=[ 5473] 00:35:28.468 bw ( KiB/s): min=20320, max=21552, per=24.83%, avg=21111.11, stdev=378.38, samples=9 00:35:28.468 iops : min= 2540, max= 2694, avg=2638.89, stdev=47.30, samples=9 00:35:28.468 lat (usec) : 1000=0.01% 00:35:28.468 lat (msec) : 2=0.81%, 4=96.17%, 10=3.01% 00:35:28.468 cpu : usr=95.86%, sys=3.82%, ctx=8, majf=0, minf=9 00:35:28.468 IO depths : 1=0.3%, 2=3.2%, 4=68.7%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:28.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:28.468 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:28.468 issued rwts: total=13213,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:28.468 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:28.468 filename1: (groupid=0, jobs=1): err= 0: pid=359433: Wed Dec 11 10:13:37 2024 00:35:28.468 read: IOPS=2605, BW=20.4MiB/s (21.3MB/s)(102MiB/5001msec) 00:35:28.468 slat (nsec): min=6165, max=48837, avg=8548.39, stdev=2895.06 00:35:28.468 clat (usec): min=661, max=5568, avg=3045.93, stdev=466.54 00:35:28.468 lat (usec): min=670, max=5575, avg=3054.47, stdev=466.43 00:35:28.468 clat percentiles (usec): 00:35:28.468 | 1.00th=[ 2089], 5.00th=[ 2409], 10.00th=[ 2540], 20.00th=[ 2769], 00:35:28.468 | 30.00th=[ 2900], 40.00th=[ 2966], 50.00th=[ 2966], 60.00th=[ 2999], 00:35:28.468 | 70.00th=[ 3097], 80.00th=[ 3261], 90.00th=[ 3621], 95.00th=[ 3916], 00:35:28.468 | 99.00th=[ 4752], 99.50th=[ 5014], 99.90th=[ 5211], 99.95th=[ 5276], 00:35:28.468 | 99.99th=[ 5538] 00:35:28.468 bw ( KiB/s): min=19984, max=21424, per=24.44%, avg=20774.22, stdev=528.33, samples=9 00:35:28.468 iops : min= 2498, max= 2678, avg=2596.78, stdev=66.04, samples=9 00:35:28.468 lat (usec) : 750=0.01%, 1000=0.08% 00:35:28.468 lat (msec) : 2=0.59%, 4=95.26%, 10=4.06% 00:35:28.468 cpu : usr=95.72%, sys=3.98%, ctx=6, majf=0, minf=9 00:35:28.468 IO depths : 1=0.1%, 2=3.1%, 4=68.5%, 8=28.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:28.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:28.468 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:28.468 issued rwts: total=13029,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:28.468 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:28.468 00:35:28.468 Run status group 0 (all jobs): 00:35:28.468 READ: bw=83.0MiB/s (87.1MB/s), 20.4MiB/s-21.6MiB/s (21.3MB/s-22.7MB/s), io=416MiB (436MB), run=5001-5005msec 00:35:28.468 10:13:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:35:28.468 10:13:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:28.468 10:13:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:28.468 10:13:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:28.468 10:13:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:28.468 10:13:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:28.468 10:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.468 10:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # 
set +x 00:35:28.468 10:13:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.468 10:13:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:28.468 10:13:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.468 10:13:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:28.468 10:13:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.468 10:13:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:28.468 10:13:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:28.468 10:13:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:28.468 10:13:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:28.468 10:13:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.468 10:13:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:28.468 10:13:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.468 10:13:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:28.468 10:13:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.468 10:13:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:28.468 10:13:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.468 00:35:28.468 real 0m24.733s 00:35:28.468 user 4m54.951s 00:35:28.468 sys 0m5.235s 00:35:28.468 10:13:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:28.468 10:13:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:28.468 ************************************ 00:35:28.468 END TEST fio_dif_rand_params 00:35:28.468 ************************************ 00:35:28.728 10:13:38 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:35:28.729 10:13:38 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:28.729 10:13:38 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:28.729 10:13:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:28.729 ************************************ 00:35:28.729 START TEST fio_dif_digest 00:35:28.729 ************************************ 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:35:28.729 
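For reference, these are the fio_dif_digest parameters being set in the trace above, with what each one drives (a sketch; the helper functions that consume them are defined elsewhere in dif.sh):

NULL_DIF=3          # null bdev is created with --dif-type 3 --md-size 16
bs=128k,128k,128k   # 128 KiB blocks for each data direction
numjobs=3           # three fio threads, matching "Starting 3 threads" below
iodepth=3
runtime=10          # seconds per job
hdgst=true          # request NVMe/TCP header digest when attaching
ddgst=true          # request NVMe/TCP data digest when attaching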
10:13:38 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:28.729 bdev_null0 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:28.729 [2024-12-11 10:13:38.141101] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:28.729 { 00:35:28.729 "params": { 00:35:28.729 "name": "Nvme$subsystem", 00:35:28.729 "trtype": 
"$TEST_TRANSPORT", 00:35:28.729 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:28.729 "adrfam": "ipv4", 00:35:28.729 "trsvcid": "$NVMF_PORT", 00:35:28.729 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:28.729 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:28.729 "hdgst": ${hdgst:-false}, 00:35:28.729 "ddgst": ${ddgst:-false} 00:35:28.729 }, 00:35:28.729 "method": "bdev_nvme_attach_controller" 00:35:28.729 } 00:35:28.729 EOF 00:35:28.729 )") 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:28.729 "params": { 00:35:28.729 "name": "Nvme0", 00:35:28.729 "trtype": "tcp", 00:35:28.729 "traddr": "10.0.0.2", 00:35:28.729 "adrfam": "ipv4", 00:35:28.729 "trsvcid": "4420", 00:35:28.729 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:28.729 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:28.729 "hdgst": true, 00:35:28.729 "ddgst": true 00:35:28.729 }, 00:35:28.729 "method": "bdev_nvme_attach_controller" 00:35:28.729 }' 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:28.729 10:13:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:28.989 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:28.989 ... 
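Note how the digest settings propagate: the template fragment uses ${hdgst:-false} and ${ddgst:-false}, so with hdgst=true and ddgst=true set by fio_dif_digest the printed controller params above carry "hdgst": true and "ddgst": true, and the attached controller requests NVMe/TCP header and data digests. A two-line sketch of the expansion:

hdgst=true; ddgst=true
printf '"hdgst": %s, "ddgst": %s\n' "${hdgst:-false}" "${ddgst:-false}"
# -> "hdgst": true, "ddgst": true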
00:35:28.989 fio-3.35 00:35:28.989 Starting 3 threads 00:35:41.202 00:35:41.202 filename0: (groupid=0, jobs=1): err= 0: pid=360561: Wed Dec 11 10:13:49 2024 00:35:41.202 read: IOPS=289, BW=36.2MiB/s (37.9MB/s)(364MiB/10047msec) 00:35:41.202 slat (nsec): min=6505, max=73978, avg=18234.17, stdev=6926.73 00:35:41.202 clat (usec): min=8016, max=50080, avg=10330.15, stdev=1222.38 00:35:41.202 lat (usec): min=8028, max=50089, avg=10348.39, stdev=1222.21 00:35:41.202 clat percentiles (usec): 00:35:41.202 | 1.00th=[ 8717], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9765], 00:35:41.202 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10290], 60.00th=[10552], 00:35:41.202 | 70.00th=[10683], 80.00th=[10814], 90.00th=[11076], 95.00th=[11338], 00:35:41.202 | 99.00th=[11994], 99.50th=[12256], 99.90th=[12911], 99.95th=[47973], 00:35:41.202 | 99.99th=[50070] 00:35:41.202 bw ( KiB/s): min=36096, max=38144, per=35.20%, avg=37171.20, stdev=535.70, samples=20 00:35:41.202 iops : min= 282, max= 298, avg=290.50, stdev= 4.15, samples=20 00:35:41.202 lat (msec) : 10=31.95%, 20=67.98%, 50=0.03%, 100=0.03% 00:35:41.203 cpu : usr=96.27%, sys=3.39%, ctx=26, majf=0, minf=89 00:35:41.203 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:41.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:41.203 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:41.203 issued rwts: total=2908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:41.203 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:41.203 filename0: (groupid=0, jobs=1): err= 0: pid=360562: Wed Dec 11 10:13:49 2024 00:35:41.203 read: IOPS=267, BW=33.5MiB/s (35.1MB/s)(336MiB/10044msec) 00:35:41.203 slat (nsec): min=6480, max=38156, avg=16633.63, stdev=6409.41 00:35:41.203 clat (usec): min=8990, max=51174, avg=11175.02, stdev=1276.98 00:35:41.203 lat (usec): min=9004, max=51186, avg=11191.65, stdev=1276.79 00:35:41.203 clat percentiles (usec): 00:35:41.203 | 1.00th=[ 9503], 5.00th=[ 9896], 10.00th=[10290], 20.00th=[10552], 00:35:41.203 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11076], 60.00th=[11338], 00:35:41.203 | 70.00th=[11469], 80.00th=[11731], 90.00th=[12125], 95.00th=[12518], 00:35:41.203 | 99.00th=[13042], 99.50th=[13304], 99.90th=[14353], 99.95th=[46400], 00:35:41.203 | 99.99th=[51119] 00:35:41.203 bw ( KiB/s): min=33280, max=35328, per=32.56%, avg=34380.80, stdev=581.99, samples=20 00:35:41.203 iops : min= 260, max= 276, avg=268.60, stdev= 4.55, samples=20 00:35:41.203 lat (msec) : 10=6.14%, 20=93.79%, 50=0.04%, 100=0.04% 00:35:41.203 cpu : usr=96.30%, sys=3.39%, ctx=16, majf=0, minf=72 00:35:41.203 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:41.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:41.203 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:41.203 issued rwts: total=2688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:41.203 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:41.203 filename0: (groupid=0, jobs=1): err= 0: pid=360563: Wed Dec 11 10:13:49 2024 00:35:41.203 read: IOPS=268, BW=33.5MiB/s (35.1MB/s)(337MiB/10044msec) 00:35:41.203 slat (nsec): min=6869, max=66361, avg=21398.38, stdev=6247.36 00:35:41.203 clat (usec): min=8126, max=47328, avg=11149.46, stdev=1201.77 00:35:41.203 lat (usec): min=8140, max=47353, avg=11170.85, stdev=1201.84 00:35:41.203 clat percentiles (usec): 00:35:41.203 | 1.00th=[ 9503], 5.00th=[10028], 10.00th=[10290], 
20.00th=[10552], 00:35:41.203 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11076], 60.00th=[11207], 00:35:41.203 | 70.00th=[11469], 80.00th=[11731], 90.00th=[12125], 95.00th=[12387], 00:35:41.203 | 99.00th=[13042], 99.50th=[13173], 99.90th=[14091], 99.95th=[45351], 00:35:41.203 | 99.99th=[47449] 00:35:41.203 bw ( KiB/s): min=33792, max=35072, per=32.62%, avg=34444.80, stdev=375.83, samples=20 00:35:41.203 iops : min= 264, max= 274, avg=269.10, stdev= 2.94, samples=20 00:35:41.203 lat (msec) : 10=4.98%, 20=94.95%, 50=0.07% 00:35:41.203 cpu : usr=95.63%, sys=3.70%, ctx=94, majf=0, minf=160 00:35:41.203 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:41.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:41.203 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:41.203 issued rwts: total=2693,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:41.203 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:41.203 00:35:41.203 Run status group 0 (all jobs): 00:35:41.203 READ: bw=103MiB/s (108MB/s), 33.5MiB/s-36.2MiB/s (35.1MB/s-37.9MB/s), io=1036MiB (1086MB), run=10044-10047msec 00:35:41.203 10:13:49 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:35:41.203 10:13:49 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:35:41.203 10:13:49 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:35:41.203 10:13:49 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:41.203 10:13:49 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:35:41.203 10:13:49 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:41.203 10:13:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.203 10:13:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:41.203 10:13:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.203 10:13:49 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:41.203 10:13:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.203 10:13:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:41.203 10:13:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.203 00:35:41.203 real 0m11.239s 00:35:41.203 user 0m36.491s 00:35:41.203 sys 0m1.438s 00:35:41.203 10:13:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:41.203 10:13:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:41.203 ************************************ 00:35:41.203 END TEST fio_dif_digest 00:35:41.203 ************************************ 00:35:41.203 10:13:49 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:35:41.203 10:13:49 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:35:41.203 10:13:49 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:41.203 10:13:49 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:35:41.203 10:13:49 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:41.203 10:13:49 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:35:41.203 10:13:49 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:41.203 10:13:49 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:41.203 rmmod nvme_tcp 00:35:41.203 rmmod nvme_fabrics 00:35:41.203 rmmod nvme_keyring 00:35:41.203 10:13:49 nvmf_dif 
-- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:41.203 10:13:49 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:35:41.203 10:13:49 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:35:41.203 10:13:49 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 352058 ']' 00:35:41.203 10:13:49 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 352058 00:35:41.203 10:13:49 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 352058 ']' 00:35:41.203 10:13:49 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 352058 00:35:41.203 10:13:49 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:35:41.203 10:13:49 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:41.203 10:13:49 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 352058 00:35:41.203 10:13:49 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:41.203 10:13:49 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:41.203 10:13:49 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 352058' 00:35:41.203 killing process with pid 352058 00:35:41.203 10:13:49 nvmf_dif -- common/autotest_common.sh@973 -- # kill 352058 00:35:41.203 10:13:49 nvmf_dif -- common/autotest_common.sh@978 -- # wait 352058 00:35:41.203 10:13:49 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:35:41.203 10:13:49 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:43.112 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:35:43.372 Waiting for block devices as requested 00:35:43.372 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:43.632 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:43.632 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:43.632 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:43.892 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:43.892 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:43.892 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:44.151 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:44.151 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:44.151 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:44.412 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:44.412 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:44.412 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:44.412 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:44.672 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:44.672 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:44.672 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:44.933 10:13:54 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:44.933 10:13:54 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:44.933 10:13:54 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:35:44.933 10:13:54 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:35:44.933 10:13:54 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:44.933 10:13:54 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:35:44.933 10:13:54 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:44.933 10:13:54 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:44.933 10:13:54 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:44.933 10:13:54 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:44.933 10:13:54 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:46.841 10:13:56 
nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:46.841 00:35:46.841 real 1m16.612s 00:35:46.841 user 7m14.585s 00:35:46.841 sys 0m21.352s 00:35:46.841 10:13:56 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:46.841 10:13:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:46.841 ************************************ 00:35:46.841 END TEST nvmf_dif 00:35:46.841 ************************************ 00:35:46.841 10:13:56 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:46.841 10:13:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:46.841 10:13:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:46.841 10:13:56 -- common/autotest_common.sh@10 -- # set +x 00:35:47.102 ************************************ 00:35:47.102 START TEST nvmf_abort_qd_sizes 00:35:47.102 ************************************ 00:35:47.102 10:13:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:47.102 * Looking for test storage... 00:35:47.102 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:47.102 10:13:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:47.102 10:13:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:35:47.102 10:13:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:47.102 10:13:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:47.102 10:13:56 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:47.102 10:13:56 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:47.102 10:13:56 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:47.102 10:13:56 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:35:47.102 10:13:56 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:35:47.102 10:13:56 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:35:47.102 10:13:56 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:35:47.102 10:13:56 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:35:47.102 10:13:56 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:35:47.102 10:13:56 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:35:47.102 10:13:56 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:47.102 10:13:56 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:35:47.102 10:13:56 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:35:47.102 10:13:56 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:47.102 10:13:56 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:47.102 10:13:56 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:35:47.102 10:13:56 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:35:47.102 10:13:56 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:47.102 10:13:56 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:35:47.102 10:13:56 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:35:47.102 10:13:56 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:35:47.102 10:13:56 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:35:47.102 10:13:56 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:47.102 10:13:56 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:35:47.102 10:13:56 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:35:47.102 10:13:56 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:47.102 10:13:56 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:47.102 10:13:56 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:35:47.102 10:13:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:47.102 10:13:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:47.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:47.102 --rc genhtml_branch_coverage=1 00:35:47.102 --rc genhtml_function_coverage=1 00:35:47.102 --rc genhtml_legend=1 00:35:47.102 --rc geninfo_all_blocks=1 00:35:47.102 --rc geninfo_unexecuted_blocks=1 00:35:47.102 00:35:47.102 ' 00:35:47.102 10:13:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:47.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:47.102 --rc genhtml_branch_coverage=1 00:35:47.102 --rc genhtml_function_coverage=1 00:35:47.102 --rc genhtml_legend=1 00:35:47.102 --rc geninfo_all_blocks=1 00:35:47.102 --rc geninfo_unexecuted_blocks=1 00:35:47.102 00:35:47.102 ' 00:35:47.102 10:13:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:47.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:47.102 --rc genhtml_branch_coverage=1 00:35:47.102 --rc genhtml_function_coverage=1 00:35:47.102 --rc genhtml_legend=1 00:35:47.102 --rc geninfo_all_blocks=1 00:35:47.102 --rc geninfo_unexecuted_blocks=1 00:35:47.102 00:35:47.102 ' 00:35:47.102 10:13:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:47.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:47.102 --rc genhtml_branch_coverage=1 00:35:47.102 --rc genhtml_function_coverage=1 00:35:47.102 --rc genhtml_legend=1 00:35:47.102 --rc geninfo_all_blocks=1 00:35:47.102 --rc geninfo_unexecuted_blocks=1 00:35:47.102 00:35:47.102 ' 00:35:47.102 10:13:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:47.102 10:13:56 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:35:47.102 10:13:56 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:47.102 10:13:56 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:47.102 10:13:56 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:47.102 10:13:56 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:47.102 10:13:56 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:35:47.102 10:13:56 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:47.102 10:13:56 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:47.102 10:13:56 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:47.102 10:13:56 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:47.102 10:13:56 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:47.102 10:13:56 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:35:47.102 10:13:56 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:35:47.102 10:13:56 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:47.102 10:13:56 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:47.103 10:13:56 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:47.103 10:13:56 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:47.103 10:13:56 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:47.103 10:13:56 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:35:47.103 10:13:56 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:47.103 10:13:56 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:47.103 10:13:56 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:47.103 10:13:56 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:47.103 10:13:56 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:47.103 10:13:56 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:47.103 10:13:56 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:35:47.103 10:13:56 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:47.103 10:13:56 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:35:47.103 10:13:56 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:47.103 10:13:56 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:47.103 10:13:56 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:47.103 10:13:56 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:47.103 10:13:56 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:47.103 10:13:56 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:47.103 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:47.103 10:13:56 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:47.103 10:13:56 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:47.103 10:13:56 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:47.103 10:13:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:35:47.103 10:13:56 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:47.103 10:13:56 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:47.103 10:13:56 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:47.103 10:13:56 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:47.103 10:13:56 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:47.103 10:13:56 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:47.103 10:13:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:47.103 10:13:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:47.103 10:13:56 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:47.103 10:13:56 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:47.103 10:13:56 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:35:47.103 10:13:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:53.677 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:53.677 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:35:53.677 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:53.677 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:53.677 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:53.677 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:53.677 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:53.677 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:35:53.677 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:53.677 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:35:53.677 10:14:03 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:35:53.677 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:35:53.677 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:35:53.677 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:35:53.677 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:35:53.677 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:53.677 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:53.677 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:53.677 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:53.677 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:53.677 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:53.677 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:53.677 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:53.677 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:53.677 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:53.677 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:53.677 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:53.677 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:53.677 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:53.677 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:53.677 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:53.677 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:53.677 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:53.677 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:53.677 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:53.677 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:53.677 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:53.677 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:53.677 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:53.677 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:53.677 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:53.677 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:53.677 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:53.677 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:53.677 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:53.677 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:53.678 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:53.678 10:14:03 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:53.678 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:53.678 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:53.678 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:53.678 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:53.678 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:53.678 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:53.678 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:53.678 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:53.678 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:53.678 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:53.678 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:53.678 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:53.678 Found net devices under 0000:af:00.0: cvl_0_0 00:35:53.678 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:53.678 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:53.678 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:53.678 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:53.678 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:53.678 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:53.678 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:53.678 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:53.678 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:53.678 Found net devices under 0000:af:00.1: cvl_0_1 00:35:53.678 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:53.678 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:53.678 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:35:53.678 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:53.678 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:53.678 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:53.678 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:53.678 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:53.678 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:53.678 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:53.678 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:53.678 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:53.678 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:53.678 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:53.678 10:14:03 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:53.678 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:53.678 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:53.678 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:53.678 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:53.678 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:53.678 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:53.678 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:53.678 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:53.678 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:53.678 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:53.678 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:53.678 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:53.678 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:53.938 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:53.938 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:53.938 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:35:53.938 00:35:53.938 --- 10.0.0.2 ping statistics --- 00:35:53.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:53.938 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:35:53.938 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:53.938 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:53.938 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:35:53.938 00:35:53.938 --- 10.0.0.1 ping statistics --- 00:35:53.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:53.938 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:35:53.938 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:53.938 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:35:53.938 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:35:53.938 10:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:57.230 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:35:57.230 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:57.230 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:57.230 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:57.230 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:57.231 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:57.231 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:57.231 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:57.231 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:57.231 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:57.231 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:57.231 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:57.231 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:57.231 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:57.231 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:57.231 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:57.231 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:58.169 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:35:58.169 10:14:07 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:58.169 10:14:07 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:58.169 10:14:07 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:58.169 10:14:07 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:58.169 10:14:07 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:58.169 10:14:07 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:58.169 10:14:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:35:58.169 10:14:07 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:58.169 10:14:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:58.169 10:14:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:58.169 10:14:07 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=369271 00:35:58.169 10:14:07 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 369271 00:35:58.169 10:14:07 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:35:58.169 10:14:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 369271 ']' 00:35:58.169 10:14:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:58.169 10:14:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:58.169 10:14:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:35:58.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:58.169 10:14:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:58.169 10:14:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:58.428 [2024-12-11 10:14:07.767561] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:35:58.428 [2024-12-11 10:14:07.767607] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:58.428 [2024-12-11 10:14:07.849691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:58.428 [2024-12-11 10:14:07.891802] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:58.428 [2024-12-11 10:14:07.891841] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:58.428 [2024-12-11 10:14:07.891849] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:58.428 [2024-12-11 10:14:07.891854] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:58.428 [2024-12-11 10:14:07.891859] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:58.428 [2024-12-11 10:14:07.893264] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:58.428 [2024-12-11 10:14:07.893372] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:35:58.428 [2024-12-11 10:14:07.893483] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:35:58.428 [2024-12-11 10:14:07.893484] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:35:59.362 10:14:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:59.362 10:14:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:35:59.362 10:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:59.362 10:14:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:59.362 10:14:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:59.362 10:14:08 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:59.362 10:14:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:35:59.362 10:14:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:35:59.362 10:14:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:35:59.362 10:14:08 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:35:59.362 10:14:08 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:35:59.362 10:14:08 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 0000:5f:00.0 ]] 00:35:59.362 10:14:08 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:35:59.362 10:14:08 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:35:59.362 10:14:08 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:35:59.362 10:14:08 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 
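The target launch traced above runs nvmf_tgt inside the cvl_0_0_ns_spdk network namespace with core mask 0xf, which is why four reactors come up on cores 0 through 3. A sketch of the start-and-wait sequence, with the launch command verbatim from the trace and a simple RPC poll standing in for the harness's waitforlisten helper:

# start the target in the test netns; -m 0xf pins reactors to cores 0-3
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
nvmfpid=$!
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# block until the app answers on its RPC socket before issuing configuration calls
until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done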
00:35:59.362 10:14:08 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:35:59.362 10:14:08 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:35:59.362 10:14:08 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:35:59.362 10:14:08 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5f:00.0 ]] 00:35:59.362 10:14:08 nvmf_abort_qd_sizes -- scripts/common.sh@324 -- # continue 00:35:59.362 10:14:08 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:35:59.362 10:14:08 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:35:59.362 10:14:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:35:59.362 10:14:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:35:59.363 10:14:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:35:59.363 10:14:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:59.363 10:14:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:59.363 10:14:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:59.363 ************************************ 00:35:59.363 START TEST spdk_target_abort 00:35:59.363 ************************************ 00:35:59.363 10:14:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:35:59.363 10:14:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:35:59.363 10:14:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:35:59.363 10:14:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.363 10:14:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:02.647 spdk_targetn1 00:36:02.647 10:14:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.647 10:14:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:02.647 10:14:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.647 10:14:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:02.647 [2024-12-11 10:14:11.524629] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:02.647 10:14:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.647 10:14:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:36:02.647 10:14:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.647 10:14:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:02.647 10:14:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.647 10:14:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:36:02.647 10:14:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.647 10:14:11 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:02.647 10:14:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.647 10:14:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:36:02.647 10:14:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.647 10:14:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:02.647 [2024-12-11 10:14:11.564907] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:02.647 10:14:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.647 10:14:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:36:02.647 10:14:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:02.647 10:14:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:02.647 10:14:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:36:02.647 10:14:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:02.647 10:14:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:02.647 10:14:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:02.647 10:14:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:02.647 10:14:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:02.647 10:14:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:02.647 10:14:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:02.647 10:14:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:02.647 10:14:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:02.647 10:14:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:02.647 10:14:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:36:02.647 10:14:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:02.647 10:14:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:02.647 10:14:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:02.647 10:14:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:02.647 10:14:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:02.647 10:14:11 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:05.176 Initializing NVMe Controllers 00:36:05.176 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:05.176 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:05.176 Initialization complete. Launching workers. 00:36:05.176 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 15342, failed: 0 00:36:05.176 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1367, failed to submit 13975 00:36:05.176 success 715, unsuccessful 652, failed 0 00:36:05.176 10:14:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:05.176 10:14:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:09.361 Initializing NVMe Controllers 00:36:09.361 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:09.361 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:09.361 Initialization complete. Launching workers. 00:36:09.361 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8854, failed: 0 00:36:09.361 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1229, failed to submit 7625 00:36:09.361 success 319, unsuccessful 910, failed 0 00:36:09.361 10:14:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:09.361 10:14:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:11.898 Initializing NVMe Controllers 00:36:11.898 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:11.898 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:11.898 Initialization complete. Launching workers. 
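Reading the three abort passes (the -q 64 results follow just below): each pass drives 4 KiB mixed reads and writes (-w rw -M 50 -o 4096) at the given queue depth while racing abort commands against outstanding I/O; roughly, "success" counts aborts that landed while the target command was still in flight, and "unsuccessful" counts aborts that arrived after it had already completed. The loop behind the passes, reconstructed from the trace (paths and arguments verbatim):

qds=(4 24 64)
target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
for qd in "${qds[@]}"; do
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
        -q "$qd" -w rw -M 50 -o 4096 -r "$target"
done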
00:36:11.898 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38681, failed: 0 00:36:11.898 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2826, failed to submit 35855 00:36:11.898 success 557, unsuccessful 2269, failed 0 00:36:11.898 10:14:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:36:11.898 10:14:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.898 10:14:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:11.898 10:14:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.898 10:14:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:36:11.898 10:14:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.898 10:14:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:13.395 10:14:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.395 10:14:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 369271 00:36:13.395 10:14:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 369271 ']' 00:36:13.395 10:14:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 369271 00:36:13.395 10:14:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:36:13.395 10:14:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:13.395 10:14:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 369271 00:36:13.395 10:14:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:13.395 10:14:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:13.395 10:14:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 369271' 00:36:13.395 killing process with pid 369271 00:36:13.395 10:14:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 369271 00:36:13.395 10:14:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 369271 00:36:13.395 00:36:13.395 real 0m14.144s 00:36:13.395 user 0m56.337s 00:36:13.395 sys 0m2.639s 00:36:13.395 10:14:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:13.396 10:14:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:13.396 ************************************ 00:36:13.396 END TEST spdk_target_abort 00:36:13.396 ************************************ 00:36:13.396 10:14:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:36:13.396 10:14:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:13.396 10:14:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:13.396 10:14:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:13.396 ************************************ 00:36:13.396 START TEST kernel_target_abort 00:36:13.396 
************************************ 00:36:13.396 10:14:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:36:13.396 10:14:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:36:13.396 10:14:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:36:13.396 10:14:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:13.396 10:14:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:13.396 10:14:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:13.396 10:14:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:13.396 10:14:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:13.396 10:14:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:13.396 10:14:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:13.396 10:14:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:13.396 10:14:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:13.396 10:14:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:13.396 10:14:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:13.396 10:14:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:36:13.396 10:14:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:13.396 10:14:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:13.396 10:14:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:13.396 10:14:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:36:13.396 10:14:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:36:13.396 10:14:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:36:13.396 10:14:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:13.396 10:14:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:16.685 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:36:16.685 Waiting for block devices as requested 00:36:16.685 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:36:16.944 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:16.944 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:16.944 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:17.203 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:17.203 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:17.203 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:17.462 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:17.462 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:17.462 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:17.462 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:17.722 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:17.722 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:17.722 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:17.981 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:17.981 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:17.981 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:18.241 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:36:18.241 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:18.241 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:36:18.241 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:36:18.241 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:18.241 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:36:18.241 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:36:18.241 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:36:18.241 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:18.241 No valid GPT data, bailing 00:36:18.241 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:18.241 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:36:18.241 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:36:18.241 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:36:18.241 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:36:18.241 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:36:18.241 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:36:18.241 10:14:27 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:36:18.241 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:36:18.241 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:36:18.241 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:36:18.241 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:36:18.241 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:36:18.241 No valid GPT data, bailing 00:36:18.241 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:36:18.241 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:36:18.241 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:36:18.241 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:36:18.241 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:36:18.241 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n2 ]] 00:36:18.241 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n2 00:36:18.241 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:36:18.241 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:36:18.241 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ host-managed != none ]] 00:36:18.241 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # continue 00:36:18.241 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:36:18.241 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:18.241 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:18.241 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:18.241 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:18.241 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:36:18.241 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:36:18.241 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:36:18.241 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:36:18.241 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:36:18.241 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:36:18.241 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:36:18.241 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln 
-s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:18.241 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:36:18.241 00:36:18.241 Discovery Log Number of Records 2, Generation counter 2 00:36:18.241 =====Discovery Log Entry 0====== 00:36:18.241 trtype: tcp 00:36:18.241 adrfam: ipv4 00:36:18.241 subtype: current discovery subsystem 00:36:18.241 treq: not specified, sq flow control disable supported 00:36:18.241 portid: 1 00:36:18.241 trsvcid: 4420 00:36:18.241 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:18.241 traddr: 10.0.0.1 00:36:18.241 eflags: none 00:36:18.241 sectype: none 00:36:18.241 =====Discovery Log Entry 1====== 00:36:18.241 trtype: tcp 00:36:18.241 adrfam: ipv4 00:36:18.241 subtype: nvme subsystem 00:36:18.241 treq: not specified, sq flow control disable supported 00:36:18.241 portid: 1 00:36:18.241 trsvcid: 4420 00:36:18.241 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:18.241 traddr: 10.0.0.1 00:36:18.241 eflags: none 00:36:18.241 sectype: none 00:36:18.241 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:36:18.241 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:18.241 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:18.241 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:36:18.241 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:18.241 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:18.241 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:18.241 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:18.242 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:18.242 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:18.242 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:18.242 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:18.242 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:18.242 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:18.242 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:36:18.242 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:18.242 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:36:18.242 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 
-- # for r in trtype adrfam traddr trsvcid subnqn 00:36:18.242 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:18.242 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:18.242 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:21.532 Initializing NVMe Controllers 00:36:21.532 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:21.532 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:21.533 Initialization complete. Launching workers. 00:36:21.533 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 83639, failed: 0 00:36:21.533 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 83639, failed to submit 0 00:36:21.533 success 0, unsuccessful 83639, failed 0 00:36:21.533 10:14:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:21.533 10:14:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:24.821 Initializing NVMe Controllers 00:36:24.821 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:24.821 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:24.821 Initialization complete. Launching workers. 00:36:24.821 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 138673, failed: 0 00:36:24.821 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32278, failed to submit 106395 00:36:24.821 success 0, unsuccessful 32278, failed 0 00:36:24.821 10:14:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:24.821 10:14:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:28.109 Initializing NVMe Controllers 00:36:28.109 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:28.109 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:28.109 Initialization complete. Launching workers. 
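The kernel target being aborted here (the -q 64 results follow below) was provisioned through nvmet configfs by the mkdir/echo/ln sequence traced earlier. xtrace does not print redirection targets, so the attribute paths in this sketch are the standard nvmet configfs names rather than literal trace output:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # assumed target of the first echo
echo 1 > "$subsys/attr_allow_any_host"
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"         # backing device picked by the GPT/zoned scan
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp > "$nvmet/ports/1/addr_trtype"
echo 4420 > "$nvmet/ports/1/addr_trsvcid"
echo ipv4 > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"                   # expose the subsystem on port 1

clean_kernel_target, traced after the last pass, reverses this: it unlinks the port symlink, removes the namespace, port, and subsystem directories, and unloads nvmet_tcp and nvmet.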
00:36:28.109 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 130181, failed: 0 00:36:28.109 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32562, failed to submit 97619 00:36:28.109 success 0, unsuccessful 32562, failed 0 00:36:28.109 10:14:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:36:28.109 10:14:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:28.109 10:14:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:36:28.109 10:14:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:28.109 10:14:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:28.109 10:14:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:28.109 10:14:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:28.109 10:14:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:36:28.109 10:14:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:36:28.109 10:14:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:30.646 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:36:30.905 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:30.905 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:30.905 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:31.165 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:31.165 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:31.165 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:31.165 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:36:31.165 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:31.165 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:31.165 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:31.165 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:31.165 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:31.165 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:31.165 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:31.165 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:36:31.165 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:32.103 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:36:32.103 00:36:32.103 real 0m18.666s 00:36:32.103 user 0m9.046s 00:36:32.103 sys 0m5.950s 00:36:32.103 10:14:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:32.103 10:14:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:32.103 ************************************ 00:36:32.103 END TEST kernel_target_abort 00:36:32.103 ************************************ 00:36:32.103 10:14:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:36:32.103 10:14:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:36:32.103 10:14:41 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:32.103 10:14:41 nvmf_abort_qd_sizes -- 
nvmf/common.sh@121 -- # sync 00:36:32.103 10:14:41 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:32.103 10:14:41 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:36:32.103 10:14:41 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:32.103 10:14:41 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:32.103 rmmod nvme_tcp 00:36:32.104 rmmod nvme_fabrics 00:36:32.104 rmmod nvme_keyring 00:36:32.104 10:14:41 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:32.104 10:14:41 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:36:32.104 10:14:41 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:36:32.104 10:14:41 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 369271 ']' 00:36:32.104 10:14:41 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 369271 00:36:32.104 10:14:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 369271 ']' 00:36:32.104 10:14:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 369271 00:36:32.104 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (369271) - No such process 00:36:32.363 10:14:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 369271 is not found' 00:36:32.363 Process with pid 369271 is not found 00:36:32.363 10:14:41 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:36:32.363 10:14:41 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:35.663 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:36:35.663 Waiting for block devices as requested 00:36:35.663 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:36:35.663 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:35.663 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:35.663 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:35.922 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:35.922 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:35.922 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:36.181 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:36.181 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:36.181 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:36.440 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:36.440 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:36.440 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:36.440 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:36.699 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:36.699 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:36.699 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:36.959 10:14:46 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:36.959 10:14:46 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:36.959 10:14:46 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:36:36.959 10:14:46 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:36:36.959 10:14:46 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:36.959 10:14:46 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:36:36.959 10:14:46 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:36.959 10:14:46 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:36.959 10:14:46 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:36:36.959 10:14:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:36.959 10:14:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:38.865 10:14:48 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:38.865 00:36:38.865 real 0m51.962s 00:36:38.865 user 1m10.585s 00:36:38.865 sys 0m18.538s 00:36:38.865 10:14:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:38.865 10:14:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:38.865 ************************************ 00:36:38.865 END TEST nvmf_abort_qd_sizes 00:36:38.865 ************************************ 00:36:38.865 10:14:48 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:38.866 10:14:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:38.866 10:14:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:38.866 10:14:48 -- common/autotest_common.sh@10 -- # set +x 00:36:39.125 ************************************ 00:36:39.125 START TEST keyring_file 00:36:39.125 ************************************ 00:36:39.125 10:14:48 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:39.125 * Looking for test storage... 00:36:39.125 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:39.125 10:14:48 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:39.125 10:14:48 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:36:39.125 10:14:48 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:39.125 10:14:48 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:39.125 10:14:48 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:39.125 10:14:48 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:39.125 10:14:48 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:39.125 10:14:48 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:36:39.125 10:14:48 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:36:39.125 10:14:48 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:36:39.125 10:14:48 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:36:39.125 10:14:48 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:36:39.125 10:14:48 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:36:39.125 10:14:48 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:36:39.125 10:14:48 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:39.125 10:14:48 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:36:39.125 10:14:48 keyring_file -- scripts/common.sh@345 -- # : 1 00:36:39.125 10:14:48 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:39.126 10:14:48 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:39.126 10:14:48 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:36:39.126 10:14:48 keyring_file -- scripts/common.sh@353 -- # local d=1 00:36:39.126 10:14:48 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:39.126 10:14:48 keyring_file -- scripts/common.sh@355 -- # echo 1 00:36:39.126 10:14:48 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:36:39.126 10:14:48 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:36:39.126 10:14:48 keyring_file -- scripts/common.sh@353 -- # local d=2 00:36:39.126 10:14:48 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:39.126 10:14:48 keyring_file -- scripts/common.sh@355 -- # echo 2 00:36:39.126 10:14:48 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:36:39.126 10:14:48 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:39.126 10:14:48 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:39.126 10:14:48 keyring_file -- scripts/common.sh@368 -- # return 0 00:36:39.126 10:14:48 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:39.126 10:14:48 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:39.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:39.126 --rc genhtml_branch_coverage=1 00:36:39.126 --rc genhtml_function_coverage=1 00:36:39.126 --rc genhtml_legend=1 00:36:39.126 --rc geninfo_all_blocks=1 00:36:39.126 --rc geninfo_unexecuted_blocks=1 00:36:39.126 00:36:39.126 ' 00:36:39.126 10:14:48 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:39.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:39.126 --rc genhtml_branch_coverage=1 00:36:39.126 --rc genhtml_function_coverage=1 00:36:39.126 --rc genhtml_legend=1 00:36:39.126 --rc geninfo_all_blocks=1 00:36:39.126 --rc geninfo_unexecuted_blocks=1 00:36:39.126 00:36:39.126 ' 00:36:39.126 10:14:48 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:39.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:39.126 --rc genhtml_branch_coverage=1 00:36:39.126 --rc genhtml_function_coverage=1 00:36:39.126 --rc genhtml_legend=1 00:36:39.126 --rc geninfo_all_blocks=1 00:36:39.126 --rc geninfo_unexecuted_blocks=1 00:36:39.126 00:36:39.126 ' 00:36:39.126 10:14:48 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:39.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:39.126 --rc genhtml_branch_coverage=1 00:36:39.126 --rc genhtml_function_coverage=1 00:36:39.126 --rc genhtml_legend=1 00:36:39.126 --rc geninfo_all_blocks=1 00:36:39.126 --rc geninfo_unexecuted_blocks=1 00:36:39.126 00:36:39.126 ' 00:36:39.126 10:14:48 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:39.126 10:14:48 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:39.126 10:14:48 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:36:39.126 10:14:48 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:39.126 10:14:48 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:39.126 10:14:48 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:39.126 10:14:48 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:39.126 10:14:48 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:39.126 
10:14:48 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:39.126 10:14:48 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:39.126 10:14:48 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:39.126 10:14:48 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:39.126 10:14:48 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:39.126 10:14:48 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:36:39.126 10:14:48 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:36:39.126 10:14:48 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:39.126 10:14:48 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:39.126 10:14:48 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:39.126 10:14:48 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:39.126 10:14:48 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:39.126 10:14:48 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:36:39.126 10:14:48 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:39.126 10:14:48 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:39.126 10:14:48 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:39.126 10:14:48 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:39.126 10:14:48 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:39.126 10:14:48 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:39.126 10:14:48 keyring_file -- paths/export.sh@5 -- # export PATH 00:36:39.126 10:14:48 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:39.126 10:14:48 keyring_file -- nvmf/common.sh@51 -- # : 0 
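The key-prep steps just below turn each raw hex key into an NVMe/TCP TLS interchange PSK through a short inline python (format_interchange_psk calling format_key NVMeTLSkey-1 ... 0). A sketch of that encoding, assuming the TP 8011-style interchange layout of base64 over the key bytes plus a little-endian CRC32, with hash label 00 because digest=0 requests no HKDF hash; this is an illustrative reimplementation, not the harness's own code:

python3 - <<'EOF'
# NVMeTLSkey-1:<hash>:<base64(key || crc32(key))>: (assumed layout)
import base64, struct, zlib
key = bytes.fromhex("00112233445566778899aabbccddeeff")
crc = struct.pack("<I", zlib.crc32(key))
print("NVMeTLSkey-1:00:%s:" % base64.b64encode(key + crc).decode())
EOF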
00:36:39.126 10:14:48 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:39.126 10:14:48 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:39.126 10:14:48 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:39.126 10:14:48 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:39.126 10:14:48 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:39.126 10:14:48 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:39.126 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:39.126 10:14:48 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:39.126 10:14:48 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:39.126 10:14:48 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:39.126 10:14:48 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:39.126 10:14:48 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:39.126 10:14:48 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:39.126 10:14:48 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:36:39.126 10:14:48 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:36:39.126 10:14:48 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:36:39.126 10:14:48 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:39.126 10:14:48 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:39.126 10:14:48 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:39.126 10:14:48 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:39.126 10:14:48 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:39.126 10:14:48 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:39.126 10:14:48 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.rkImIhxr0t 00:36:39.126 10:14:48 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:39.126 10:14:48 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:39.126 10:14:48 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:36:39.126 10:14:48 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:39.126 10:14:48 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:36:39.126 10:14:48 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:36:39.126 10:14:48 keyring_file -- nvmf/common.sh@733 -- # python - 00:36:39.386 10:14:48 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.rkImIhxr0t 00:36:39.386 10:14:48 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.rkImIhxr0t 00:36:39.386 10:14:48 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.rkImIhxr0t 00:36:39.386 10:14:48 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:36:39.386 10:14:48 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:39.386 10:14:48 keyring_file -- keyring/common.sh@17 -- # name=key1 00:36:39.386 10:14:48 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:39.386 10:14:48 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:39.386 10:14:48 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:39.386 10:14:48 keyring_file -- keyring/common.sh@18 -- 
# path=/tmp/tmp.J8uwx3NVKk 00:36:39.386 10:14:48 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:39.386 10:14:48 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:39.386 10:14:48 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:36:39.386 10:14:48 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:39.386 10:14:48 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:36:39.386 10:14:48 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:36:39.386 10:14:48 keyring_file -- nvmf/common.sh@733 -- # python - 00:36:39.386 10:14:48 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.J8uwx3NVKk 00:36:39.386 10:14:48 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.J8uwx3NVKk 00:36:39.386 10:14:48 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.J8uwx3NVKk 00:36:39.386 10:14:48 keyring_file -- keyring/file.sh@30 -- # tgtpid=378561 00:36:39.386 10:14:48 keyring_file -- keyring/file.sh@32 -- # waitforlisten 378561 00:36:39.386 10:14:48 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:39.386 10:14:48 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 378561 ']' 00:36:39.386 10:14:48 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:39.386 10:14:48 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:39.386 10:14:48 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:39.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:39.386 10:14:48 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:39.386 10:14:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:39.386 [2024-12-11 10:14:48.826107] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
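Both daemons now come up: spdk_tgt (pid 378561) serves the test subsystem on 127.0.0.1:4420, and the bdevperf instance started further below takes RPCs on /var/tmp/bperf.sock. The keyring flow the test then drives, with every call verbatim from the trace that follows:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$rpc" -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.rkImIhxr0t
"$rpc" -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.J8uwx3NVKk
"$rpc" -s /var/tmp/bperf.sock keyring_get_keys    # paths and refcnts are asserted on this output
"$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0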
00:36:39.386 [2024-12-11 10:14:48.826158] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid378561 ] 00:36:39.386 [2024-12-11 10:14:48.904873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:39.386 [2024-12-11 10:14:48.945823] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:36:39.645 10:14:49 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:39.645 10:14:49 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:36:39.645 10:14:49 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:36:39.645 10:14:49 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.645 10:14:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:39.645 [2024-12-11 10:14:49.169663] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:39.645 null0 00:36:39.645 [2024-12-11 10:14:49.201711] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:39.645 [2024-12-11 10:14:49.201980] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:39.904 10:14:49 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.904 10:14:49 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:39.904 10:14:49 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:39.904 10:14:49 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:39.904 10:14:49 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:39.904 10:14:49 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:39.904 10:14:49 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:39.904 10:14:49 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:39.904 10:14:49 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:39.904 10:14:49 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.904 10:14:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:39.904 [2024-12-11 10:14:49.229782] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:36:39.904 request: 00:36:39.904 { 00:36:39.904 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:36:39.904 "secure_channel": false, 00:36:39.904 "listen_address": { 00:36:39.904 "trtype": "tcp", 00:36:39.904 "traddr": "127.0.0.1", 00:36:39.904 "trsvcid": "4420" 00:36:39.904 }, 00:36:39.904 "method": "nvmf_subsystem_add_listener", 00:36:39.904 "req_id": 1 00:36:39.904 } 00:36:39.904 Got JSON-RPC error response 00:36:39.904 response: 00:36:39.904 { 00:36:39.904 "code": -32602, 00:36:39.904 "message": "Invalid parameters" 00:36:39.904 } 00:36:39.904 10:14:49 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:39.904 10:14:49 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:39.904 10:14:49 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:39.904 10:14:49 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:39.904 10:14:49 keyring_file -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:39.904 10:14:49 keyring_file -- keyring/file.sh@47 -- # bperfpid=378774 00:36:39.904 10:14:49 keyring_file -- keyring/file.sh@49 -- # waitforlisten 378774 /var/tmp/bperf.sock 00:36:39.904 10:14:49 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:36:39.904 10:14:49 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 378774 ']' 00:36:39.905 10:14:49 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:39.905 10:14:49 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:39.905 10:14:49 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:39.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:39.905 10:14:49 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:39.905 10:14:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:39.905 [2024-12-11 10:14:49.283054] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:36:39.905 [2024-12-11 10:14:49.283096] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid378774 ] 00:36:39.905 [2024-12-11 10:14:49.361319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:39.905 [2024-12-11 10:14:49.401896] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:36:40.164 10:14:49 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:40.164 10:14:49 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:36:40.164 10:14:49 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.rkImIhxr0t 00:36:40.164 10:14:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.rkImIhxr0t 00:36:40.164 10:14:49 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.J8uwx3NVKk 00:36:40.164 10:14:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.J8uwx3NVKk 00:36:40.423 10:14:49 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:36:40.423 10:14:49 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:36:40.423 10:14:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:40.423 10:14:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:40.423 10:14:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:40.682 10:14:50 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.rkImIhxr0t == \/\t\m\p\/\t\m\p\.\r\k\I\m\I\h\x\r\0\t ]] 00:36:40.682 10:14:50 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:36:40.682 10:14:50 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:36:40.682 10:14:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:40.682 10:14:50 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:40.682 10:14:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:40.941 10:14:50 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.J8uwx3NVKk == \/\t\m\p\/\t\m\p\.\J\8\u\w\x\3\N\V\K\k ]] 00:36:40.942 10:14:50 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:36:40.942 10:14:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:40.942 10:14:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:40.942 10:14:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:40.942 10:14:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:40.942 10:14:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:40.942 10:14:50 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:36:40.942 10:14:50 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:36:40.942 10:14:50 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:40.942 10:14:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:40.942 10:14:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:40.942 10:14:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:40.942 10:14:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:41.200 10:14:50 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:36:41.200 10:14:50 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:41.200 10:14:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:41.459 [2024-12-11 10:14:50.829155] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:41.459 nvme0n1 00:36:41.459 10:14:50 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:36:41.459 10:14:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:41.459 10:14:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:41.459 10:14:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:41.459 10:14:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:41.459 10:14:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:41.718 10:14:51 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:36:41.718 10:14:51 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:36:41.718 10:14:51 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:41.718 10:14:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:41.718 10:14:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:41.718 10:14:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:41.718 10:14:51 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:41.977 10:14:51 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:36:41.977 10:14:51 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:41.977 Running I/O for 1 seconds... 00:36:42.915 19400.00 IOPS, 75.78 MiB/s 00:36:42.915 Latency(us) 00:36:42.915 [2024-12-11T09:14:52.490Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:42.915 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:36:42.915 nvme0n1 : 1.00 19444.74 75.96 0.00 0.00 6570.17 4275.44 17850.76 00:36:42.915 [2024-12-11T09:14:52.490Z] =================================================================================================================== 00:36:42.915 [2024-12-11T09:14:52.490Z] Total : 19444.74 75.96 0.00 0.00 6570.17 4275.44 17850.76 00:36:42.915 { 00:36:42.915 "results": [ 00:36:42.915 { 00:36:42.915 "job": "nvme0n1", 00:36:42.915 "core_mask": "0x2", 00:36:42.915 "workload": "randrw", 00:36:42.915 "percentage": 50, 00:36:42.915 "status": "finished", 00:36:42.915 "queue_depth": 128, 00:36:42.915 "io_size": 4096, 00:36:42.915 "runtime": 1.004282, 00:36:42.915 "iops": 19444.73763345355, 00:36:42.915 "mibps": 75.95600638067793, 00:36:42.915 "io_failed": 0, 00:36:42.915 "io_timeout": 0, 00:36:42.915 "avg_latency_us": 6570.169059518932, 00:36:42.915 "min_latency_us": 4275.443809523809, 00:36:42.915 "max_latency_us": 17850.758095238096 00:36:42.915 } 00:36:42.915 ], 00:36:42.915 "core_count": 1 00:36:42.915 } 00:36:42.915 10:14:52 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:42.915 10:14:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:43.175 10:14:52 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:36:43.175 10:14:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:43.175 10:14:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:43.175 10:14:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:43.175 10:14:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:43.175 10:14:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:43.434 10:14:52 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:36:43.434 10:14:52 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:36:43.434 10:14:52 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:43.434 10:14:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:43.434 10:14:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:43.434 10:14:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:43.434 10:14:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:43.693 10:14:53 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:36:43.693 10:14:53 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:43.693 
10:14:53 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:43.693 10:14:53 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:43.693 10:14:53 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:43.693 10:14:53 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:43.693 10:14:53 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:43.693 10:14:53 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:43.693 10:14:53 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:43.693 10:14:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:43.693 [2024-12-11 10:14:53.193734] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:43.693 [2024-12-11 10:14:53.194565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd007d0 (107): Transport endpoint is not connected 00:36:43.693 [2024-12-11 10:14:53.195560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd007d0 (9): Bad file descriptor 00:36:43.693 [2024-12-11 10:14:53.196562] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:36:43.693 [2024-12-11 10:14:53.196573] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:43.693 [2024-12-11 10:14:53.196581] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:36:43.693 [2024-12-11 10:14:53.196589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
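The NOT wrapper in this step asserts that the RPC exits non-zero: attaching with key1, presumably because the target side was set up for the PSK behind key0, is expected to fail, and it surfaces here as a dropped socket (errno 107) followed by the JSON-RPC -5 "Input/output error" reply shown below. A rough equivalent of the same negative check outside the harness, reusing the socket path and arguments from this run, would be:

# expect failure: mismatched PSK for the configured listener
if scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1; then
    echo "unexpected success with mismatched PSK" >&2
    exit 1
fi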
00:36:43.693 request: 00:36:43.693 { 00:36:43.693 "name": "nvme0", 00:36:43.693 "trtype": "tcp", 00:36:43.693 "traddr": "127.0.0.1", 00:36:43.693 "adrfam": "ipv4", 00:36:43.693 "trsvcid": "4420", 00:36:43.693 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:43.693 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:43.693 "prchk_reftag": false, 00:36:43.693 "prchk_guard": false, 00:36:43.693 "hdgst": false, 00:36:43.693 "ddgst": false, 00:36:43.693 "psk": "key1", 00:36:43.693 "allow_unrecognized_csi": false, 00:36:43.693 "method": "bdev_nvme_attach_controller", 00:36:43.693 "req_id": 1 00:36:43.693 } 00:36:43.693 Got JSON-RPC error response 00:36:43.693 response: 00:36:43.693 { 00:36:43.693 "code": -5, 00:36:43.693 "message": "Input/output error" 00:36:43.693 } 00:36:43.693 10:14:53 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:43.693 10:14:53 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:43.693 10:14:53 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:43.693 10:14:53 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:43.693 10:14:53 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:36:43.693 10:14:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:43.693 10:14:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:43.693 10:14:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:43.693 10:14:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:43.693 10:14:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:43.952 10:14:53 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:36:43.952 10:14:53 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:36:43.952 10:14:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:43.952 10:14:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:43.952 10:14:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:43.953 10:14:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:43.953 10:14:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:44.211 10:14:53 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:36:44.211 10:14:53 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:36:44.211 10:14:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:44.471 10:14:53 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:36:44.471 10:14:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:36:44.471 10:14:54 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:36:44.471 10:14:54 keyring_file -- keyring/file.sh@78 -- # jq length 00:36:44.471 10:14:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:44.730 10:14:54 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:36:44.730 10:14:54 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.rkImIhxr0t 00:36:44.730 10:14:54 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.rkImIhxr0t 00:36:44.730 10:14:54 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:44.730 10:14:54 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.rkImIhxr0t 00:36:44.730 10:14:54 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:44.730 10:14:54 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:44.730 10:14:54 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:44.730 10:14:54 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:44.730 10:14:54 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.rkImIhxr0t 00:36:44.730 10:14:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.rkImIhxr0t 00:36:44.988 [2024-12-11 10:14:54.383769] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.rkImIhxr0t': 0100660 00:36:44.988 [2024-12-11 10:14:54.383793] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:36:44.988 request: 00:36:44.988 { 00:36:44.988 "name": "key0", 00:36:44.988 "path": "/tmp/tmp.rkImIhxr0t", 00:36:44.988 "method": "keyring_file_add_key", 00:36:44.988 "req_id": 1 00:36:44.988 } 00:36:44.988 Got JSON-RPC error response 00:36:44.988 response: 00:36:44.988 { 00:36:44.988 "code": -1, 00:36:44.988 "message": "Operation not permitted" 00:36:44.988 } 00:36:44.988 10:14:54 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:44.988 10:14:54 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:44.988 10:14:54 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:44.988 10:14:54 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:44.988 10:14:54 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.rkImIhxr0t 00:36:44.988 10:14:54 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.rkImIhxr0t 00:36:44.988 10:14:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.rkImIhxr0t 00:36:45.247 10:14:54 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.rkImIhxr0t 00:36:45.247 10:14:54 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:36:45.247 10:14:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:45.247 10:14:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:45.247 10:14:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:45.247 10:14:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:45.247 10:14:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:45.247 10:14:54 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:36:45.247 10:14:54 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:45.247 10:14:54 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:45.247 10:14:54 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:45.247 10:14:54 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:45.247 10:14:54 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:45.247 10:14:54 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:45.247 10:14:54 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:45.247 10:14:54 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:45.247 10:14:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:45.506 [2024-12-11 10:14:54.969315] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.rkImIhxr0t': No such file or directory 00:36:45.506 [2024-12-11 10:14:54.969335] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:36:45.506 [2024-12-11 10:14:54.969349] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:36:45.506 [2024-12-11 10:14:54.969356] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:36:45.506 [2024-12-11 10:14:54.969363] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:45.506 [2024-12-11 10:14:54.969369] bdev_nvme.c:6801:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:36:45.506 request: 00:36:45.506 { 00:36:45.506 "name": "nvme0", 00:36:45.506 "trtype": "tcp", 00:36:45.506 "traddr": "127.0.0.1", 00:36:45.506 "adrfam": "ipv4", 00:36:45.506 "trsvcid": "4420", 00:36:45.506 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:45.506 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:45.506 "prchk_reftag": false, 00:36:45.506 "prchk_guard": false, 00:36:45.506 "hdgst": false, 00:36:45.506 "ddgst": false, 00:36:45.506 "psk": "key0", 00:36:45.506 "allow_unrecognized_csi": false, 00:36:45.506 "method": "bdev_nvme_attach_controller", 00:36:45.506 "req_id": 1 00:36:45.506 } 00:36:45.506 Got JSON-RPC error response 00:36:45.506 response: 00:36:45.506 { 00:36:45.506 "code": -19, 00:36:45.506 "message": "No such device" 00:36:45.506 } 00:36:45.506 10:14:54 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:45.506 10:14:54 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:45.506 10:14:54 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:45.506 10:14:54 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:45.506 10:14:54 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:36:45.506 10:14:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:45.765 10:14:55 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:45.765 10:14:55 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:36:45.765 10:14:55 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:45.765 10:14:55 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:45.765 10:14:55 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:45.765 10:14:55 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:45.765 10:14:55 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.qI00dRIJKo 00:36:45.765 10:14:55 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:45.765 10:14:55 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:45.765 10:14:55 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:36:45.765 10:14:55 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:45.765 10:14:55 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:36:45.765 10:14:55 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:36:45.765 10:14:55 keyring_file -- nvmf/common.sh@733 -- # python - 00:36:45.765 10:14:55 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.qI00dRIJKo 00:36:45.765 10:14:55 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.qI00dRIJKo 00:36:45.765 10:14:55 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.qI00dRIJKo 00:36:45.765 10:14:55 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.qI00dRIJKo 00:36:45.765 10:14:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.qI00dRIJKo 00:36:46.024 10:14:55 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:46.024 10:14:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:46.283 nvme0n1 00:36:46.283 10:14:55 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:36:46.283 10:14:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:46.283 10:14:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:46.283 10:14:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:46.283 10:14:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:46.283 10:14:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:46.283 10:14:55 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:36:46.283 10:14:55 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:36:46.283 10:14:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:46.542 10:14:56 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:36:46.542 10:14:56 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:36:46.542 10:14:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:46.542 10:14:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:46.542 10:14:56 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:46.801 10:14:56 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:36:46.801 10:14:56 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:36:46.802 10:14:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:46.802 10:14:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:46.802 10:14:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:46.802 10:14:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:46.802 10:14:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:47.060 10:14:56 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:36:47.060 10:14:56 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:47.060 10:14:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:47.319 10:14:56 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:36:47.319 10:14:56 keyring_file -- keyring/file.sh@105 -- # jq length 00:36:47.319 10:14:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:47.319 10:14:56 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:36:47.319 10:14:56 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.qI00dRIJKo 00:36:47.319 10:14:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.qI00dRIJKo 00:36:47.578 10:14:57 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.J8uwx3NVKk 00:36:47.578 10:14:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.J8uwx3NVKk 00:36:47.836 10:14:57 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:47.836 10:14:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:48.094 nvme0n1 00:36:48.094 10:14:57 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:36:48.095 10:14:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:36:48.354 10:14:57 keyring_file -- keyring/file.sh@113 -- # config='{ 00:36:48.354 "subsystems": [ 00:36:48.354 { 00:36:48.354 "subsystem": "keyring", 00:36:48.354 "config": [ 00:36:48.354 { 00:36:48.354 "method": "keyring_file_add_key", 00:36:48.354 "params": { 00:36:48.354 "name": "key0", 00:36:48.354 "path": "/tmp/tmp.qI00dRIJKo" 00:36:48.354 } 00:36:48.354 }, 00:36:48.354 { 00:36:48.354 "method": "keyring_file_add_key", 00:36:48.354 "params": { 00:36:48.354 "name": "key1", 00:36:48.354 "path": "/tmp/tmp.J8uwx3NVKk" 00:36:48.354 } 00:36:48.354 } 00:36:48.354 ] 00:36:48.354 
}, 00:36:48.354 { 00:36:48.354 "subsystem": "iobuf", 00:36:48.354 "config": [ 00:36:48.354 { 00:36:48.354 "method": "iobuf_set_options", 00:36:48.354 "params": { 00:36:48.354 "small_pool_count": 8192, 00:36:48.354 "large_pool_count": 1024, 00:36:48.354 "small_bufsize": 8192, 00:36:48.354 "large_bufsize": 135168, 00:36:48.354 "enable_numa": false 00:36:48.354 } 00:36:48.354 } 00:36:48.354 ] 00:36:48.354 }, 00:36:48.354 { 00:36:48.354 "subsystem": "sock", 00:36:48.354 "config": [ 00:36:48.354 { 00:36:48.354 "method": "sock_set_default_impl", 00:36:48.354 "params": { 00:36:48.354 "impl_name": "posix" 00:36:48.354 } 00:36:48.354 }, 00:36:48.354 { 00:36:48.354 "method": "sock_impl_set_options", 00:36:48.354 "params": { 00:36:48.354 "impl_name": "ssl", 00:36:48.354 "recv_buf_size": 4096, 00:36:48.354 "send_buf_size": 4096, 00:36:48.354 "enable_recv_pipe": true, 00:36:48.354 "enable_quickack": false, 00:36:48.354 "enable_placement_id": 0, 00:36:48.354 "enable_zerocopy_send_server": true, 00:36:48.354 "enable_zerocopy_send_client": false, 00:36:48.354 "zerocopy_threshold": 0, 00:36:48.354 "tls_version": 0, 00:36:48.354 "enable_ktls": false 00:36:48.354 } 00:36:48.354 }, 00:36:48.354 { 00:36:48.354 "method": "sock_impl_set_options", 00:36:48.354 "params": { 00:36:48.354 "impl_name": "posix", 00:36:48.354 "recv_buf_size": 2097152, 00:36:48.354 "send_buf_size": 2097152, 00:36:48.354 "enable_recv_pipe": true, 00:36:48.354 "enable_quickack": false, 00:36:48.354 "enable_placement_id": 0, 00:36:48.354 "enable_zerocopy_send_server": true, 00:36:48.354 "enable_zerocopy_send_client": false, 00:36:48.354 "zerocopy_threshold": 0, 00:36:48.355 "tls_version": 0, 00:36:48.355 "enable_ktls": false 00:36:48.355 } 00:36:48.355 } 00:36:48.355 ] 00:36:48.355 }, 00:36:48.355 { 00:36:48.355 "subsystem": "vmd", 00:36:48.355 "config": [] 00:36:48.355 }, 00:36:48.355 { 00:36:48.355 "subsystem": "accel", 00:36:48.355 "config": [ 00:36:48.355 { 00:36:48.355 "method": "accel_set_options", 00:36:48.355 "params": { 00:36:48.355 "small_cache_size": 128, 00:36:48.355 "large_cache_size": 16, 00:36:48.355 "task_count": 2048, 00:36:48.355 "sequence_count": 2048, 00:36:48.355 "buf_count": 2048 00:36:48.355 } 00:36:48.355 } 00:36:48.355 ] 00:36:48.355 }, 00:36:48.355 { 00:36:48.355 "subsystem": "bdev", 00:36:48.355 "config": [ 00:36:48.355 { 00:36:48.355 "method": "bdev_set_options", 00:36:48.355 "params": { 00:36:48.355 "bdev_io_pool_size": 65535, 00:36:48.355 "bdev_io_cache_size": 256, 00:36:48.355 "bdev_auto_examine": true, 00:36:48.355 "iobuf_small_cache_size": 128, 00:36:48.355 "iobuf_large_cache_size": 16 00:36:48.355 } 00:36:48.355 }, 00:36:48.355 { 00:36:48.355 "method": "bdev_raid_set_options", 00:36:48.355 "params": { 00:36:48.355 "process_window_size_kb": 1024, 00:36:48.355 "process_max_bandwidth_mb_sec": 0 00:36:48.355 } 00:36:48.355 }, 00:36:48.355 { 00:36:48.355 "method": "bdev_iscsi_set_options", 00:36:48.355 "params": { 00:36:48.355 "timeout_sec": 30 00:36:48.355 } 00:36:48.355 }, 00:36:48.355 { 00:36:48.355 "method": "bdev_nvme_set_options", 00:36:48.355 "params": { 00:36:48.355 "action_on_timeout": "none", 00:36:48.355 "timeout_us": 0, 00:36:48.355 "timeout_admin_us": 0, 00:36:48.355 "keep_alive_timeout_ms": 10000, 00:36:48.355 "arbitration_burst": 0, 00:36:48.355 "low_priority_weight": 0, 00:36:48.355 "medium_priority_weight": 0, 00:36:48.355 "high_priority_weight": 0, 00:36:48.355 "nvme_adminq_poll_period_us": 10000, 00:36:48.355 "nvme_ioq_poll_period_us": 0, 00:36:48.355 "io_queue_requests": 512, 00:36:48.355 
"delay_cmd_submit": true, 00:36:48.355 "transport_retry_count": 4, 00:36:48.355 "bdev_retry_count": 3, 00:36:48.355 "transport_ack_timeout": 0, 00:36:48.355 "ctrlr_loss_timeout_sec": 0, 00:36:48.355 "reconnect_delay_sec": 0, 00:36:48.355 "fast_io_fail_timeout_sec": 0, 00:36:48.355 "disable_auto_failback": false, 00:36:48.355 "generate_uuids": false, 00:36:48.355 "transport_tos": 0, 00:36:48.355 "nvme_error_stat": false, 00:36:48.355 "rdma_srq_size": 0, 00:36:48.355 "io_path_stat": false, 00:36:48.355 "allow_accel_sequence": false, 00:36:48.355 "rdma_max_cq_size": 0, 00:36:48.355 "rdma_cm_event_timeout_ms": 0, 00:36:48.355 "dhchap_digests": [ 00:36:48.355 "sha256", 00:36:48.355 "sha384", 00:36:48.355 "sha512" 00:36:48.355 ], 00:36:48.355 "dhchap_dhgroups": [ 00:36:48.355 "null", 00:36:48.355 "ffdhe2048", 00:36:48.355 "ffdhe3072", 00:36:48.355 "ffdhe4096", 00:36:48.355 "ffdhe6144", 00:36:48.355 "ffdhe8192" 00:36:48.355 ], 00:36:48.355 "rdma_umr_per_io": false 00:36:48.355 } 00:36:48.355 }, 00:36:48.355 { 00:36:48.355 "method": "bdev_nvme_attach_controller", 00:36:48.355 "params": { 00:36:48.355 "name": "nvme0", 00:36:48.355 "trtype": "TCP", 00:36:48.355 "adrfam": "IPv4", 00:36:48.355 "traddr": "127.0.0.1", 00:36:48.355 "trsvcid": "4420", 00:36:48.355 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:48.355 "prchk_reftag": false, 00:36:48.355 "prchk_guard": false, 00:36:48.355 "ctrlr_loss_timeout_sec": 0, 00:36:48.355 "reconnect_delay_sec": 0, 00:36:48.355 "fast_io_fail_timeout_sec": 0, 00:36:48.355 "psk": "key0", 00:36:48.355 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:48.355 "hdgst": false, 00:36:48.355 "ddgst": false, 00:36:48.355 "multipath": "multipath" 00:36:48.355 } 00:36:48.355 }, 00:36:48.355 { 00:36:48.355 "method": "bdev_nvme_set_hotplug", 00:36:48.355 "params": { 00:36:48.355 "period_us": 100000, 00:36:48.355 "enable": false 00:36:48.355 } 00:36:48.355 }, 00:36:48.355 { 00:36:48.355 "method": "bdev_wait_for_examine" 00:36:48.355 } 00:36:48.355 ] 00:36:48.355 }, 00:36:48.355 { 00:36:48.355 "subsystem": "nbd", 00:36:48.355 "config": [] 00:36:48.355 } 00:36:48.355 ] 00:36:48.355 }' 00:36:48.355 10:14:57 keyring_file -- keyring/file.sh@115 -- # killprocess 378774 00:36:48.355 10:14:57 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 378774 ']' 00:36:48.355 10:14:57 keyring_file -- common/autotest_common.sh@958 -- # kill -0 378774 00:36:48.355 10:14:57 keyring_file -- common/autotest_common.sh@959 -- # uname 00:36:48.355 10:14:57 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:48.355 10:14:57 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 378774 00:36:48.355 10:14:57 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:48.355 10:14:57 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:48.355 10:14:57 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 378774' 00:36:48.355 killing process with pid 378774 00:36:48.355 10:14:57 keyring_file -- common/autotest_common.sh@973 -- # kill 378774 00:36:48.355 Received shutdown signal, test time was about 1.000000 seconds 00:36:48.355 00:36:48.355 Latency(us) 00:36:48.355 [2024-12-11T09:14:57.930Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:48.355 [2024-12-11T09:14:57.930Z] =================================================================================================================== 00:36:48.355 [2024-12-11T09:14:57.930Z] Total : 0.00 0.00 0.00 0.00 0.00 
0.00 0.00 00:36:48.355 10:14:57 keyring_file -- common/autotest_common.sh@978 -- # wait 378774 00:36:48.615 10:14:57 keyring_file -- keyring/file.sh@118 -- # bperfpid=380274 00:36:48.615 10:14:57 keyring_file -- keyring/file.sh@120 -- # waitforlisten 380274 /var/tmp/bperf.sock 00:36:48.615 10:14:57 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 380274 ']' 00:36:48.615 10:14:57 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:48.615 10:14:57 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:36:48.615 10:14:57 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:48.615 10:14:57 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:48.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:48.615 10:14:57 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:36:48.615 "subsystems": [ 00:36:48.615 { 00:36:48.615 "subsystem": "keyring", 00:36:48.615 "config": [ 00:36:48.615 { 00:36:48.615 "method": "keyring_file_add_key", 00:36:48.615 "params": { 00:36:48.615 "name": "key0", 00:36:48.615 "path": "/tmp/tmp.qI00dRIJKo" 00:36:48.615 } 00:36:48.615 }, 00:36:48.615 { 00:36:48.615 "method": "keyring_file_add_key", 00:36:48.615 "params": { 00:36:48.615 "name": "key1", 00:36:48.615 "path": "/tmp/tmp.J8uwx3NVKk" 00:36:48.615 } 00:36:48.615 } 00:36:48.615 ] 00:36:48.615 }, 00:36:48.615 { 00:36:48.615 "subsystem": "iobuf", 00:36:48.615 "config": [ 00:36:48.615 { 00:36:48.615 "method": "iobuf_set_options", 00:36:48.615 "params": { 00:36:48.615 "small_pool_count": 8192, 00:36:48.615 "large_pool_count": 1024, 00:36:48.615 "small_bufsize": 8192, 00:36:48.615 "large_bufsize": 135168, 00:36:48.615 "enable_numa": false 00:36:48.615 } 00:36:48.615 } 00:36:48.615 ] 00:36:48.615 }, 00:36:48.615 { 00:36:48.615 "subsystem": "sock", 00:36:48.615 "config": [ 00:36:48.615 { 00:36:48.615 "method": "sock_set_default_impl", 00:36:48.615 "params": { 00:36:48.615 "impl_name": "posix" 00:36:48.615 } 00:36:48.615 }, 00:36:48.615 { 00:36:48.615 "method": "sock_impl_set_options", 00:36:48.615 "params": { 00:36:48.615 "impl_name": "ssl", 00:36:48.615 "recv_buf_size": 4096, 00:36:48.615 "send_buf_size": 4096, 00:36:48.615 "enable_recv_pipe": true, 00:36:48.615 "enable_quickack": false, 00:36:48.615 "enable_placement_id": 0, 00:36:48.615 "enable_zerocopy_send_server": true, 00:36:48.615 "enable_zerocopy_send_client": false, 00:36:48.615 "zerocopy_threshold": 0, 00:36:48.615 "tls_version": 0, 00:36:48.615 "enable_ktls": false 00:36:48.615 } 00:36:48.615 }, 00:36:48.615 { 00:36:48.615 "method": "sock_impl_set_options", 00:36:48.615 "params": { 00:36:48.615 "impl_name": "posix", 00:36:48.615 "recv_buf_size": 2097152, 00:36:48.615 "send_buf_size": 2097152, 00:36:48.615 "enable_recv_pipe": true, 00:36:48.615 "enable_quickack": false, 00:36:48.615 "enable_placement_id": 0, 00:36:48.615 "enable_zerocopy_send_server": true, 00:36:48.615 "enable_zerocopy_send_client": false, 00:36:48.615 "zerocopy_threshold": 0, 00:36:48.615 "tls_version": 0, 00:36:48.615 "enable_ktls": false 00:36:48.615 } 00:36:48.615 } 00:36:48.615 ] 00:36:48.615 }, 00:36:48.615 { 00:36:48.615 "subsystem": "vmd", 00:36:48.615 "config": [] 00:36:48.615 }, 00:36:48.615 { 00:36:48.615 "subsystem": "accel", 
00:36:48.615 "config": [ 00:36:48.615 { 00:36:48.615 "method": "accel_set_options", 00:36:48.615 "params": { 00:36:48.615 "small_cache_size": 128, 00:36:48.615 "large_cache_size": 16, 00:36:48.615 "task_count": 2048, 00:36:48.615 "sequence_count": 2048, 00:36:48.615 "buf_count": 2048 00:36:48.615 } 00:36:48.615 } 00:36:48.616 ] 00:36:48.616 }, 00:36:48.616 { 00:36:48.616 "subsystem": "bdev", 00:36:48.616 "config": [ 00:36:48.616 { 00:36:48.616 "method": "bdev_set_options", 00:36:48.616 "params": { 00:36:48.616 "bdev_io_pool_size": 65535, 00:36:48.616 "bdev_io_cache_size": 256, 00:36:48.616 "bdev_auto_examine": true, 00:36:48.616 "iobuf_small_cache_size": 128, 00:36:48.616 "iobuf_large_cache_size": 16 00:36:48.616 } 00:36:48.616 }, 00:36:48.616 { 00:36:48.616 "method": "bdev_raid_set_options", 00:36:48.616 "params": { 00:36:48.616 "process_window_size_kb": 1024, 00:36:48.616 "process_max_bandwidth_mb_sec": 0 00:36:48.616 } 00:36:48.616 }, 00:36:48.616 { 00:36:48.616 "method": "bdev_iscsi_set_options", 00:36:48.616 "params": { 00:36:48.616 "timeout_sec": 30 00:36:48.616 } 00:36:48.616 }, 00:36:48.616 { 00:36:48.616 "method": "bdev_nvme_set_options", 00:36:48.616 "params": { 00:36:48.616 "action_on_timeout": "none", 00:36:48.616 "timeout_us": 0, 00:36:48.616 "timeout_admin_us": 0, 00:36:48.616 "keep_alive_timeout_ms": 10000, 00:36:48.616 "arbitration_burst": 0, 00:36:48.616 "low_priority_weight": 0, 00:36:48.616 "medium_priority_weight": 0, 00:36:48.616 "high_priority_weight": 0, 00:36:48.616 "nvme_adminq_poll_period_us": 10000, 00:36:48.616 "nvme_ioq_poll_period_us": 0, 00:36:48.616 "io_queue_requests": 512, 00:36:48.616 "delay_cmd_submit": true, 00:36:48.616 "transport_retry_count": 4, 00:36:48.616 "bdev_retry_count": 3, 00:36:48.616 "transport_ack_timeout": 0, 00:36:48.616 "ctrlr_loss_timeout_sec": 0, 00:36:48.616 "reconnect_delay_sec": 0, 00:36:48.616 "fast_io_fail_timeout_sec": 0, 00:36:48.616 "disable_auto_failback": false, 00:36:48.616 "generate_uuids": false, 00:36:48.616 "transport_tos": 0, 00:36:48.616 "nvme_error_stat": false, 00:36:48.616 "rdma_srq_size": 0, 00:36:48.616 "io_path_stat": false, 00:36:48.616 "allow_accel_sequence": false, 00:36:48.616 "rdma_max_cq_size": 0, 00:36:48.616 "rdma_cm_event_timeout_ms": 0, 00:36:48.616 "dhchap_digests": [ 00:36:48.616 "sha256", 00:36:48.616 "sha384", 00:36:48.616 "sha512" 00:36:48.616 ], 00:36:48.616 "dhchap_dhgroups": [ 00:36:48.616 "null", 00:36:48.616 "ffdhe2048", 00:36:48.616 "ffdhe3072", 00:36:48.616 "ffdhe4096", 00:36:48.616 "ffdhe6144", 00:36:48.616 "ffdhe8192" 00:36:48.616 ], 00:36:48.616 "rdma_umr_per_io": false 00:36:48.616 } 00:36:48.616 }, 00:36:48.616 { 00:36:48.616 "method": "bdev_nvme_attach_controller", 00:36:48.616 "params": { 00:36:48.616 "name": "nvme0", 00:36:48.616 "trtype": "TCP", 00:36:48.616 "adrfam": "IPv4", 00:36:48.616 "traddr": "127.0.0.1", 00:36:48.616 "trsvcid": "4420", 00:36:48.616 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:48.616 "prchk_reftag": false, 00:36:48.616 "prchk_guard": false, 00:36:48.616 "ctrlr_loss_timeout_sec": 0, 00:36:48.616 "reconnect_delay_sec": 0, 00:36:48.616 "fast_io_fail_timeout_sec": 0, 00:36:48.616 "psk": "key0", 00:36:48.616 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:48.616 "hdgst": false, 00:36:48.616 "ddgst": false, 00:36:48.616 "multipath": "multipath" 00:36:48.616 } 00:36:48.616 }, 00:36:48.616 { 00:36:48.616 "method": "bdev_nvme_set_hotplug", 00:36:48.616 "params": { 00:36:48.616 "period_us": 100000, 00:36:48.616 "enable": false 00:36:48.616 } 00:36:48.616 }, 00:36:48.616 
{ 00:36:48.616 "method": "bdev_wait_for_examine" 00:36:48.616 } 00:36:48.616 ] 00:36:48.616 }, 00:36:48.616 { 00:36:48.616 "subsystem": "nbd", 00:36:48.616 "config": [] 00:36:48.616 } 00:36:48.616 ] 00:36:48.616 }' 00:36:48.616 10:14:57 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:48.616 10:14:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:48.616 [2024-12-11 10:14:57.985211] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 00:36:48.616 [2024-12-11 10:14:57.985266] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid380274 ] 00:36:48.616 [2024-12-11 10:14:58.062738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:48.616 [2024-12-11 10:14:58.102822] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:36:48.875 [2024-12-11 10:14:58.264381] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:49.441 10:14:58 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:49.441 10:14:58 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:36:49.441 10:14:58 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:36:49.441 10:14:58 keyring_file -- keyring/file.sh@121 -- # jq length 00:36:49.441 10:14:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:49.700 10:14:59 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:36:49.700 10:14:59 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:36:49.700 10:14:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:49.700 10:14:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:49.700 10:14:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:49.700 10:14:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:49.700 10:14:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:49.700 10:14:59 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:36:49.700 10:14:59 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:36:49.700 10:14:59 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:49.700 10:14:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:49.700 10:14:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:49.700 10:14:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:49.700 10:14:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:49.958 10:14:59 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:36:49.958 10:14:59 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:36:49.958 10:14:59 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:36:49.958 10:14:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:36:50.216 10:14:59 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:36:50.216 10:14:59 keyring_file -- 
keyring/file.sh@1 -- # cleanup 00:36:50.216 10:14:59 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.qI00dRIJKo /tmp/tmp.J8uwx3NVKk 00:36:50.216 10:14:59 keyring_file -- keyring/file.sh@20 -- # killprocess 380274 00:36:50.216 10:14:59 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 380274 ']' 00:36:50.216 10:14:59 keyring_file -- common/autotest_common.sh@958 -- # kill -0 380274 00:36:50.216 10:14:59 keyring_file -- common/autotest_common.sh@959 -- # uname 00:36:50.216 10:14:59 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:50.216 10:14:59 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 380274 00:36:50.216 10:14:59 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:50.216 10:14:59 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:50.216 10:14:59 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 380274' 00:36:50.216 killing process with pid 380274 00:36:50.216 10:14:59 keyring_file -- common/autotest_common.sh@973 -- # kill 380274 00:36:50.216 Received shutdown signal, test time was about 1.000000 seconds 00:36:50.216 00:36:50.216 Latency(us) 00:36:50.216 [2024-12-11T09:14:59.791Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:50.216 [2024-12-11T09:14:59.791Z] =================================================================================================================== 00:36:50.216 [2024-12-11T09:14:59.791Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:36:50.216 10:14:59 keyring_file -- common/autotest_common.sh@978 -- # wait 380274 00:36:50.475 10:14:59 keyring_file -- keyring/file.sh@21 -- # killprocess 378561 00:36:50.475 10:14:59 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 378561 ']' 00:36:50.475 10:14:59 keyring_file -- common/autotest_common.sh@958 -- # kill -0 378561 00:36:50.475 10:14:59 keyring_file -- common/autotest_common.sh@959 -- # uname 00:36:50.475 10:14:59 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:50.475 10:14:59 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 378561 00:36:50.475 10:14:59 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:50.475 10:14:59 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:50.475 10:14:59 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 378561' 00:36:50.475 killing process with pid 378561 00:36:50.475 10:14:59 keyring_file -- common/autotest_common.sh@973 -- # kill 378561 00:36:50.475 10:14:59 keyring_file -- common/autotest_common.sh@978 -- # wait 378561 00:36:50.734 00:36:50.734 real 0m11.725s 00:36:50.734 user 0m29.126s 00:36:50.734 sys 0m2.702s 00:36:50.734 10:15:00 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:50.734 10:15:00 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:50.734 ************************************ 00:36:50.734 END TEST keyring_file 00:36:50.734 ************************************ 00:36:50.734 10:15:00 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:36:50.734 10:15:00 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:50.734 10:15:00 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:50.734 10:15:00 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:36:50.734 10:15:00 -- common/autotest_common.sh@10 -- # set +x 00:36:50.734 ************************************ 00:36:50.734 START TEST keyring_linux 00:36:50.734 ************************************ 00:36:50.734 10:15:00 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:50.734 Joined session keyring: 724522006 00:36:50.993 * Looking for test storage... 00:36:50.993 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:50.993 10:15:00 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:50.993 10:15:00 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:36:50.993 10:15:00 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:50.993 10:15:00 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:50.993 10:15:00 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:50.993 10:15:00 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:50.993 10:15:00 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:50.993 10:15:00 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:36:50.993 10:15:00 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:36:50.993 10:15:00 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:36:50.993 10:15:00 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:36:50.993 10:15:00 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:36:50.993 10:15:00 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:36:50.993 10:15:00 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:36:50.993 10:15:00 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:50.993 10:15:00 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:36:50.993 10:15:00 keyring_linux -- scripts/common.sh@345 -- # : 1 00:36:50.993 10:15:00 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:50.993 10:15:00 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:50.993 10:15:00 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:36:50.993 10:15:00 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:36:50.993 10:15:00 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:50.993 10:15:00 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:36:50.993 10:15:00 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:36:50.993 10:15:00 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:36:50.993 10:15:00 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:36:50.993 10:15:00 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:50.993 10:15:00 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:36:50.993 10:15:00 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:36:50.993 10:15:00 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:50.993 10:15:00 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:50.993 10:15:00 keyring_linux -- scripts/common.sh@368 -- # return 0 00:36:50.993 10:15:00 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:50.993 10:15:00 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:50.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:50.993 --rc genhtml_branch_coverage=1 00:36:50.993 --rc genhtml_function_coverage=1 00:36:50.993 --rc genhtml_legend=1 00:36:50.993 --rc geninfo_all_blocks=1 00:36:50.993 --rc geninfo_unexecuted_blocks=1 00:36:50.993 00:36:50.993 ' 00:36:50.993 10:15:00 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:50.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:50.993 --rc genhtml_branch_coverage=1 00:36:50.993 --rc genhtml_function_coverage=1 00:36:50.993 --rc genhtml_legend=1 00:36:50.993 --rc geninfo_all_blocks=1 00:36:50.993 --rc geninfo_unexecuted_blocks=1 00:36:50.993 00:36:50.993 ' 00:36:50.993 10:15:00 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:50.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:50.993 --rc genhtml_branch_coverage=1 00:36:50.993 --rc genhtml_function_coverage=1 00:36:50.993 --rc genhtml_legend=1 00:36:50.993 --rc geninfo_all_blocks=1 00:36:50.993 --rc geninfo_unexecuted_blocks=1 00:36:50.993 00:36:50.993 ' 00:36:50.993 10:15:00 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:50.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:50.993 --rc genhtml_branch_coverage=1 00:36:50.993 --rc genhtml_function_coverage=1 00:36:50.993 --rc genhtml_legend=1 00:36:50.993 --rc geninfo_all_blocks=1 00:36:50.993 --rc geninfo_unexecuted_blocks=1 00:36:50.993 00:36:50.993 ' 00:36:50.993 10:15:00 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:50.993 10:15:00 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:50.993 10:15:00 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:36:50.993 10:15:00 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:50.993 10:15:00 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:50.993 10:15:00 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:50.993 10:15:00 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:50.993 10:15:00 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:36:50.993 10:15:00 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:50.993 10:15:00 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:50.993 10:15:00 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:50.993 10:15:00 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:50.993 10:15:00 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:50.993 10:15:00 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:36:50.993 10:15:00 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:36:50.993 10:15:00 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:50.993 10:15:00 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:50.993 10:15:00 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:50.993 10:15:00 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:50.994 10:15:00 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:50.994 10:15:00 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:36:50.994 10:15:00 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:50.994 10:15:00 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:50.994 10:15:00 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:50.994 10:15:00 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:50.994 10:15:00 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:50.994 10:15:00 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:50.994 10:15:00 keyring_linux -- paths/export.sh@5 -- # export PATH 00:36:50.994 10:15:00 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
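
[Note] The nvmf/common.sh@17-18 assignments traced above are how each run gets its host identity: nvme-cli generates a UUID-based host NQN, and the bare UUID after the last colon is reused as the host ID. A minimal sketch, assuming nvme-cli is installed:

NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}     # everything after the last ':' is the bare UUID
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
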
00:36:50.994 10:15:00 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:36:50.994 10:15:00 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:50.994 10:15:00 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:50.994 10:15:00 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:50.994 10:15:00 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:50.994 10:15:00 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:50.994 10:15:00 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:50.994 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:50.994 10:15:00 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:50.994 10:15:00 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:50.994 10:15:00 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:50.994 10:15:00 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:50.994 10:15:00 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:50.994 10:15:00 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:50.994 10:15:00 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:36:50.994 10:15:00 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:36:50.994 10:15:00 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:36:50.994 10:15:00 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:36:50.994 10:15:00 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:50.994 10:15:00 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:36:50.994 10:15:00 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:50.994 10:15:00 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:50.994 10:15:00 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:36:50.994 10:15:00 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:50.994 10:15:00 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:50.994 10:15:00 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:36:50.994 10:15:00 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:50.994 10:15:00 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:36:50.994 10:15:00 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:36:50.994 10:15:00 keyring_linux -- nvmf/common.sh@733 -- # python - 00:36:50.994 10:15:00 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:36:50.994 10:15:00 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:36:50.994 /tmp/:spdk-test:key0 00:36:50.994 10:15:00 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:36:50.994 10:15:00 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:50.994 10:15:00 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:36:50.994 10:15:00 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:50.994 10:15:00 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:50.994 10:15:00 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:36:50.994 
10:15:00 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:50.994 10:15:00 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:50.994 10:15:00 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:36:50.994 10:15:00 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:50.994 10:15:00 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:36:50.994 10:15:00 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:36:50.994 10:15:00 keyring_linux -- nvmf/common.sh@733 -- # python - 00:36:50.994 10:15:00 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:36:50.994 10:15:00 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:36:50.994 /tmp/:spdk-test:key1 00:36:50.994 10:15:00 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=380739 00:36:50.994 10:15:00 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 380739 00:36:50.994 10:15:00 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:50.994 10:15:00 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 380739 ']' 00:36:50.994 10:15:00 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:50.994 10:15:00 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:50.994 10:15:00 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:50.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:50.994 10:15:00 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:50.994 10:15:00 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:51.253 [2024-12-11 10:15:00.609007] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
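
[Note] The prep_key traces above show how each /tmp/:spdk-test:keyN file is produced: format_interchange_psk hands the configured characters to an inline python snippet that wraps them in the NVMe TLS PSK interchange format, NVMeTLSkey-1:<digest>:<base64(PSK bytes + CRC-32)>:. A sketch of that 'python -' step for key0, with digest 0 (plain PSK, no hash); the little-endian CRC byte order is an assumption here, the in-tree snippet in nvmf/common.sh is the authority:

key=00112233445566778899aabbccddeeff
python3 - "$key" <<'PY'
import base64, sys, zlib
payload = sys.argv[1].encode()                   # the PSK characters as bytes
crc = zlib.crc32(payload).to_bytes(4, 'little')  # 4-byte CRC-32 appended (byte order assumed)
print('NVMeTLSkey-1:00:%s:' % base64.b64encode(payload + crc).decode())
PY

If the assumption holds, the output matches the key0 payload echoed above.
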
00:36:51.253 [2024-12-11 10:15:00.609063] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid380739 ] 00:36:51.253 [2024-12-11 10:15:00.688169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:51.253 [2024-12-11 10:15:00.729206] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:36:51.511 10:15:00 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:51.511 10:15:00 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:36:51.511 10:15:00 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:36:51.511 10:15:00 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.511 10:15:00 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:51.511 [2024-12-11 10:15:00.951348] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:51.511 null0 00:36:51.511 [2024-12-11 10:15:00.983391] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:51.511 [2024-12-11 10:15:00.983683] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:51.511 10:15:01 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.511 10:15:01 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:36:51.511 575083339 00:36:51.511 10:15:01 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:36:51.511 1063014741 00:36:51.511 10:15:01 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=380888 00:36:51.511 10:15:01 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 380888 /var/tmp/bperf.sock 00:36:51.511 10:15:01 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:36:51.511 10:15:01 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 380888 ']' 00:36:51.511 10:15:01 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:51.511 10:15:01 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:51.511 10:15:01 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:51.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:51.512 10:15:01 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:51.512 10:15:01 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:51.512 [2024-12-11 10:15:01.055380] Starting SPDK v25.01-pre git sha1 7e2e68263 / DPDK 24.03.0 initialization... 
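
[Note] The keyctl calls just traced are the point of the whole test: each interchange key is loaded as a 'user'-type key into the session keyring (@s) under the name :spdk-test:keyN, and the kernel returns a serial number (575083339 and 1063014741 in this run) that the later checks compare against. The round-trip in isolation, a sketch using this run's key0:

sn=$(keyctl add user :spdk-test:key0 'NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' @s)
keyctl search @s user :spdk-test:key0   # resolves the name back to the same serial
keyctl print "$sn"                      # dumps the payload (compared against the file above)
keyctl unlink "$sn"                     # what cleanup() does at the end of the test
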
00:36:51.512 [2024-12-11 10:15:01.055426] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid380888 ] 00:36:51.770 [2024-12-11 10:15:01.118089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:51.770 [2024-12-11 10:15:01.159083] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:36:51.770 10:15:01 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:51.770 10:15:01 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:36:51.770 10:15:01 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:36:51.770 10:15:01 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:36:52.029 10:15:01 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:36:52.029 10:15:01 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:52.286 10:15:01 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:52.286 10:15:01 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:52.286 [2024-12-11 10:15:01.847420] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:52.545 nvme0n1 00:36:52.545 10:15:01 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:36:52.545 10:15:01 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:36:52.545 10:15:01 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:52.545 10:15:01 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:52.545 10:15:01 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:52.545 10:15:01 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:52.803 10:15:02 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:36:52.803 10:15:02 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:52.803 10:15:02 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:36:52.803 10:15:02 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:36:52.803 10:15:02 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:52.803 10:15:02 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:36:52.803 10:15:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:52.803 10:15:02 keyring_linux -- keyring/linux.sh@25 -- # sn=575083339 00:36:52.803 10:15:02 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:36:52.803 10:15:02 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:52.803 10:15:02 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 575083339 == \5\7\5\0\8\3\3\3\9 ]] 00:36:52.803 10:15:02 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 575083339 00:36:52.803 10:15:02 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:36:52.803 10:15:02 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:53.062 Running I/O for 1 seconds... 00:36:53.997 21867.00 IOPS, 85.42 MiB/s 00:36:53.997 Latency(us) 00:36:53.997 [2024-12-11T09:15:03.572Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:53.997 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:53.997 nvme0n1 : 1.01 21867.18 85.42 0.00 0.00 5834.24 1927.07 7115.34 00:36:53.997 [2024-12-11T09:15:03.572Z] =================================================================================================================== 00:36:53.997 [2024-12-11T09:15:03.572Z] Total : 21867.18 85.42 0.00 0.00 5834.24 1927.07 7115.34 00:36:53.997 { 00:36:53.997 "results": [ 00:36:53.997 { 00:36:53.997 "job": "nvme0n1", 00:36:53.998 "core_mask": "0x2", 00:36:53.998 "workload": "randread", 00:36:53.998 "status": "finished", 00:36:53.998 "queue_depth": 128, 00:36:53.998 "io_size": 4096, 00:36:53.998 "runtime": 1.005891, 00:36:53.998 "iops": 21867.180440027798, 00:36:53.998 "mibps": 85.41867359385859, 00:36:53.998 "io_failed": 0, 00:36:53.998 "io_timeout": 0, 00:36:53.998 "avg_latency_us": 5834.238913741892, 00:36:53.998 "min_latency_us": 1927.0704761904763, 00:36:53.998 "max_latency_us": 7115.337142857143 00:36:53.998 } 00:36:53.998 ], 00:36:53.998 "core_count": 1 00:36:53.998 } 00:36:53.998 10:15:03 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:53.998 10:15:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:54.256 10:15:03 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:36:54.256 10:15:03 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:36:54.256 10:15:03 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:54.256 10:15:03 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:54.256 10:15:03 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:54.256 10:15:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:54.515 10:15:03 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:36:54.515 10:15:03 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:54.515 10:15:03 keyring_linux -- keyring/linux.sh@23 -- # return 00:36:54.515 10:15:03 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:54.515 10:15:03 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:36:54.515 10:15:03 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:36:54.515 10:15:03 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:54.515 10:15:03 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:54.515 10:15:03 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:54.515 10:15:03 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:54.515 10:15:03 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:54.515 10:15:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:54.515 [2024-12-11 10:15:04.022212] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:54.515 [2024-12-11 10:15:04.022908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23cc580 (107): Transport endpoint is not connected 00:36:54.515 [2024-12-11 10:15:04.023902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23cc580 (9): Bad file descriptor 00:36:54.515 [2024-12-11 10:15:04.024903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:36:54.515 [2024-12-11 10:15:04.024913] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:54.515 [2024-12-11 10:15:04.024920] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:36:54.515 [2024-12-11 10:15:04.024929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
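
[Note] The errors just above are expected: linux.sh@84 wraps this second attach in NOT, so the case passes only if connecting with :spdk-test:key1 (a key the listener was never configured to accept) is rejected. A simplified sketch of the NOT helper; the real autotest_common.sh version additionally consults an allow-list before deciding what a signal exit means:

NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && return "$es"  # died by signal: not treated as an expected failure here
    (( es != 0 ))                   # succeed exactly when the wrapped command failed
}
# used above as:  NOT bperf_cmd bdev_nvme_attach_controller ... --psk :spdk-test:key1
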
00:36:54.515 request: 00:36:54.515 { 00:36:54.515 "name": "nvme0", 00:36:54.515 "trtype": "tcp", 00:36:54.515 "traddr": "127.0.0.1", 00:36:54.515 "adrfam": "ipv4", 00:36:54.515 "trsvcid": "4420", 00:36:54.515 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:54.515 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:54.515 "prchk_reftag": false, 00:36:54.515 "prchk_guard": false, 00:36:54.515 "hdgst": false, 00:36:54.515 "ddgst": false, 00:36:54.515 "psk": ":spdk-test:key1", 00:36:54.515 "allow_unrecognized_csi": false, 00:36:54.515 "method": "bdev_nvme_attach_controller", 00:36:54.515 "req_id": 1 00:36:54.515 } 00:36:54.515 Got JSON-RPC error response 00:36:54.515 response: 00:36:54.515 { 00:36:54.515 "code": -5, 00:36:54.515 "message": "Input/output error" 00:36:54.515 } 00:36:54.515 10:15:04 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:36:54.515 10:15:04 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:54.515 10:15:04 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:54.515 10:15:04 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:54.515 10:15:04 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:36:54.515 10:15:04 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:54.515 10:15:04 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:36:54.515 10:15:04 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:36:54.515 10:15:04 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:36:54.515 10:15:04 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:54.515 10:15:04 keyring_linux -- keyring/linux.sh@33 -- # sn=575083339 00:36:54.515 10:15:04 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 575083339 00:36:54.515 1 links removed 00:36:54.515 10:15:04 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:54.515 10:15:04 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:36:54.515 10:15:04 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:36:54.515 10:15:04 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:36:54.515 10:15:04 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:36:54.515 10:15:04 keyring_linux -- keyring/linux.sh@33 -- # sn=1063014741 00:36:54.515 10:15:04 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1063014741 00:36:54.515 1 links removed 00:36:54.515 10:15:04 keyring_linux -- keyring/linux.sh@41 -- # killprocess 380888 00:36:54.515 10:15:04 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 380888 ']' 00:36:54.515 10:15:04 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 380888 00:36:54.515 10:15:04 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:36:54.515 10:15:04 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:54.515 10:15:04 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 380888 00:36:54.774 10:15:04 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:54.774 10:15:04 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:54.774 10:15:04 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 380888' 00:36:54.774 killing process with pid 380888 00:36:54.774 10:15:04 keyring_linux -- common/autotest_common.sh@973 -- # kill 380888 00:36:54.774 Received shutdown signal, test time was about 1.000000 seconds 00:36:54.774 00:36:54.774 
Latency(us) 00:36:54.774 [2024-12-11T09:15:04.349Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:54.774 [2024-12-11T09:15:04.349Z] =================================================================================================================== 00:36:54.774 [2024-12-11T09:15:04.349Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:54.774 10:15:04 keyring_linux -- common/autotest_common.sh@978 -- # wait 380888 00:36:54.774 10:15:04 keyring_linux -- keyring/linux.sh@42 -- # killprocess 380739 00:36:54.774 10:15:04 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 380739 ']' 00:36:54.774 10:15:04 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 380739 00:36:54.774 10:15:04 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:36:54.774 10:15:04 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:54.774 10:15:04 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 380739 00:36:54.775 10:15:04 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:54.775 10:15:04 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:54.775 10:15:04 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 380739' 00:36:54.775 killing process with pid 380739 00:36:54.775 10:15:04 keyring_linux -- common/autotest_common.sh@973 -- # kill 380739 00:36:54.775 10:15:04 keyring_linux -- common/autotest_common.sh@978 -- # wait 380739 00:36:55.033 00:36:55.033 real 0m4.352s 00:36:55.033 user 0m8.233s 00:36:55.033 sys 0m1.411s 00:36:55.033 10:15:04 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:55.033 10:15:04 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:55.033 ************************************ 00:36:55.033 END TEST keyring_linux 00:36:55.033 ************************************ 00:36:55.291 10:15:04 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:36:55.291 10:15:04 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:36:55.291 10:15:04 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:36:55.291 10:15:04 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:36:55.291 10:15:04 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:36:55.291 10:15:04 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:36:55.291 10:15:04 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:36:55.291 10:15:04 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:36:55.291 10:15:04 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:36:55.291 10:15:04 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:36:55.291 10:15:04 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:36:55.291 10:15:04 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:36:55.291 10:15:04 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:36:55.291 10:15:04 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:36:55.291 10:15:04 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:36:55.291 10:15:04 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:36:55.291 10:15:04 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:36:55.291 10:15:04 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:55.291 10:15:04 -- common/autotest_common.sh@10 -- # set +x 00:36:55.291 10:15:04 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:36:55.291 10:15:04 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:36:55.291 10:15:04 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:36:55.291 10:15:04 -- common/autotest_common.sh@10 -- # set +x 00:37:00.564 INFO: APP EXITING 00:37:00.564 INFO: 
killing all VMs 00:37:00.564 INFO: killing vhost app 00:37:00.564 INFO: EXIT DONE 00:37:03.860 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:37:03.860 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:37:03.860 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:37:03.860 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:37:03.860 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:37:03.860 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:37:03.860 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:37:03.860 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:37:03.860 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:37:03.860 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:37:03.860 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:37:03.860 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:37:04.119 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:37:04.119 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:37:04.119 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:37:04.119 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:37:04.119 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:37:04.119 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:37:07.406 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:37:07.406 Cleaning 00:37:07.406 Removing: /var/run/dpdk/spdk0/config 00:37:07.406 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:37:07.406 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:37:07.406 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:37:07.406 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:37:07.406 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:37:07.406 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:37:07.406 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:37:07.406 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:37:07.406 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:37:07.406 Removing: /var/run/dpdk/spdk0/hugepage_info 00:37:07.406 Removing: /var/run/dpdk/spdk1/config 00:37:07.406 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:37:07.406 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:37:07.406 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:37:07.406 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:37:07.406 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:37:07.406 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:37:07.406 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:37:07.406 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:37:07.406 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:37:07.406 Removing: /var/run/dpdk/spdk1/hugepage_info 00:37:07.406 Removing: /var/run/dpdk/spdk2/config 00:37:07.406 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:37:07.406 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:37:07.406 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:37:07.406 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:37:07.406 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:37:07.406 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:37:07.665 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:37:07.665 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:37:07.665 Removing: 
/var/run/dpdk/spdk2/fbarray_memzone 00:37:07.665 Removing: /var/run/dpdk/spdk2/hugepage_info 00:37:07.665 Removing: /var/run/dpdk/spdk3/config 00:37:07.665 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:37:07.665 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:37:07.665 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:37:07.665 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:37:07.665 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:37:07.665 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:37:07.665 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:37:07.665 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:37:07.665 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:37:07.665 Removing: /var/run/dpdk/spdk3/hugepage_info 00:37:07.665 Removing: /var/run/dpdk/spdk4/config 00:37:07.665 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:37:07.665 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:37:07.665 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:37:07.665 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:37:07.665 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:37:07.665 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:37:07.665 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:37:07.666 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:37:07.666 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:37:07.666 Removing: /var/run/dpdk/spdk4/hugepage_info 00:37:07.666 Removing: /dev/shm/bdev_svc_trace.1 00:37:07.666 Removing: /dev/shm/nvmf_trace.0 00:37:07.666 Removing: /dev/shm/spdk_tgt_trace.pid4064961 00:37:07.666 Removing: /var/run/dpdk/spdk0 00:37:07.666 Removing: /var/run/dpdk/spdk1 00:37:07.666 Removing: /var/run/dpdk/spdk2 00:37:07.666 Removing: /var/run/dpdk/spdk3 00:37:07.666 Removing: /var/run/dpdk/spdk4 00:37:07.666 Removing: /var/run/dpdk/spdk_pid100198 00:37:07.666 Removing: /var/run/dpdk/spdk_pid101173 00:37:07.666 Removing: /var/run/dpdk/spdk_pid101542 00:37:07.666 Removing: /var/run/dpdk/spdk_pid103742 00:37:07.666 Removing: /var/run/dpdk/spdk_pid104229 00:37:07.666 Removing: /var/run/dpdk/spdk_pid104947 00:37:07.666 Removing: /var/run/dpdk/spdk_pid109466 00:37:07.666 Removing: /var/run/dpdk/spdk_pid115304 00:37:07.666 Removing: /var/run/dpdk/spdk_pid115305 00:37:07.666 Removing: /var/run/dpdk/spdk_pid115306 00:37:07.666 Removing: /var/run/dpdk/spdk_pid119561 00:37:07.666 Removing: /var/run/dpdk/spdk_pid128936 00:37:07.666 Removing: /var/run/dpdk/spdk_pid133062 00:37:07.666 Removing: /var/run/dpdk/spdk_pid139498 00:37:07.666 Removing: /var/run/dpdk/spdk_pid140635 00:37:07.666 Removing: /var/run/dpdk/spdk_pid142401 00:37:07.666 Removing: /var/run/dpdk/spdk_pid144110 00:37:07.666 Removing: /var/run/dpdk/spdk_pid149091 00:37:07.666 Removing: /var/run/dpdk/spdk_pid153923 00:37:07.666 Removing: /var/run/dpdk/spdk_pid158201 00:37:07.666 Removing: /var/run/dpdk/spdk_pid166532 00:37:07.666 Removing: /var/run/dpdk/spdk_pid166548 00:37:07.666 Removing: /var/run/dpdk/spdk_pid171722 00:37:07.666 Removing: /var/run/dpdk/spdk_pid171947 00:37:07.666 Removing: /var/run/dpdk/spdk_pid172175 00:37:07.666 Removing: /var/run/dpdk/spdk_pid172626 00:37:07.666 Removing: /var/run/dpdk/spdk_pid172632 00:37:07.666 Removing: /var/run/dpdk/spdk_pid177365 00:37:07.666 Removing: /var/run/dpdk/spdk_pid177927 00:37:07.925 Removing: /var/run/dpdk/spdk_pid182779 00:37:07.925 Removing: /var/run/dpdk/spdk_pid185455 00:37:07.925 Removing: /var/run/dpdk/spdk_pid191269 
00:37:07.925 Removing: /var/run/dpdk/spdk_pid197611 00:37:07.925 Removing: /var/run/dpdk/spdk_pid206582 00:37:07.925 Removing: /var/run/dpdk/spdk_pid214152 00:37:07.925 Removing: /var/run/dpdk/spdk_pid214177 00:37:07.925 Removing: /var/run/dpdk/spdk_pid22542 00:37:07.925 Removing: /var/run/dpdk/spdk_pid234494 00:37:07.925 Removing: /var/run/dpdk/spdk_pid235161 00:37:07.925 Removing: /var/run/dpdk/spdk_pid235642 00:37:07.925 Removing: /var/run/dpdk/spdk_pid236187 00:37:07.925 Removing: /var/run/dpdk/spdk_pid236852 00:37:07.925 Removing: /var/run/dpdk/spdk_pid237429 00:37:07.925 Removing: /var/run/dpdk/spdk_pid237998 00:37:07.925 Removing: /var/run/dpdk/spdk_pid238473 00:37:07.925 Removing: /var/run/dpdk/spdk_pid243100 00:37:07.925 Removing: /var/run/dpdk/spdk_pid243487 00:37:07.925 Removing: /var/run/dpdk/spdk_pid250256 00:37:07.925 Removing: /var/run/dpdk/spdk_pid250318 00:37:07.925 Removing: /var/run/dpdk/spdk_pid256217 00:37:07.925 Removing: /var/run/dpdk/spdk_pid260720 00:37:07.925 Removing: /var/run/dpdk/spdk_pid270872 00:37:07.925 Removing: /var/run/dpdk/spdk_pid271433 00:37:07.925 Removing: /var/run/dpdk/spdk_pid276059 00:37:07.925 Removing: /var/run/dpdk/spdk_pid276436 00:37:07.925 Removing: /var/run/dpdk/spdk_pid281021 00:37:07.925 Removing: /var/run/dpdk/spdk_pid28311 00:37:07.925 Removing: /var/run/dpdk/spdk_pid287111 00:37:07.925 Removing: /var/run/dpdk/spdk_pid289784 00:37:07.925 Removing: /var/run/dpdk/spdk_pid301051 00:37:07.925 Removing: /var/run/dpdk/spdk_pid310643 00:37:07.925 Removing: /var/run/dpdk/spdk_pid312262 00:37:07.925 Removing: /var/run/dpdk/spdk_pid313136 00:37:07.925 Removing: /var/run/dpdk/spdk_pid330658 00:37:07.925 Removing: /var/run/dpdk/spdk_pid334939 00:37:07.925 Removing: /var/run/dpdk/spdk_pid337711 00:37:07.925 Removing: /var/run/dpdk/spdk_pid34538 00:37:07.925 Removing: /var/run/dpdk/spdk_pid346484 00:37:07.925 Removing: /var/run/dpdk/spdk_pid346489 00:37:07.925 Removing: /var/run/dpdk/spdk_pid352108 00:37:07.925 Removing: /var/run/dpdk/spdk_pid354055 00:37:07.925 Removing: /var/run/dpdk/spdk_pid355999 00:37:07.925 Removing: /var/run/dpdk/spdk_pid357242 00:37:07.925 Removing: /var/run/dpdk/spdk_pid359194 00:37:07.925 Removing: /var/run/dpdk/spdk_pid360251 00:37:07.925 Removing: /var/run/dpdk/spdk_pid369880 00:37:07.925 Removing: /var/run/dpdk/spdk_pid370332 00:37:07.925 Removing: /var/run/dpdk/spdk_pid370991 00:37:07.925 Removing: /var/run/dpdk/spdk_pid373382 00:37:07.925 Removing: /var/run/dpdk/spdk_pid373939 00:37:07.925 Removing: /var/run/dpdk/spdk_pid374505 00:37:07.925 Removing: /var/run/dpdk/spdk_pid378561 00:37:07.925 Removing: /var/run/dpdk/spdk_pid378774 00:37:07.925 Removing: /var/run/dpdk/spdk_pid380274 00:37:07.925 Removing: /var/run/dpdk/spdk_pid380739 00:37:07.925 Removing: /var/run/dpdk/spdk_pid380888 00:37:07.925 Removing: /var/run/dpdk/spdk_pid4062831 00:37:07.925 Removing: /var/run/dpdk/spdk_pid4063889 00:37:07.925 Removing: /var/run/dpdk/spdk_pid4064961 00:37:07.925 Removing: /var/run/dpdk/spdk_pid4065717 00:37:07.925 Removing: /var/run/dpdk/spdk_pid4066646 00:37:07.925 Removing: /var/run/dpdk/spdk_pid4066834 00:37:07.925 Removing: /var/run/dpdk/spdk_pid4068165 00:37:07.925 Removing: /var/run/dpdk/spdk_pid4068239 00:37:08.184 Removing: /var/run/dpdk/spdk_pid4068588 00:37:08.184 Removing: /var/run/dpdk/spdk_pid4070082 00:37:08.184 Removing: /var/run/dpdk/spdk_pid4071450 00:37:08.184 Removing: /var/run/dpdk/spdk_pid4071847 00:37:08.184 Removing: /var/run/dpdk/spdk_pid4072073 00:37:08.184 Removing: /var/run/dpdk/spdk_pid4072447 
00:37:08.184 Removing: /var/run/dpdk/spdk_pid4072739 00:37:08.184 Removing: /var/run/dpdk/spdk_pid4072994 00:37:08.184 Removing: /var/run/dpdk/spdk_pid4073238 00:37:08.184 Removing: /var/run/dpdk/spdk_pid4073517 00:37:08.184 Removing: /var/run/dpdk/spdk_pid4074257 00:37:08.184 Removing: /var/run/dpdk/spdk_pid4077223 00:37:08.184 Removing: /var/run/dpdk/spdk_pid4077693 00:37:08.184 Removing: /var/run/dpdk/spdk_pid4077748 00:37:08.184 Removing: /var/run/dpdk/spdk_pid4077949 00:37:08.184 Removing: /var/run/dpdk/spdk_pid4078305 00:37:08.184 Removing: /var/run/dpdk/spdk_pid4078451 00:37:08.184 Removing: /var/run/dpdk/spdk_pid4078939 00:37:08.184 Removing: /var/run/dpdk/spdk_pid4078947 00:37:08.184 Removing: /var/run/dpdk/spdk_pid4079208 00:37:08.184 Removing: /var/run/dpdk/spdk_pid4079383 00:37:08.184 Removing: /var/run/dpdk/spdk_pid4079490 00:37:08.184 Removing: /var/run/dpdk/spdk_pid4079695 00:37:08.184 Removing: /var/run/dpdk/spdk_pid4080108 00:37:08.184 Removing: /var/run/dpdk/spdk_pid4080296 00:37:08.184 Removing: /var/run/dpdk/spdk_pid4080624 00:37:08.184 Removing: /var/run/dpdk/spdk_pid4084988 00:37:08.184 Removing: /var/run/dpdk/spdk_pid4089730 00:37:08.184 Removing: /var/run/dpdk/spdk_pid4100154 00:37:08.184 Removing: /var/run/dpdk/spdk_pid4100839 00:37:08.184 Removing: /var/run/dpdk/spdk_pid4105581 00:37:08.184 Removing: /var/run/dpdk/spdk_pid4105830 00:37:08.184 Removing: /var/run/dpdk/spdk_pid4110689 00:37:08.184 Removing: /var/run/dpdk/spdk_pid4117413 00:37:08.184 Removing: /var/run/dpdk/spdk_pid4120222 00:37:08.184 Removing: /var/run/dpdk/spdk_pid4131217 00:37:08.184 Removing: /var/run/dpdk/spdk_pid41405 00:37:08.184 Removing: /var/run/dpdk/spdk_pid4140981 00:37:08.184 Removing: /var/run/dpdk/spdk_pid4142796 00:37:08.184 Removing: /var/run/dpdk/spdk_pid4143706 00:37:08.184 Removing: /var/run/dpdk/spdk_pid41466 00:37:08.184 Removing: /var/run/dpdk/spdk_pid4162439 00:37:08.184 Removing: /var/run/dpdk/spdk_pid4166792 00:37:08.184 Removing: /var/run/dpdk/spdk_pid42208 00:37:08.184 Removing: /var/run/dpdk/spdk_pid43054 00:37:08.184 Removing: /var/run/dpdk/spdk_pid43957 00:37:08.184 Removing: /var/run/dpdk/spdk_pid44497 00:37:08.184 Removing: /var/run/dpdk/spdk_pid44639 00:37:08.184 Removing: /var/run/dpdk/spdk_pid44863 00:37:08.184 Removing: /var/run/dpdk/spdk_pid44879 00:37:08.184 Removing: /var/run/dpdk/spdk_pid44916 00:37:08.184 Removing: /var/run/dpdk/spdk_pid45784 00:37:08.184 Removing: /var/run/dpdk/spdk_pid46684 00:37:08.184 Removing: /var/run/dpdk/spdk_pid47591 00:37:08.184 Removing: /var/run/dpdk/spdk_pid48051 00:37:08.184 Removing: /var/run/dpdk/spdk_pid48159 00:37:08.184 Removing: /var/run/dpdk/spdk_pid48494 00:37:08.184 Removing: /var/run/dpdk/spdk_pid49499 00:37:08.184 Removing: /var/run/dpdk/spdk_pid50521 00:37:08.184 Removing: /var/run/dpdk/spdk_pid59199 00:37:08.184 Removing: /var/run/dpdk/spdk_pid88527 00:37:08.184 Removing: /var/run/dpdk/spdk_pid93503 00:37:08.184 Removing: /var/run/dpdk/spdk_pid95087 00:37:08.185 Removing: /var/run/dpdk/spdk_pid97335 00:37:08.185 Removing: /var/run/dpdk/spdk_pid97432 00:37:08.185 Removing: /var/run/dpdk/spdk_pid97660 00:37:08.444 Removing: /var/run/dpdk/spdk_pid97847 00:37:08.444 Removing: /var/run/dpdk/spdk_pid98400 00:37:08.444 Clean 00:37:08.444 10:15:17 -- common/autotest_common.sh@1453 -- # return 0 00:37:08.444 10:15:17 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:37:08.444 10:15:17 -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:08.444 10:15:17 -- common/autotest_common.sh@10 -- # set +x 00:37:08.444 
10:15:17 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:37:08.444 10:15:17 -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:08.444 10:15:17 -- common/autotest_common.sh@10 -- # set +x 00:37:08.444 10:15:17 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:08.444 10:15:17 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:37:08.444 10:15:17 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:37:08.444 10:15:17 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:37:08.444 10:15:17 -- spdk/autotest.sh@398 -- # hostname 00:37:08.444 10:15:17 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-03 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:37:08.703 geninfo: WARNING: invalid characters removed from testname! 00:37:30.626 10:15:38 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:32.001 10:15:41 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:33.902 10:15:43 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:35.803 10:15:45 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:37.703 10:15:46 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:39.604 10:15:48 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:41.630 10:15:50 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:37:41.630 10:15:50 -- spdk/autorun.sh@1 -- $ timing_finish 00:37:41.630 10:15:50 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:37:41.630 10:15:50 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:37:41.630 10:15:50 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:37:41.630 10:15:50 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:41.630 + [[ -n 3984119 ]] 00:37:41.630 + sudo kill 3984119 00:37:41.639 [Pipeline] } 00:37:41.653 [Pipeline] // stage 00:37:41.658 [Pipeline] } 00:37:41.672 [Pipeline] // timeout 00:37:41.677 [Pipeline] } 00:37:41.691 [Pipeline] // catchError 00:37:41.696 [Pipeline] } 00:37:41.712 [Pipeline] // wrap 00:37:41.718 [Pipeline] } 00:37:41.731 [Pipeline] // catchError 00:37:41.741 [Pipeline] stage 00:37:41.743 [Pipeline] { (Epilogue) 00:37:41.756 [Pipeline] catchError 00:37:41.758 [Pipeline] { 00:37:41.772 [Pipeline] echo 00:37:41.773 Cleanup processes 00:37:41.780 [Pipeline] sh 00:37:42.064 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:42.065 392566 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:42.078 [Pipeline] sh 00:37:42.360 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:42.361 ++ grep -v 'sudo pgrep' 00:37:42.361 ++ awk '{print $1}' 00:37:42.361 + sudo kill -9 00:37:42.361 + true 00:37:42.372 [Pipeline] sh 00:37:42.654 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:37:54.865 [Pipeline] sh 00:37:55.149 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:37:55.149 Artifacts sizes are good 00:37:55.163 [Pipeline] archiveArtifacts 00:37:55.170 Archiving artifacts 00:37:55.288 [Pipeline] sh 00:37:55.573 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:37:55.586 [Pipeline] cleanWs 00:37:55.595 [WS-CLEANUP] Deleting project workspace... 00:37:55.595 [WS-CLEANUP] Deferred wipeout is used... 00:37:55.602 [WS-CLEANUP] done 00:37:55.604 [Pipeline] } 00:37:55.620 [Pipeline] // catchError 00:37:55.630 [Pipeline] sh 00:37:55.913 + logger -p user.info -t JENKINS-CI 00:37:55.922 [Pipeline] } 00:37:55.934 [Pipeline] // stage 00:37:55.939 [Pipeline] } 00:37:55.952 [Pipeline] // node 00:37:55.956 [Pipeline] End of Pipeline 00:37:55.992 Finished: SUCCESS
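
[Note] For reference, the coverage epilogue above boils down to a standard lcov pipeline: capture the per-test counters, merge them with the pre-build baseline, then strip trees that should not count against coverage. A sketch with the workspace paths shortened (SPDK_DIR stands in for the long /var/jenkins/... prefix; genhtml is only configured via the genhtml_* options here, not actually run):

lcov -q -c --no-external -d "$SPDK_DIR" -t "$(hostname)" -o cov_test.info
lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info   # drop vendored DPDK
lcov -q -r cov_total.info '/usr/*' -o cov_total.info     # drop system headers
# genhtml cov_total.info -o coverage/   # would render the HTML report
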